USMLE Step 1 to be Pass/Fail

Sure, though I contend it wasn't poorly written. Shade aside, I can't say if this decision is certainly good or bad. What I can say is that my professional life focuses on performance evaluation and it's essential to have tests that provide meaningful data, and that the data are relevant to desired outcomes. So:
  1. Does a better score on a single test accurately represent intelligence/dedication/capacity for learning?
  2. Does a better score correlate with being a better resident, and a better physician?
Additionally, the response to this decision here and elsewhere seems to center on two main arguments:
  1. I earned my way to where I'm at through blood, sweat, tears, and unsanctioned bare-knuckle boxing matches. If any aspect of the process changes after I went through it, everyone coming after me is an unqualified snowflake.
  2. I get 1200 applications for 8 slots and I have absolutely no way to evaluate all of those applications. I understand that people are more complex than test scores and that high scores don't always net the best residents/physicians, but I have no other tool by which to evaluate candidates that won't take literally 7 years every application year.
Neither of these arguments really holds water. Change happens, and everyone needs to come to terms with that. Maybe yesterday's physicians look upon today's with disdain because everyone sub-specializes, and back in their day they'd give a patient a swig of whiskey and tell them to bite down hard before amputating a leg using a spoon. Side note: if Jenny McJennyson could pass the test and get a high score, what does that say about Jenny's preparation and the test? As for the PDs/APDs, is board score stratification effective & accurate, or simply a convenient way to not look at 700 applications?

Do ABEM scores correlate to on-the-job performance? Are higher-scoring physicians better, or is everyone accepted equally because they've all passed the test and you evaluate performance based on how they actually do their jobs and not how they take tests? What if EM docs were compensated based solely on board score?


1. Jenny couldn't pass the test.
2. You're missing the fact that the old-timey physicians didn't have the sophisticated knowledge that we now have. Hell, immunology was "new" in the 80s. There were no "biologics" (drugs) until the 90s. There was no "personalized cancer care" until the new millennium. Thus, we need to sub-and-sub-sub-specialize.
3. ABEM scores do correlate to on-the-job performance. You need to know what you need to know.

 
  • Like
Reactions: 1 user
Silly move IMO. In an effort to compromise between "boards are too stressful" and "boards are necessary to stratify students," they basically tried to take a middle ground, but ultimately I think this is worse than either situation. Either move to an all-P/F system for all the boards and just de-value boards altogether, or keep it the same. Doing this makes no sense. It won't decrease stress; now all the stress moves to a single exam with no second chance. And you will have to have that exam done by the summer after 3rd year so results are back by September, or you will get no interviews in many fields. This decision just makes no sense to me.

The one nice thing about this change, though, is that I'd imagine no one will expect DOs to take USMLE Step 1 now. They will likely just be fine taking COMLEX 1/2, followed by USMLE Step 2. It will also be interesting to see if the AOA changes the COMLEX score reporting now as a follow-up to this.

Another thing I can't wait to see is how this pans out for 4th-year rotations. How will any field decide who to let rotate? There will be no Step 2 score by the time AIs are applied for. So many programs have board cutoffs for rotations. So now you can have people getting AIs at places, or in fields, that won't even rank them once their score comes back. Have a Step 2 score of 235 come back after you've done several Ortho AIs: what do you do now? It's too late to change course and do something like EM where you need SLOEs.

The unintended consequences of this decision will be fascinating to watch.
agree, doesn't seem very well thought out
 
1. Jenny couldn't pass the test.
2. You're missing the fact that the old-timey physicians didn't have the sophisticated knowledge that we now have. Hell, immunology was "new" in the 80s. There were no "biologics" (drugs) until the 90s. There was no "personalized cancer care" until the new millennium. Thus, we need to sub-and-sub-sub-specialize.
3. ABEM scores do correlate to on-the-job performance. You need to know what you need to know.

1. Jenny wouldn't hack it in med school.
2. Science is always progressing, and the standards for the MCAT and medical school are more competitive than they have ever been. So why is making the basic science test pass/fail, while the more relevant Step 2 CK stays scored, such a big concern? Also, these personalized care bits are added in residency training; no need for extreme detail in medical school.
3. SLOEs are better so why not just make all clinical evaluations SLOEs?
 
1. Jenny couldn't pass the test.

2. To quote The Incredibles, "When everyone is super, no one will be."
Maybe Jenny would bomb the test. Maybe it doesn't matter because Jenny doesn't have to and sees patients anyway.

Mean step 1 scores have an upward trend. Does this number need to keep going up to determine who is super?
 
1. Jenny wouldn't hack it in med school.
2. Science is always progressing, and the standards for the MCAT and medical school are more competitive than they have ever been. So why is making the basic science test pass/fail, while the more relevant Step 2 CK stays scored, such a big concern? Also, these personalized care bits are added in residency training; no need for extreme detail in medical school.
3. SLOEs are better so why not just make all clinical evaluations SLOEs?
Jenny isn't even applying to med school.
 
1. Jenny couldn't pass the test.
2. You're missing the fact that the old-timey physicians didn't have the sophisticated knowledge that we now have. Hell, immunology was "new" in the 80s. There were no "biologics" (drugs) until the 90s. There was no "personalized cancer care" until the new millennium. Thus, we need to sub-and-sub-sub-specialize.
3. ABEM scores do correlate to on-the-job performance. You need to know what you need to know.
No doubt the science has advanced, but assessment beyond that is outside my scope. The question is how much of that generic information do you need to know to apply to any residency, and does knowing more (and testing better), make you a better resident?
 
It won't really matter. If residency expansion continues, then a Pass and a Pulse will be all that's needed to get into EM. Exactly how HCA wants it.
 
  • Like
Reactions: 4 users
Silly move IMO. In an effort to compromise between "boards are too stressful" and "boards are necessary to stratify students," they basically tried to take a middle ground, but ultimately I think this is worse than either situation. Either move to an all-P/F system for all the boards and just de-value boards altogether, or keep it the same. Doing this makes no sense. It won't decrease stress; now all the stress moves to a single exam with no second chance. And you will have to have that exam done by the summer after 3rd year so results are back by September, or you will get no interviews in many fields. This decision just makes no sense to me.

The one nice thing about this change, though, is that I'd imagine no one will expect DOs to take USMLE Step 1 now. They will likely just be fine taking COMLEX 1/2, followed by USMLE Step 2. It will also be interesting to see if the AOA changes the COMLEX score reporting now as a follow-up to this.

Another thing I can't wait to see is how this pans out for 4th-year rotations. How will any field decide who to let rotate? There will be no Step 2 score by the time AIs are applied for. So many programs have board cutoffs for rotations. So now you can have people getting AIs at places, or in fields, that won't even rank them once their score comes back. Have a Step 2 score of 235 come back after you've done several Ortho AIs: what do you do now? It's too late to change course and do something like EM where you need SLOEs.

The unintended consequences of this decision will be fascinating to watch.


Apparently DOs now have to take USMLE Step 1 in order to be able to take USMLE Step 2; this just recently changed, supposedly. I'm at work or I'd Google it fast and link it. There is also discussion of changing COMLEX 1 and 2 to P/F. Since it seems COMLEX changes always follow USMLE, COMLEX will likely go P/F soon as well. Gonna be a clusterfuk.
 
I'm of a mixed opinion on this one. I agree there are a lot of negative repercussions on the do-or-die Step 1 minutiae memorization of non-clinical factoids, and that it likely isn't the best way to sort out the "pecking order" for residency. Not to mention the... mental health issues it might contribute to. Granted the MCAT isn't too far off.

But the flip side is... those of us who didn't go to a top-10 Ivy League med school.... Step 1 was the great equalizer. I feel like my entire med school class had a chip on its shoulder, and we went hard at the test together. Our class mean score was pretty insanely high, and we were collectively proud of that.

In the end, I'm down with a pass-fail step 1. Step 2 CK is more relevant. And I was much better at it, so I am biased. But I see the domino effects when it comes to away rotations, sub-Is, rank lists, etc. Removing a pseudo-quantitative measuring stick, biased as it may be, does decrease the information the rank-list team is working with.

If you recall the history of residency matching BEFORE "the match," it may be instructive. It was back-room deals, good ol' boy networks, and Ivy League clubs. Quantitative testing is clearly flawed, but I don't want a system where the students able to get into Ivy undergrads fast-track into top med schools and top residencies with a P/F system that actually keeps the underrepresented down, not empowered.

It's a sticky wicket.
 
  • Like
Reactions: 2 users
Apparently DOs now have to take USMLE Step 1 in order to be able to take USMLE Step 2; this just recently changed, supposedly. I'm at work or I'd Google it fast and link it. There is also discussion of changing COMLEX 1 and 2 to P/F. Since it seems COMLEX changes always follow USMLE, COMLEX will likely go P/F soon as well. Gonna be a clusterfuk.

Yeah, I saw the "you must take Step 1 to take Step 2" thing when I re-read the announcement. Really sucks for DOs, who already take COMLEX, to have to take USMLE Step 1 if it's only going to be P/F. Doesn't add anything to their app.
 
If you recall the history of residency matching BEFORE "the match," it may be instructive. It was back-room deals, good ol' boy networks, and Ivy League clubs. Quantitative testing is clearly flawed, but I don't want a system where the students able to get into Ivy undergrads fast-track into top med schools and top residencies with a P/F system that actually keeps the underrepresented down, not empowered.

It's a sticky wicket.
And that's why looking at quantitative vs. qualitative for Step 1 is a red herring... because the application process itself has major issues, and stratifying based on test scores does nothing to remedy that. Students are told to apply to 100+ programs and that they must get 12 interviews or they're doomed. For those at the top, they probably take their pick of top residencies. But for most, they just throw everything at the wall and see what sticks.
 
This is atrocious
Yeah, I saw the "you must take Step 1 to take Step 2" thing when I re-read the announcement. Really sucks for DOs, who already take COMLEX, to have to take USMLE Step 1 if it's only going to be P/F. Doesn't add anything to their app.

Sent from my Pixel 3 using SDN mobile
 
I’d rather have medical students who perform better on CK than Step 1 anyway (it's a better barometer of actual clinical knowledge), so I don't hate the P/F nature of Step 1, but I totally agree that this will just push the arms race to CK instead of Step 1.
People who did well on Step 1 almost invariably did well on Step 2. That's the nature of good study habits and test-taking skills. It's not an either/or thing.
 
If you recall the history of residency matching BEFORE "the match," it may be instructive. It was back-room deals, good ol' boy networks, and Ivy League clubs. Quantitative testing is clearly flawed, but I don't want a system where the students able to get into Ivy undergrads fast-track into top med schools and top residencies with a P/F system that actually keeps the underrepresented down, not empowered.

This this this this this this THIS. That's my biggest concern/fear by far. 100% feel this is what this change might lead to.
 
I am inclined to agree with you that the pre-clinical curriculum could be trimmed.
Where to trim it?

I'll speculum (speculate).
I can be wrong.

We could trim biochemistry altogether. You REALLY should have this in-hand by the time you get accepted. Should be a pre-requisite, really.
We could trim a bit of immunology. Nobody really needs to painfully memorize the complement cascade to work clinically.
We could trim some of genetics. The diseases are interesting, but I've yet to encounter a case of Oldsmobile-Jaegermeister disease in my career, and probably never will. And I sure as hell don't need to memorize WHICH chromosome it's located on.

You might survive in EM without biochemistry, but you won’t last a day in many of the IM subspecialties (oncology, ID, etc.). You’ll also struggle in the ICU.

Many schools have cut their preclinical curriculum back to 18 months. There is already a movement in MedEd circles advocating “novel” 3-year curricula for primary care and generalist fields of study (minimize educational costs, shorten the labor supply pipeline, etc.). EM is one of those fields in the experimental phase of 3&3 training programs leading to ABEM certification 6 years from the first day of med school. Do you really want to shorten the training pipeline and give the “Jennies” out there more ammo for their equivalency argument?
 
  • Like
Reactions: 1 user
You might survive in EM without biochemistry, but you won’t last a day in many of the IM subspecialties (oncology, ID, etc.). You’ll also struggle in the ICU.

Many schools have cut their preclinical curriculum back to 18 months. There is already a movement in MedEd circles advocating “novel” 3-year curricula for primary care and generalist fields of study (minimize educational costs, shorten the labor supply pipeline, etc.). EM is one of those fields in the experimental phase of 3&3 training programs leading to ABEM certification 6 years from the first day of med school. Do you really want to shorten the training pipeline and give the “Jennies” out there more ammo for their equivalency argument?

Oh, I don't disagree; but simple biochem really should be a pre-requisite.

As far as the Jennies go, they never took any basic sciences anyways. Their NP curriculum is lacking things like "pathology" and instead has lots of fluff like "Advanced Feelings and how not to Hurt Them."

They can claim equivalence when they can pass the STEPS. Never before that.
 
Oh, I don't disagree; but simple biochem really should be a pre-requisite.

As far as the Jennies go, they never took any basic sciences anyways. Their NP curriculum is lacking things like "pathology" and instead has lots of fluff like "Advanced Feelings and how not to Hurt Them."

They can claim equivalence when they can pass the STEPS. Never before that.
I think anyone that is getting married should have to audit that class
 
  • Like
Reactions: 1 users
Yeah, I saw the "you must take Step 1 to take Step 2" thing when I re-read the announcement. Really sucks for DOs, who already take COMLEX, to have to take USMLE Step 1 if it's only going to be P/F. Doesn't add anything to their app.

I think Step 1 is a pre-requisite for Step 2 CS, not CK.
 
  • Like
Reactions: 1 user
Do ABEM scores correlate to on-the-job performance? Are higher-scoring physicians better, or is everyone accepted equally because they've all passed the test and you evaluate performance based on how they actually do their jobs and not how they take tests? What if EM docs were compensated based solely on board score?
Kinda. The ones that fail or never took it are worse than the ones that pass it.

That being said, the thing they wanted to do to help differentiate people was resoundingly shot down (video interviews).
I have a feeling that residency is going to become like job-search sites, where people are matched to programs rather than everyone applying everywhere.
 
Oh, I don't disagree; but simple biochem really should be a pre-requisite.

This was a common feeling among the Undergraduate Medical Education Committee members that set the curriculum at my last academic job. There was always a push-pull between the UMEC and Admissions Committee since one wanted applicants who could complete the curriculum and pass the USMLE, while the other seemed to be heavily influenced by metrics other than ability. Increasing the number of prerequisites would reduce the number of art, music, and interpretive dance majors applying to medical school and the Admissions Committee might not meet their diversity quotas...o_O
 
People who did well on Step 1 almost invariably did well on Step 2. That's the nature of good study habits and test-taking skills. It's not an either/or thing.

On average, I think most people improve on their Step 1 score with Step 2. The national mean for Step 1 is around 228ish and for Step 2 around 240ish.
 
Interesting to see how programs will rate applicants.

I liked the emphasis on Step 1 since my school was P/F + quartile percentage for the basic science years. I studied and rocked the test by 2 SD.

The rest was easy.

Step 2 didn't really matter since it came out after interviews.

So now either med schools are going to have to come up with endless subjective grading criteria for the basic science and clinical years to provide to residency programs, or it's all on Step 2, which comes during 4th-year party time.

Seems like less of an incentive for talented students to excel.
 
With a lot of schools going pure P/F for grades, Step 1 going P/F, COMLEX talking about going P/F, and likely Step 2 going P/F, I'm not sure what objective criteria programs are going to use to rank applicants. I fear it's going to be all about the prestige of the school and other subjective nonsense. It is going to suck to come from a low-tier school, a DO school, or to be an IMG.
I'll be honest, one of the main reasons I got into an EM/IM/CC program on the MD side at the time was because of my Step 1 and Step 2 scores as well as class rank. With that stuff gone, I'd have been screwed coming from a new DO program. We had little to no research opportunities, very few, if any, well-known faculty, and poor clinical rotation sites with little opportunity for networking, research, etc. It would have sucked nads and been a huge uphill battle. I'd have had to network through ACEP/EMRA more, kissed more butt, done more audition rotations, attempted aways at more legit programs, etc.

I feel for the new med students. It is going to be more stressful and annoying to match, tuition is going up, reimbursements are going down, the market is getting flooded, midlevels are expanding their scope, and Press Ganey is ridiculous. Not a good time to be starting on the path. I feel lucky to be this far down the road even though I am a fresh grad. At least I can likely pay my **** off before things tank. A new med student or even new resident could be in for a world of hurt.
 
STEP1 is supposed to be the tool by which to objectively evaluate academic horsepower. CK just doesn't do it like STEP1 does.

Guaranteed this will lead to Jenny McJennysons trying to portray themselves as equivalent even more.
Of course the problem is that the USMLE has always been designed as a test of minimum competence and not as a magic sorting hat of medical knowledge. The bigger problem is the lack of other metrics that forces residencies to co-opt the examination.
 
  • Like
Reactions: 1 users
With a lot of schools going pure P/F for grades, Step 1 going P/F, COMLEX talking about going P/F, and likely Step 2 going P/F, I'm not sure what objective criteria programs are going to use to rank applicants. I fear it's going to be all about the prestige of the school and other subjective nonsense. It is going to suck to come from a low-tier school, a DO school, or to be an IMG.
I'll be honest, one of the main reasons I got into an EM/IM/CC program on the MD side at the time was because of my Step 1 and Step 2 scores as well as class rank. With that stuff gone, I'd have been screwed coming from a new DO program. We had little to no research opportunities, very few, if any, well-known faculty, and poor clinical rotation sites with little opportunity for networking, research, etc. It would have sucked nads and been a huge uphill battle. I'd have had to network through ACEP/EMRA more, kissed more butt, done more audition rotations, attempted aways at more legit programs, etc.

I feel for the new med students. It is going to be more stressful and annoying to match, tuition is going up, reimbursements are going down, the market is getting flooded, midlevels are expanding their scope, and Press Ganey is ridiculous. Not a good time to be starting on the path. I feel lucky to be this far down the road even though I am a fresh grad. At least I can likely pay my **** off before things tank. A new med student or even new resident could be in for a world of hurt.
wow thanks holy **** if i didnt already feel this way

again, thanks
 
Of course the problem is that the USMLE has always been designed as a test of minimum competence and not as a magic sorting hat of medical knowledge. The bigger problem is the lack of other metrics that forces residencies to co-opt the examination.

I somewhat agree.

I somewhat disagree.

Look; the more medical knowledge that you can arrest and wield on STEP1 indicates that you CAN indeed arrest and wield that much medical knowledge. That bodes well for any specialty that you're trying to match into.

Then comes residency, where you need to do the same with the actual clinical practice of medicine/surgery when the rubber meets the road.

The more you can chew up and spit out, the more you can chew up and spit out.

Keep. It. Simple. Stupid.
 
I somewhat agree.

I somewhat disagree.

I don't see what there is to disagree about since both a prior NBME vice-president (Melnick) and chair (First) cowrote an editorial back in 2016 where they stated, "There is an increasingly pervasive practice of using the USMLE score, especially the Step 1 component, to screen applicants for residency. This is despite the fact that the test was not designed to be a primary determinant of the likelihood of success in residency." It's like using FOBT to screen for acute GI bleeds. It's not what it was designed to be used for, but everyone uses it anyways... because reasons (actually, USMLE to screen applicants has more use than FOBTs in acutely ill patients).
 
  • Like
Reactions: 2 users
Seems like less of an incentive for talented students to excel.
You've now described all of residency.
Work harder, get more work.

Unless you've got personal drive.
 
  • Like
Reactions: 1 users
One thing not mentioned here is that Step 1 scores have been shown to correlate with performance on specialty boards. I would think that’s a very important question for program directors to consider, as they all want to ensure their residents can pass the specialty boards on the first attempt.


Sent from my iPhone using Tapatalk
 
  • Like
Reactions: 1 user
I don't see what there is to disagree about since both a prior NBME vice-president (Melnick) and chair (First) cowrote an editorial back in 2016 where they stated, "There is an increasingly pervasive practice of using the USMLE score, especially the Step 1 component, to screen applicants for residency. This is despite the fact that the test was not designed to be a primary determinant of the likelihood of success in residency." It's like using FOBT to screen for acute GI bleeds. It's not what it was designed to be used for, but everyone uses it anyways... because reasons (actually, USMLE to screen applicants has more use than FOBTs in acutely ill patients).

I get you.
But you still need a standardized and objective measuring stick to separate the men from the boys.
And no, evals are not objective.
 
I get you.
But you still need a standardized and objective measuring stick to separate the men from the boys.
And no, evals are not objective.
I don't disagree. We need an MCAT-style test (one designed from the ground up to sort people) for residency. Maybe Steps 1 and 2 need to be combined, with the exam taken in June of 3rd year or July of 4th year (early enough for scores to be available for ERAS).
 
On average, I think most people improve on their Step 1 score with Step 2. The national mean for Step 1 is around 228ish and for Step 2 around 240ish.
They are two different tests. Just because they use the same numbers to report scores doesn't mean they have the same scale (e.g., 75% right on Step 1 is a 240, 75% on Step 2 is a 250). The percentile is what matters, and I would wager that most people score at a similar percentile on each test. I know I did, despite my Step 2 score being numerically higher.
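To make the percentile point concrete, here is a minimal sketch assuming two roughly normal score distributions; the means and SDs are illustrative guesses, not official NBME figures. It just shows that the same percentile lands at a different three-digit number when the distributions differ.

```python
from statistics import NormalDist

# Illustrative, assumed distributions (NOT official NBME parameters).
step1 = NormalDist(mu=230, sigma=19)   # assumed Step 1 mean/SD
step2 = NormalDist(mu=243, sigma=18)   # assumed Step 2 CK mean/SD

for pct in (0.50, 0.75, 0.90):
    s1 = step1.inv_cdf(pct)            # score at this percentile on Step 1
    s2 = step2.inv_cdf(pct)            # score at this percentile on Step 2
    print(f"{int(pct * 100)}th percentile: Step 1 ~{s1:.0f}, Step 2 ~{s2:.0f}")
```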
 
I don't see what there is to disagree about since both a prior NBME vice-president (Melnick) and chair (First) cowrote an editorial back in 2016 where they stated, "There is an increasingly pervasive practice of using the USMLE score, especially the Step 1 component, to screen applicants for residency. This is despite the fact that the test was not designed to be a primary determinant of the likelihood of success in residency." It's like using FOBT to screen for acute GI bleeds. It's not what it was designed to be used for, but everyone uses it anyways... because reasons (actually, USMLE to screen applicants has more use than FOBTs in acutely ill patients).

I’m not aware of any program that uses USMLE scores as a primary determinant of future success. That would mean that programs ranked candidates from highest to lowest based primarily on test scores, and only considered other factors to distinguish candidates having identical scores. To the best of my knowledge, that is not happening anywhere.

Instead, some programs primarily use the USMLE to set their own minimal threshold of future probability at passing speciality board qualifying exams. I have absolutely no problem with that.
 
  • Like
Reactions: 1 user
They are two different tests. Just because they use the same numbers to report scores doesn't mean they have the same scale (e.g., 75% right on Step 1 is a 240, 75% on Step 2 is a 250). The percentile is what matters, and I would wager that most people score at a similar percentile on each test. I know I did, despite my Step 2 score being numerically higher.

The scale, 1-300, is the same between the two tests. What is different is the pass threshold and the rough mean that is published. However, I’d be careful about trying to estimate a percentile based on the rough SD that is given. The NBME did away with percentiles years ago and we have no way of knowing the true distribution around that rough mean.

Having said that, my only point is that most people will get a numerically higher score on Step 2. Whether certain programs account for this fact in their evaluation of USMLE performance is probably variable. We did not: specific points were awarded for scores > 250, 240-249, etc., regardless of whether it was Step 1 or 2 (along with points for AOA, research, interview performance, etc.).
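For what it's worth, a rubric like the one described above is easy to picture as a simple scoring function. This is a hypothetical sketch only: the score bands, point values, and other factors are invented for illustration, not any program's actual rubric.

```python
# Hypothetical points-based ranking rubric (all values invented for illustration).
def step_points(score: int) -> int:
    """Same bands applied to either Step score, per the approach described above."""
    if score >= 250:
        return 3
    if score >= 240:
        return 2
    if score >= 230:
        return 1
    return 0

def applicant_points(step1: int, step2: int, aoa: bool,
                     research_items: int, interview_score: int) -> int:
    total = step_points(step1) + step_points(step2)
    total += 2 if aoa else 0               # hypothetical AOA bonus
    total += min(research_items, 3)        # cap research credit
    total += interview_score               # e.g., 0-5 from the interview day
    return total

# Example: a strong Step 2 partly rescues a middling Step 1 under this kind of rubric.
print(applicant_points(step1=238, step2=251, aoa=False,
                       research_items=2, interview_score=4))  # 1 + 3 + 0 + 2 + 4 = 10
```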
 
I’m not aware of any program that uses USMLE scores as a primary determinant of future success. That would mean that programs ranked candidates from highest to lowest based primarily on test scores, and only considered other factors to distinguish candidates having identical scores. To the best of my knowledge, that is not happening anywhere.

Instead, some programs primarily use the USMLE to set their own minimal threshold of future probability at passing speciality board qualifying exams. I have absolutely no problem with that.

Correct. I agree that Step scores shouldn't be a primary determinant of how to rank candidates. I agree this wasn't what they were designed for. But I can't imagine there are many programs in any field that use Step scores as the primary determinant of how they rank their list (i.e., ranking people from high score to low score in order).
 
1. Jenny wouldn't hack it in med school.
2. Science is always progressing, and the standards for the MCAT and medical school are more competitive than they have ever been. So why is making the basic science test pass/fail, while the more relevant Step 2 CK stays scored, such a big concern? Also, these personalized care bits are added in residency training; no need for extreme detail in medical school.
3. SLOEs are better so why not just make all clinical evaluations SLOEs?

Number 3 certainly makes sense...I’m surprised that the idea wasn’t adopted by other specialties.
 
Number 3 certainly makes sense...I’m surprised that the idea wasn’t adopted by other specialties.

Contrary to what you might believe, students do have to make decisions based on their grades.

In an all-SLOE system, when will they find out how competitive they are? Never. Well, they'll find out when Billy gets 0 interview invites for his Ortho application. Guess every applicant to every specialty is applying with FM backups, because apparently every single test is P/F and every single eval is hidden.
 
  • Like
Reactions: 2 users
Contrary to what you might believe, students do have to make decisions based on their grades.

In an all-SLOE system, when will they find out how competitive they are? Never. Well, they'll find out when Billy gets 0 interview invites for his Ortho application. Guess every applicant to every specialty is applying with FM backups, because apparently every single test is P/F and every single eval is hidden.

So much this.
 
  • Like
Reactions: 2 users
Contrary to what you might believe, students do have to make decisions based on their grades.

In an all-SLOE system, when will they find out how competitive they are? Never. Well, they'll find out when Billy gets 0 interview invites for his Ortho application. Guess every applicant to every specialty is applying with FM backups, because apparently every single test is P/F and every single eval is hidden.
Jeezy, simmer down...
 
Contrary to what you might believe, students do have to make decisions based on their grades.

In an all-SLOE system, when will they find out how competitive they are? Never. Well, they'll find out when Billy gets 0 interview invites for his Ortho application. Guess every applicant to every specialty is applying with FM backups, because apparently every single test is P/F and every single eval is hidden.

The same thing happens now: Billy gets 3 Ortho interviews, and plenty of competitive applicants don’t match, so I don’t see the big deal. Those Ortho spots will be filled one way or another.
 
Anyone know if there have ever been attempts to employ standardized testing that provides meaningful insight for residency match? Thinking something that assesses for resilience, checks how one's values and work ethic actually align with a given specialty, and maybe includes a personality inventory. Those things tend not to change with repeat testing.

I'm inclined to believe there's a gap between what medical students think they want vs how it actually pans out, plus programs need a quick way to compare applicants. Wondering if a new type of test could help lead to an actual match and less outright competition for lucrative jobs. Such a test would also avoid substantially increasing the expenses applicants incur through the SLOE-based system. The results would be available to applicants and they might actually learn something about themselves. Thoughts?
 
Anyone know if there have ever been attempts to employ standardized testing that provides meaningful insight for residency match? Thinking something that assesses for resilience, checks how one's values and work ethic actually align with a given specialty, and maybe includes a personality inventory. Those things tend not to change with repeat testing.

I'm inclined to believe there's a gap between what medical students think they want vs how it actually pans out, plus programs need a quick way to compare applicants. Wondering if a new type of test could help lead to an actual match and less outright competition for lucrative jobs. Such a test would also avoid substantially increasing the expenses applicants incur through the SLOE-based system. The results would be available to applicants and they might actually learn something about themselves. Thoughts?
This sounds like a subjective nightmare

SVI 2.0 cometh

 
This sounds like a subjective nightmare

SVI 2.0 cometh
The SVI introduced other concepts, like how someone looks or how they sound when answering questions with little preparation. A tool like that is a precursor to or replacement for a standard in-person interview. Plus they were subjectively graded by non-medical folks. That's a lot of chaos to consider and not what I'm asking/suggesting.

Rather, personality inventories like MBTI have a long history. And specialties are distinct in their setting, tempo, required skills, level of interaction with others... reasonable to assume these things could be generally agreed upon within a specialty or at least PDs could decide which traits they want. It's the same thing they do during interviews, but with objective support. Plus way more scalable.
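Purely as a thought experiment, here is what "PDs decide which traits they want" could look like if it were reduced to a number. Everything in this sketch is hypothetical: the trait names, the weights, and the assumption that an inventory could produce reliable 0-1 trait scores in the first place.

```python
# Hypothetical trait-fit score: weighted average of an applicant's inventory
# results over the traits a program says it values. All names/weights invented.
def trait_fit(applicant: dict, program_weights: dict) -> float:
    total_weight = sum(program_weights.values())
    return sum(applicant.get(trait, 0.0) * weight
               for trait, weight in program_weights.items()) / total_weight

em_program = {"tolerates_interruptions": 3, "rapid_task_switching": 3,
              "team_communication": 2, "shift_work_resilience": 2}
applicant = {"tolerates_interruptions": 0.9, "rapid_task_switching": 0.7,
             "team_communication": 0.8, "shift_work_resilience": 0.6}

print(round(trait_fit(applicant, em_program), 2))  # 0.76
```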
 
The SVI introduced other concepts, like how someone looks or how they sound when answering questions with little preparation. A tool like that is a precursor to or replacement for a standard in-person interview. Plus they were subjectively graded by non-medical folks. That's a lot of chaos to consider and not what I'm asking/suggesting.

Rather, personality inventories like MBTI have a long history. And specialties are distinct in their setting, tempo, required skills, level of interaction with others... reasonable to assume these things could be generally agreed upon within a specialty or at least PDs could decide which traits they want. It's the same thing they do during interviews, but with objective support. Plus way more scalable.
Competitive specialties and programs need an objective test to stratify candidates; now that test will just be CK.

The MBTI/personality/everyone-gets-a-trophy tests would never be used; that's what the interview is for.

Step 1 was a beast, but I respected the grind, the opportunity to get ahead.

Now it seems applicant competitiveness will roll further down the rabbit hole with covert SLOE grades and your idea of grading people on "feels"
 
Competitive specialties and programs need an objective test to stratify candidates; now that test will just be CK.

The MBTI/personality/everyone-gets-a-trophy tests would never be used; that's what the interview is for.

Step 1 was a beast, but I respected the grind, the opportunity to get ahead.

Now it seems applicant competitiveness will roll further down the rabbit hole with covert SLOE grades and your idea of grading people on "feels"
Step 1 objectively assesses general medical knowledge - no debate there. It's not applied in a specialty-specific way though, and the stratification of candidates based on scores into different specialties is not particularly helpful. Do you concede that dermatologists or radiologists are more intelligent than emergency med physicians? Step 1 scores say so. Does one need to be more intelligent to be a plastic surgeon than a vascular surgeon? Step 1 scores say so. Why is there so much overlap when specialties are compared? Why do applicants with high scores fail to match? Why is there a minimum score for Step 1 at all? Yeah Step 1 can't explain that. If the complaint is that licensing exams set the bar too low, then advocate for raising the bar entirely and deal with the fallout instead of using Step 1 scores for something they aren't intended for.

Personality typing is far from grading based on "feels." If you want to get rid of feelings-based selection, get rid of personal statements, letters of intent, interviews, and letters of recommendation, including the SLOE. People have different qualities and perspectives. These characteristics guide our interactions with others, how we do our jobs, and how adaptable we are. I certainly didn't make Step 1 pass/fail, but it's happening. I encourage adopting a solutions-focused mindset instead of throwing hands into the air screaming about how wrong this is. With that in mind, do you have any suggestions on how candidates could be better assessed?
 
Step 1 objectively assesses general medical knowledge - no debate there. It's not applied in a specialty-specific way though, and the stratification of candidates based on scores into different specialties is not particularly helpful. Do you concede that dermatologists or radiologists are more intelligent than emergency med physicians? Step 1 scores say so. Does one need to be more intelligent to be a plastic surgeon than a vascular surgeon? Step 1 scores say so. Why is there so much overlap when specialties are compared? Why do applicants with high scores fail to match? Why is there a minimum score for Step 1 at all? Yeah Step 1 can't explain that. If the complaint is that licensing exams set the bar too low, then advocate for raising the bar entirely and deal with the fallout instead of using Step 1 scores for something they aren't intended for.

Personality typing is far from grading based on "feels." If you want to get rid of feelings-based selection, get rid of personal statements, letters of intent, interviews, and letters of recommendation, including the SLOE. People have different qualities and perspectives. These characteristics guide our interactions with others, how we do our jobs, and how adaptable we are. I certainly didn't make Step 1 pass/fail, but it's happening. I encourage adopting a solutions-focused mindset instead of throwing hands into the air screaming about how wrong this is. With that in mind, do you have any suggestions on how candidates could be better assessed?
If you crack 250, all doors are open; some would argue even lower. Vascular and Plastics are both highly competitive.

No, I'm not saying surgeons are smarter than EM; where did I say that?

But yes, the cost of entry is much cheaper for EM than Plastics, Step-score-wise; obviously, more money, more prestige, harder to get in.

The rest of the application is auxiliary; neutering the most objective/important piece of the app is just an odd move to me.

Also, just another obvious point: I don't want someone operating on my brain who was selected based on personality traits lol
 
If you crack 250, all doors are open; some would argue even lower. Vascular and Plastics are both highly competitive.

No, I'm not saying surgeons are smarter than EM; where did I say that?

But yes, the cost of entry is much cheaper for EM than Plastics, Step-score-wise; obviously, more money, more prestige, harder to get in.

The rest of the application is auxiliary; neutering the most objective/important piece of the app is just an odd move to me.

Also, just another obvious point: I don't want someone operating on my brain who was selected based on personality traits lol
Do you have a suggestion on what should be done?
 