Class of 2023 Step 1 Scores May Be Converted to P/F on Residency Application per USMLE Town Hall

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.
Regarding the advising at your school, the conversation would go much differently at my school (mid-low tier). We would be told to consider a research year even with a good Step 1. If anything, I think Step 1 mania helped students from top-tier schools more on average than those at lower schools, despite talk of Step 1 being an equalizer. People tend to forget that top schools have higher Step 1 averages, barring a few exceptions.
That is fair, I won't deny that being a good test-taker is more of a golden ticket now than probably any time prior. From college to medical school and now to residency, it's like the STEM equivalent of being good-looking for an aspiring actor.

But that's actually one of the arguments Carmody has brought up a few times - students at mid and low tier schools should probably not be big fans of Step 1 as a population, because they post lower averages. It's great for the individual who is an outlier in their class, but not really helping DO or lower tier MD schools as a cohort, when U of State is posting 225 avg and Top 20s are often now 245-250. Seems like going off grades and sub-Is would be a better option for Average Joe.

 
In fact this is 90% of what being a good surgical resident is all about.
But what's a better proxy for this 90%? Their impression from a month of acting as an intern? Or how hard they slapped spacebar? I like the former.
 
Okay, and the advising in the future will be, "oh I see you have mostly honors on your rotations, you probably need a research year anyway".
I think we've had this conversation before, I have zero issues with academic programs using research years and productivity as a major criterion in lieu of step. I think it's probably a fantastic marker for who is more willing to walk the walk re: academic career. As much as the world needs another academically brilliant bone wizard, it needs someone who will conduct good bone wizardry research even more.
 
I think we've had this conversation before, I have zero issues with academic programs using research years and productivity as a major criterion in lieu of step. I think it's probably a fantastic marker for who is more willing to walk the walk re: academic career. As much as the world needs another academically brilliant bone wizard, it needs someone who will conduct good bone wizardry research even more.

I disagree. Many physicians just want to practice bread-and-butter medicine and that's okay. Too often, we see med students publish just for the sake of ERAS and give up on research once they enter residency and attending-hood. Even academic attendings publish too much fluff that will never get read/cited to stay on the promotion treadmill.

Sub-I performance is a good benchmark and probably some sort of specialty-specific standardized exam.
 
I sincerely doubt that's how it worked in 1995.
It was very similar. I was there.

I got a great Step 1 score. I was told I shouldn't waste it on IM, should do Ophthalmology which was the hot field then.

I wasted it in IM.

Is it more pronounced now? Yes, I think it is, but not to the extreme suggested. The fields affected have changed -- as mentioned, ortho was not as competitive back then.

Also, the reporting of scores by Charting Outcomes has made things much worse. When I applied, I was blissfully unaware of where my score fit in the crowd of applicants. But now you know. And once you know the average, no one wants to be below average. So the drive to study more / do better is strong. People score a bit better, the average goes up, and the cycle repeats.

But take scores away, and the whole process will just focus on something else.
 
I disagree. Many physicians just want to practice bread-and-butter medicine and that's okay. Too often, we see med students publish just for the sake of ERAS and give up on research once they enter residency and attending-hood. Even academic attendings publish too much fluff that will never get read/cited to stay on the promotion treadmill.

Sub-I performance is a good benchmark and probably some sort of specialty-specific standardized exam.

Do community programs in competitive specialties require research? Because the assumption is that it's mainly the academic/university programs that do.
 
Lol board scores were still important in the 90s and early 2000s. I’ve had multiple attendings tell me no one was getting something competitive with below-average boards. The big difference between then and now is that back then the best resource for a high board score was your curriculum.
 
I disagree. Many physicians just want to practice bread-and-butter medicine and that's okay. Too often, we see med students publish just for the sake of ERAS and give up on research once they enter residency and attending-hood. Even academic attendings publish too much fluff that will never get read/cited to stay on the promotion treadmill.

Sub-I performance is a good benchmark and probably some sort of specialty-specific standardized exam.

Totally agree with everything you've said here. Personally, I'd be fine with aways and specialty-specific exams determining your match prospects. That way, you get to preserve some amount of objectivity and the info you're learning actually matters for the specialty you're pursuing.
 
It was very similar. I was there.

I got a great Step 1 score. I was told I shouldn't waste it on IM, should do Ophthalmology which was the hot field then.

I wasted it in IM.

Is it more pronounced now? Yes, I think it is, but not to the extreme suggested. The fields affected have changed -- as mentioned, ortho was not as competitive back then.

Also, the reporting of scores by Charting Outcomes has made things much worse. When I applied, I was blissfully unaware of where my score fit in the crowd of applicants. But now you know. And once you know the average, no one wants to be below average. So the drive to study more / do better is strong. People score a bit better, the average goes up, and the cycle repeats.

But take scores away, and the whole process will just focus on something else.
Unfortunately I've got data on Ophtho which goes back to 90s when it first started getting reported on sfmatch.

1998: Ophtho at 213 vs 212 national
1999: Ophtho at 217 vs 215 national
(old national scores here)

Since you just said Ophthalmology was hot and their averages were right on the national mean...it sure looks different than modern competitive specialties to me.
 
Lol board scores were still important in the 90s and early 2000s. I’ve had multiple attendings tell me no one was getting something competitive with below-average boards. The big difference between then and now is that back then the best resource for a high board score was your curriculum.
Thoughts on the Ophtho mean being right on the national mean for the late 90s then? The match rate was ~80% so I def think it was a competitive specialty
 
What do we have to do so that you can fail boards and still moonwalk your way into ortho like they did in the 60s or whatever? Whatever that is, let's do that
 
Do community programs in competitive specialties require research? Because the assumption is that it's mainly the academic/university programs that do.

Probably less, but I doubt you can match anywhere in derm, ortho, plastics, ENT, neurosurgery without research, when averages for pubs are double digits.
 
Probably less, but I doubt you can match anywhere in derm, ortho, plastics, ENT, neurosurgery without research, when averages for pubs are double digits.

Ehh, that should really change to something else, since it's hard to see non-academic programs with minimal research infrastructure having a strong requirement for research.

Also the averages are heavily inflated since people present the same thing at multiple places and include school posters
 
Ehh, that should really change to something else, since it's hard to see non-academic programs with minimal research infrastructure having a strong requirement for research.

Also the averages are heavily inflated since people present the same thing at multiple places and include school posters

I 100% agree, the publication creep has gotten out of hand, w/r/t phantom publications, milking projects, and low-impact fluff that will never get cited. For research, I think they should start taking into account (in decreasing order): first authorships (involvement in the project), impact factor of journals, and citations. Bye-bye case reports.
 
Do community programs in competitive specialties require research? Because the assumption is that it's mainly the academic/university programs that do.
Probably less, but I doubt you can match anywhere in derm, ortho, plastics, ENT, neurosurgery without research, when averages for pubs are double digits.

Yeah, people match without research every year in these fields, but very few manage to do it. According to Charting the Outcomes 2018, only 3.4% matched into ortho without any abstracts, presentations, or pubs.
 
Yeah, people match without research every year in these fields, but very few manage to do it. According to Charting the Outcomes 2018, only 3.4% matched into ortho without any abstracts, presentations, or pubs.

I'm thinking if someone likes ortho but doesn't want to do research and is happy working in private practice... why are they forced to do research? I feel like aways and sub Is play a stronger role in showing interest than research padding
 
I'm thinking if someone likes ortho but doesn't want to do research and is happy working in private practice... why are they forced to do research? I feel like aways and sub Is play a stronger role in showing interest than research padding

Like I've said before, it's box checking, plain and simple. I totally agree with you, but the game is the game.
 
Thoughts on the Ophtho mean being right on the national mean for the late 90s then? The match rate was ~80% so I def think it was a competitive specialty
Honestly not sure what to make of that. Maybe it’s just because it’s a small field and wasn’t as hot. I’m literally guessing here. I actually have had a really hard time finding data from before the early 2000s and even early 2000s is tough. Could you share data from other specialties? I’ve had multiple FM attendings tell me they didn’t have the grades/scores for derm or rads back then and even earlier hence my earlier comments. I’m not above admitting I’m wrong if you can prove that people were matching derm, ent, nsg, or whatever was competitive at the time with average or below scores on the regular because no one cared about boards back then. I’m just parroting what other docs who went through it told me.

Best I can find is this thread from 2003 where people are posting about board cutoffs and the need to be above average for classically competitive fields. See post #9. This thread is also a great example of how charting outcomes have really changed the game.

 
Honestly not sure what to make of that. Maybe it’s just because it’s a small field and wasn’t as hot. I’m literally guessing here. I actually have had a really hard time finding data from before the early 2000s and even early 2000s is tough. Could you share data from other specialties? I’ve had multiple FM attendings tell me they didn’t have the grades/scores for derm or rads back then and even earlier hence my earlier comments. I’m not above admitting I’m wrong if you can prove that people were matching derm, ent, nsg, or whatever was competitive at the time with average or below scores on the regular because no one cared about boards back then. I’m just parroting what other docs who went through it told me.

Best I can find is this thread from 2003 where people are posting about board cutoffs and the need to be above average for classically competitive fields. See post #9. This thread is also a great example of how charting outcomes have really changed the game.

I only found it for Ophtho going that far back on their archived sfmatch pages, very lucky coincidence that it was a "hot specialty" with 80% matching. I also would love to see a charting outcomes type of thing for the 90s. I'm sure the online mythology has always been exaggerated just like it is now, where people talk about needing a 250+ and first authorship pub to stand a chance.
 
Like I've said before, it's box checking, plain and simple. I totally agree with you, but the game is the game.

this I do think is a big issue. Just like university education has historically been organized to function best for ppl who want to become academics even though most absolutely don’t, Med Ed is organized around training academics even though most aren’t / won’t be interested. Cool if you want to be an academic. Sucks for everyone else.

Programs need to define themselves better, what kind of residents they hope to produce, and reward more diverse kinds of engagement and work than simply clinical or bench research.

this is a bigger and deeper problem in academia generally, which has been extremely slow in moving away from grants / publications as the sole currency of career advancement, as if it was all people had to offer
 
I only found it for Ophtho going that far back on their archived sfmatch pages, very lucky coincidence that it was a "hot specialty" with 80% matching. I also would love to see a charting outcomes type of thing for the 90s. I'm sure the online mythology has always been exaggerated just like it is now, where people talk about needing a 250+ and first authorship pub to stand a chance.
Oh yeah. I wasn’t preaching that rando thread as gospel. Just the best example I could find. Although people in that thread do seem to know the stats of their specific specialty.
 
this I do think is a big issue. Just like university education has historically been organized to function best for ppl who want to become academics even though most absolutely don’t, Med Ed is organized around training academics even though most aren’t / won’t be interested. Cool if you want to be an academic. Sucks for everyone else.

Programs need to define themselves better, what kind of residents they hope to produce, and reward more diverse kinds of engagement and work than simply clinical or bench research.

this is a bigger and deeper problem in academia generally, which has been extremely slow in moving away from grants / publications as the sole currency of career advancement, as if it was all people had to offer

But the thing is, what incentive is there for them to make these changes? Demand far outstrips supply, so they don't have to cater to anyone, really. Programs will get quality candidates regardless, because they know we have no recourse, unlike in the corporate world, for instance.
 
Someone made a great point in the second thread posted. Besides being unfair to the class of 2023 applying to ERAS in 2022, it affects the classes above us that are doing research years, masters programs, and PhDs. None of us knew this was a possibility before we made our decisions, including school choice and how we choose to spend our time.


Spot on. People can't be asked to make a decision that impacts the course of their lives under one set of rules only to have the game changed partway through. This needs to be class of 2024 or later so everyone can make informed decisions.
People may have chosen a different school, not done research, or reapplied (a year or 2 later) instead of doing DO or Carib, etc.
 
Spot on. People can't be asked to make a decision that impacts the course of their lives under one set of rules only to have the game changed partway through. This needs to be class of 2024 or later so everyone can make informed decisions.
People may have chosen a different school, not done research, or reapplied (a year or 2 later) instead of doing DO or Carib, etc.

And the people who started with class of 2023 but are doing MD/PhDs are in the same boat except they are not graduating until 2026, so making it for class of 2024 to 2026 affects them too. They certainly aren’t going to push this out to 2027. It’s going to be unfair to someone, so they’re probably just going to do whatever the easiest thing is.
 
And the people who started with class of 2023 but are doing MD/PhDs are in the same boat except they are not graduating until 2026, so making it for class of 2024 to 2026 affects them too. They certainly aren’t going to push this out to 2027. It’s going to be unfair to someone, so they’re probably just going to do whatever the easiest thing is.

Just push it out to the next millennium, problem solved. Wow, that was easy. What else do they need help with?
 
And the people who started with class of 2023 but are doing MD/PhDs are in the same boat except they are not graduating until 2026, so making it for class of 2024 to 2026 affects them too. They certainly aren’t going to push this out to 2027. It’s going to be unfair to someone, so they’re probably just going to do whatever the easiest thing is.

Or function off the majority rather than what’s easy. Sure, it’s going to be unfair to a very small % of people. But likely MD/PhD students are applying to mostly different programs. And if they are competing with “just” MD students, at least the PhD makes them stand out. The majority of 2023 will be adversely affected by this plan. A small % of 2023 would be affected if it was pushed back a year. And 2024 would have known before they decided how to spend their years, so it would impact them less as the most affected year.
 
And the people who started with class of 2023 but are doing MD/PhDs are in the same boat except they are not graduating until 2026, so making it for class of 2024 to 2026 affects them too. They certainly aren’t going to push this out to 2027. It’s going to be unfair to someone, so they’re probably just going to do whatever the easiest thing is.

We're talking about affecting 70k students versus 600 students.
 
But the thing is, what incentive is there for them to make these changes? Demand far outstrips supply, so they don't have to cater to anyone, really. Programs will get quality candidates regardless, because they know we have no recourse, unlike in the corporate world, for instance.

step 1 going P/F is precisely one of those incentives. Programs need to be able to differentiate candidates. And they *want* to be able to do so in a way that’s meaningful for that particular program. They do not as of yet have the tools to do that, but it’s not fantastical to think those tools could be invented. NotAProgDirector suggested specialties designing their own exam, which I personally think is silly, and I would just avoid a program that felt it needed that to believe I was teachable. At some point we just have to admit that the bar for entry into medicine is so high that virtually anyone could theoretically be capable of learning any job/specialty in medicine, and any attempt to distinguish people based on some supposedly “objective” measure will just be unnecessarily splitting hairs.

SLOEs have completely changed the paradigm of applying EM. I'm not a PD so I don't pretend to have answers, but outside the SDN/Reddit narrative that this decision means anyone who didn't go to a T5 MD school will be forced at gunpoint to do FM in Nebraska, the conversation among public-facing MedEd folks is to consider this an opportunity.
 
We're talking about affecting 70k students versus 600 students.

Can you imagine studying thousands of hours for step, taking a leave of absence from medical school to do your PhD, continuing to work in lab for 3 years for 80 hours a week, and then hearing that people are considering retroactively converting your step score to pass fail?

Retroactive conversion of numerical step scores to p/f is at best highly unethical and at worst lawsuit-worthy. The ONLY equitable option is for NBME to pick a date far off enough in the future, for example 2023, and say that all examinees taking step after that point will only receive pass fail scores.
 



Just saw this post on reddit. Personally, I am not class of 2023, but I feel like it makes sense for their scores to be converted just to have consistency for that application cycle. It sucks for those who have already put in a lot of studying for Step 1, though. Thoughts?

Editing to add another post from reddit:

I watched the webinar, posted on USMLE.org, and I don't see any indication one way or another on a decision. I know some people have said they've called "reps," but in terms of verifiable information such as the webinar, I don't see a clear answer like what has been suggested. Maybe I missed something?
 
Can you imagine studying thousands of hours for step, taking a leave of absence from medical school to do your PhD, continuing to work in lab for 3 years for 80 hours a week, and then hearing that people are considering retroactively converting your step score to pass fail?

Retroactive conversion of numerical step scores to p/f is at best highly unethical and at worst lawsuit-worthy. The ONLY equitable option is for NBME to pick a date far off enough in the future, for example 2023, and say that all examinees taking step after that point will only receive pass fail scores.

100% agree. But we know they won't, so we should at least pick an option that doesn't screw over 70k people.
 
Do you guys really think step 1 pass/fail is going to change a lot of people's outcomes in medical admissions? It's a very slim minority that faces the scholarship-vs-prestige dilemma. And I really can't imagine many people turning down low-tier MD admits to try and reapply over this. It's not like someone who barely scrapes by in their first cycle is going to be getting a bunch of Top 40 NIH invites as a reapplicant.
 
Do you guys really think step 1 pass/fail is going to change a lot of people's outcomes in medical admissions? It's a very slim minority that faces the scholarship-vs-prestige dilemma. And I really can't imagine many people turning down low-tier MD admits to try and reapply over this. It's not like someone who barely scrapes by in their first cycle is going to be getting a bunch of Top 40 NIH invites as a reapplicant.

I just don't get where this sky-is-falling mentality is coming from. MD/PhDs and those doing research years benefit more from P/F. Not everyone doing a research year has a 260; many use a research year to make up for a lower score. Even for competitive specialties, a high Step score alone is not enough to let you match. You have to have everything else - LORs, research, Step 2 - and high Step 1 scorers are more likely to have these boxes ticked.
 
Can you imagine studying thousands of hours for step, taking a leave of absence from medical school to do your PhD, continuing to work in lab for 3 years for 80 hours a week, and then hearing that people are considering retroactively converting your step score to pass fail?

Retroactive conversion of numerical step scores to p/f is at best highly unethical and at worst lawsuit-worthy. The ONLY equitable option is for NBME to pick a date far off enough in the future, for example 2023, and say that all examinees taking step after that point will only receive pass fail scores.

I think there is a difference here between current MD/PhD students in their PhD years and those in c/o 2023. I agree that MD/PhD students who have taken their test should not have their score retroactively masked. However, for c/o 2023, every student is at least 10-12 months out from taking Step 1.
 
I just don't get where this sky-is-falling mentality is coming from. MD/PhDs and those doing research years benefit more from P/F. Not everyone doing a research year has a 260; many use a research year to make up for a lower score. Even for competitive specialties, a high Step score alone is not enough to let you match. You have to have everything else - LORs, research, Step 2 - and high Step 1 scorers are more likely to have these boxes ticked.
Yeah I also have zero concern for MSTP candidates. They're overwhelmingly located at the biggest centers and their match has never been as reliant on Step 1. Folks who walk the academia walk are too valuable.
 
I only found it for Ophtho going that far back on their archived sfmatch pages, very lucky coincidence that it was a "hot specialty" with 80% matching. I also would love to see a charting outcomes type of thing for the 90s. I'm sure the online mythology has always been exaggerated just like it is now, where people talk about needing a 250+ and first authorship pub to stand a chance.
Nice find, and I'm happy to be proven incorrect. I had an ophthal mentor, so likely some of my memories are biased.

I think we actually agree more than disagree here. Scores were important back in the 1990s. They are more important now. I think they are less important than people think here on SDN, and much of the hysteria is misplaced. The competitiveness of fields changes over time based on many factors. Removing scores from Step 1 is fine, but the hysteria will just shift elsewhere. If it's SLOE-equivalents, perhaps that's better. But SLOEs classify students into groups / quartiles. Get a SLOE that's 2nd or 3rd quartile, and your career in that field is over. If everyone is allowed to get as many SLOEs as they want, they can just shop around until they get a top 5% one - then everyone will be top 5%, and they will be useless.
 
Nice find, and I'm happy to be proven incorrect. I had an ophthal mentor, so likely some of my memories are biased.

I think we actually agree more than disagree here. Scores were important back in the 1990s. They are more important now. I think they are less important than people think here on SDN, and much of the hysteria is misplaced. The competitiveness of fields changes over time based on many factors. Removing scores from Step 1 is fine, but the hysteria will just shift elsewhere. If it's SLOE-equivalents, perhaps that's better. But SLOEs classify students into groups / quartiles. Get a SLOE that's 2nd or 3rd quartile, and your career in that field is over. If everyone is allowed to get as many SLOEs as they want, they can just shop around until they get a top 5% one - then everyone will be top 5%, and they will be useless.

How can the hysteria be resolved, though? I thought application caps could help stop the flood of apps being sent, but I'm not sure what the alternative is if capping is controversial.
 
Nice find, and I'm happy to be proven incorrect. I had an ophthal mentor, so likely some of my memories are biased.

I think we actually agree more than disagree here. Scores were important back in the 1990s. They are more important now. I think they are less important than people think here on SDN, and much of the hysteria is misplaced. The competitiveness of fields changes over time based on many factors. Removing scores from Step 1 is fine, but the hysteria will just shift elsewhere. If it's SLOE-equivalents, perhaps that's better. But SLOEs classify students into groups / quartiles. Get a SLOE that's 2nd or 3rd quartile, and your career in that field is over. If everyone is allowed to get as many SLOEs as they want, they can just shop around until they get a top 5% one - then everyone will be top 5%, and they will be useless.
That's a good point, a lot of the psychological damage is self-inflicted. It's definitely a lot easier to fixate on your Step number than it is to fixate on interviewing skills or letters, even if we know the latter are more important for rank lists.

I share Lucca's concern that we're in a positive feedback loop with no end. It's not just board scores that have gotten much higher; we're also seeing ridiculous amounts of low-quality research entries, people doing 3+ away rotations, and schools inflating their grades so that everyone Honors. Admissions criteria just to get into med school are already much, much higher than they need to be to predict success.

So at some point we have to admit a system like SLOE makes no sense, because 9/10 US med students are capable of being fine ER docs, and torpedoing half their careers is a mistake. If this SLOE system becomes widespread, it'll implode unless residencies become comfortable with the idea of matching 3rd-4th quartile students, knowing that a below-average student in our system is still a competent trainee.
 
How can the hysteria be resolved though? I thought application caps could help stop the flood of apps being sent but i'm not sure what the alternative is if capping is controversial.
Caps are the only idea I've heard of that could help the problem at its source. If we keep letting apps per capita snowball, then people are right: Step 1 is just going to get replaced with a rat race in other areas, like cranking out useless posters. You need both masked boards and caps to take steps back towards the 90s-2000s.
 
Caps are the only idea I've heard of that could help the problem at its source. If we keep letting apps per capita snowball, then people are right: Step 1 is just going to get replaced with a rat race in other areas, like cranking out useless posters. You need both masked boards and caps to take steps back towards the 90s-2000s.

I was thinking reverting to preclinical grades + caps tbh
 
I was thinking reverting to preclinical grades + caps tbh
As much as I hated step 1 mania, having everyone fight for class rank based on their professors' slide decks is a step in the wrong direction. Caps all day. Especially this year, which is going to serve as a great example of what lies ahead for med students if caps aren't placed soon.
 
As much as I hated step 1 mania, having everyone fight for class rank based on their professors' slide decks is a step in the wrong direction. Caps all day. Especially this year, which is going to serve as a great example of what lies ahead for med students if caps aren't placed soon.

if caps can reduce the mania, what's wrong with keeping scored Steps, other than the objection that exams meant to be P/F are being misused by having scores?
 
I'm going to plug this in here because I think it's an interesting read and a good way of seeing how much USMLE + Honors dominates the interview invitation process. This is from a PD at a competitive radiology program (average matched Step of 250, so maybe a decent proxy for the typical surgical subspecialty too).

These are great questions. I'll try to tackle them in depth, but it's going to take a while for me.

I thought it might help to understand process for selection for interview at my place. You can glean what I'm looking for at this stage of the application process--it only applies to selection for interview, not ranking after interview.

Here's what I do:

1. I download the ERAS application data into an excel spreadsheet which allows me to create custom parameters and is easier for me to filter/sort and quickly review. ERAS allows you to download certain parameters. Each applicant is a row, and I type notes and create formulas into custom columns that I create.

2. I personally select the resident applicants that we will interview from the hundreds of applications we get. It’s just too slow and hard to do this with a committee. At my old institution, this was also done by essentially 1 person (not me back then), but I don’t know if they have changed that. I have a “2nd reviewer” for borderline cases, maybe I use that for 3-4 applications each year.

3. If an applicant has taken USMLE Step 2, I average that score with Step 1. If an applicant hasn’t taken USMLE Step 2, I add 5 points to their Step 1 score and create a “derived Step 2” score (edit 3/2018--for the 2018 Match, we just used the Step 1 score as the person's Step 2 score for our spreadsheet/formulas--this really does underestimate the Step 2 score, since most people do better on Step 2 than Step 1). In our applicant pool, the average applicant has a Step 2 score that is about 7-10 points higher than Step 1. So it hurts you a little if you haven’t taken Step 2, because my assumed score for you is not as high as it statistically would be if you are the average applicant.

4. I apply a formula to the “combined USMLE score” that discounts super-high achievement. Essentially, as your score on Step 1 or Step 2 gets higher than 250, you get fewer added points. For example, someone who gets 250 on Step 1 and Step 2 has a combined score of 500 in my system. However, someone who gets 265 on Step 1 and Step 2 has a combined score of 515 in my system, not 530 (and the score is capped there, meaning even higher scores don't add points). I don’t want super-high achievement on USMLE to dominate an applicant’s eventual “overall score”, or to make up for lower clinical grades and interview scores. This last year, in our program, our applicants had a mean combined USMLE score of 494 and a median of 498, with a standard deviation of 19. Remember, those scores have been adjusted to discount performance substantially above 250 in a formulaic way that progressively discounts numbers the farther they are above 250.

5. We filter out the applications based on USMLE scores, but we use a really low threshold—in our case, we use a “soft” 220 USMLE step 1 score, which generally means I will consider applicants in the 215-220 range based on strength of school and how well they did on step 2, as well as other factors. Right now the average Step 1 score for all medical students is about 228 or so, I believe, so allowing someone to be as low as 215 in our case means you don’t necessarily have to be book smart to get into our program. However, radiology boards is now a computerized test, and we don’t want to worry about residents who may become outstanding radiologists but who might struggle with tests. There is too much of a penalty for our program in future applicant perception if one of our residents fails the boards. (edit 3/2018: despite our willingness to look at applicants with lower than average board scores, the average Step 1 score for the applicants we matched in March 2018 was 250, and the average Step 2 score was 258).

6. Our residency essentially filters out applicants who are IMGs—to be honest, I think there are some outstanding candidates in this group, particularly individuals who are not from the US, but the problem is that it is really difficult for me to find them, because we don’t use test scores as much in our ranking process. Communication skills are very important to us, and that can be a sticking point for some IMGs who did not grow up in the US, something we don’t discover until the interview, and I don’t want to waste people’s time with interviews if there is a low chance of success. On the other hand, we understand there are some life circumstances and other legitimate reasons why some applicants who are US based ended up doing medical school internationally. So I download these into my spreadsheet, dig deeper at maybe the candidates with board scores above 250 to see if I recognize the school, review publications, special experiences, special circumstances, etc. Sometimes a colleague will ask me to look carefully at a person that someone in their field has highlighted for them. We do interview a few IMGs every year (< 5), and some are quite good. However, they face hurdles all along the way in our ranking process—just being honest.

7. We do the same as #6 for DO candidates. We do interview a few every year (< 3), but we believe there is a penalty for our program in future applicants if we have a number of DO residents, mainly because there is the perception that we couldn’t attract the best MD candidates. It’s unfortunate for some DO students who are going to be great, but it is reality.

8. One of the filters I use to select who to interview is 3rd year core clerkship performance. This is a really tough metric, since the schools are all over the place. Believe it or not, I spend the time to create a “translation formula” that looks at the % of Honors/High Pass/Pass from each US medical school from which we receive an application, so I can compare the performance of students from different schools—even then, it’s not easy and likely not accurate. For example, just looking at my spreadsheet for this year, I see that for the core clerkships that we review, at the University of Central Florida 52% of students got Honors and 48% got High Pass (no one just passed), whereas at Florida International University 14% got Honors and 29% got High Pass. I have a convoluted formula that tries to “normalize” this data, so that the student at FIU that got High Pass is given the same number of points as the student at UCF that got Honors.

9. Since I have to actually open up the application to get the 3rd year clerkship performance and % honors/high pass/pass data, I do quickly jot down a few notes about the candidate at this time—I’ll jot down a few sentences about the candidate about their particular timeline, skimming the personal statement, looking quickly at the research history, etc. I quickly look at the Dean’s letter (if available) to look for red flags (repeat courses). I do put down what “quartile” the Dean’s letter says you are in. I don’t have time to review LORs at this point UNLESS I can tell that the candidate is probably going to be on my “borderline” for selecting for interview. For example, here is a typical fictional set of notes that I might jot down in my “comment” box for a candidate at this point: “Brown undergrad; 1 yr gap spent as research intern for startup and also doing volunteer work; PS specifically mentions us”. In the “Dean’s letter” box, I might put “2nd quartile”.

10. My spreadsheet combines the “3rd year clerkship score” with the USMLE average score (either real or derived) in a way that weights the clerkship score. This gives each applicant a “non-interview” score—that is, their score without consideration of the interview. As you will see later, the eventual evaluation of a candidate relies more on the interview than this score. But this is the metric that helps us decide who to interview.

11. I sort my spreadsheet using the “non-interview” score to decide who to interview. For about 67% of our interview slots, I just take them from the top. For the final 33%, I use a different lower threshold for interviewing applicants who are considered “local” (from our med school and schools within about 100 miles of our urban program), “regional” (about 500 miles), and “national” (everyone else). The reason we do this is because we find that applicants are more likely to not cancel interviews and match with us if they are local or regional. We also don’t want to piss off our school/the local schools and not interview their students—to a degree. We won’t interview if someone is really not going to be up to snuff based on performance. If a national candidate has ties to our area that are obvious in the application (I look at permanent address and undergraduate location), then they get put in the local pile. Similarly, if the applicant did a rotation with us, we consider them in the local pile even if they aren’t. However, we don’t take that many medical students outside our own medical school for rotations. Along with the varying thresholds based on geography, I look at my comments and the “Dean’s letter comments” to decide who to select for these final 33%.

12. If the applicant is AOA, they almost always get an interview. Turns out they always score above my threshold on the non-interview score anyway (makes sense, since AOA status is generally a function of traits that are well reflected in the USMLE scores and core clerkship grades). However, we sometimes have an AOA student who I end up not interviewing, because of something in the application that is a red flag that I can easily see from my spreadsheet (repeating a course, something in the Dean’s letter).

13. We slightly “overinterview” in our program—basically, interview about 10-14 applicants for every spot, even though we typically fill our spots at the 5-8 applicant/spot filled mark—and even then, about 3/4ths of our class is filled before the 5 applicant/spot mark. The reason we do this is because we don’t trust that our combined “USMLE + clinical clerkship” score is so precise that we can rely on it, and we sometimes find that applicants we end up ranking fairly high would not have been offered an interview with us if we had not “overinterviewed”. (note: in 2018, we ended up filling at the 4 applicant/spot mark, so we are reducing the number of people we interview).

So, a few things should be evident so far, regarding “how to get an interview” at my program:

1. Do well on USMLE tests, but no need to ace them. Does it help to get a 270 vs. a 250? Not really.

2. Do well on 3rd year core clerkships.

3. Be local or communicate your connection to my community in some way—it lowers the threshold for you getting an interview. Say it in the personal statement if it is true. Even then, you might want to email me in advance if that local connection isn’t obvious in the application.

4. I don’t have time to consider whether you decided not to do a rotation with us at the “select for interview” stage. I don’t have time or an easy way to consider the strength of your research record. I don’t have time to look at your Dean’s letter in depth beyond just trying to make sure there is no coded “red flag” and to understand your relative performance. I don’t have time to consider your extracurricular activities.

Once you get selected for interview, the selection metrics become more nuanced—another long discussion for another day.

TL;DR: A single person determines essentially all interview invitations. The big ways to win an interview are Step + Honors; 2/3rds of interviews are just given from the top down based on those. Being connected by location or an audition rotation helps you snag a spot in the remaining 1/3rd. Very little review of the rest of your app at this stage.
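For concreteness, here's a rough Python sketch of the scoring described in steps 3, 4, and 8 above. The post doesn't give the exact discount curve or clerkship normalization, so the half-credit-above-250 discount and the "fraction of class at or above a tier" normalization below are my guesses, chosen only because they reproduce the worked examples (250+250 → 500, 265+265 → 515 capped, UCF Honors ≈ FIU High Pass):

```python
# Rough reconstruction of the scoring in steps 3-4 and 8 above.
# The linear half-credit discount is an assumption that merely matches
# the two worked examples, not the PD's actual formula.

def discounted(score):
    """Discount USMLE points above 250; cap each exam's contribution at 257.5."""
    if score <= 250:
        return score
    return min(250 + 0.5 * (score - 250), 257.5)

def combined_usmle(step1, step2=None):
    """Step 3: if Step 2 is missing, assume Step 1 + 5 as a stand-in."""
    if step2 is None:
        step2 = step1 + 5
    return discounted(step1) + discounted(step2)

def clerkship_standing(grade, distribution):
    """Step 8: fraction of a school's class at or above a grade tier,
    so grades from schools with different curves can be compared."""
    cumulative = 0.0
    for tier in ("Honors", "High Pass", "Pass"):
        cumulative += distribution.get(tier, 0.0)
        if tier == grade:
            return cumulative
    raise ValueError(f"unknown grade: {grade}")

# The two worked examples from step 4:
print(combined_usmle(250, 250))  # 500
print(combined_usmle(265, 265))  # 515.0 (not 530; capped)

# Step 8's example: Honors at UCF (top 52%) lands near High Pass
# at FIU (top ~43%), so the two earn similar points.
print(clerkship_standing("Honors", {"Honors": 0.52, "High Pass": 0.48}))
print(clerkship_standing("High Pass",
                         {"Honors": 0.14, "High Pass": 0.29, "Pass": 0.57}))
```

The exact curve doesn't really matter; any concave, capped function produces the same qualitative effect the PD describes, namely that scores above ~265 buy nothing at this stage.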

@NotAProgDirector how much overlap between his process for interviews and yours?
 

I'm curious to know their thoughts on how they rank
 
@NotAProgDirector how much overlap between his process for interviews and yours?

It's a she, actually.
 
Do you guys really think Step 1 pass/fail is going to change a lot of people's outcomes in medical school admissions? It's a very slim minority that faces the scholarship vs. prestige dilemma. And I really can't imagine many people turning down low-tier MD admits to try to reapply over this. It's not like someone who barely scrapes by in their first cycle is going to be getting a bunch of Top 40 NIH invites as a reapplicant.
Dude, there are dozens of threads like this every year in School X versus Y; it's more common than you think. Intuitively I'm sure you can grasp that of those with multiple acceptances, many will have a more well-known choice, and some will decline that choice. People choose less prestigious schools all of the time for a variety of reasons, like money, proximity to a support system, or just overall fit. The prospect of P/F Step 1 is another factor worth considering, and being blind to this when making that decision isn't fair to the class of 2023.
 

Very insightful look into a competitive program in a mid-competitive field. However, while these types of programs can be lumped in with the surgical subspecialties, they still comprise a small proportion of all programs. Looking at that composite score, the average Step 1 for all applicants is at least 242, so it's not surprising that the matched score is 250. That is not a representative mean applicant score for probably 75% of all programs.
 
It would be nice to know whether I should be putting this extra time into Step 1 this summer, as a member of c/o 2023. There are just so many more important things happening in the world right now that deserve my attention.
 