Will residency programs increase their intake of their home institution graduates next year due to Step 1 P/F?


deleted1139416

According to recent data on the USMLE site, 9% of USMD students failed Step 1. This makes me think that MD schools with particularly high Step 1 fail rates will push their home residency program directors to take in their own graduates who failed Step 1.

 
Pretty sure this is already happening due to COVID restrictions on travel, for example. People prefer programs they have visited in person at some point.
 
Residency programs and medical schools are separate. Schools can't "push" programs to do anything. They don't have any leverage, usually.
 
High Step 1 failure rates in medical schools may encourage residency program directors to accept their own graduates who failed Step 1.
 
What motivation would the residency directors have to accept students who can't even pass Step 1? The people in the medical school administration are not in charge of residency PDs.

ETA: Residencies might be more likely to interview/highly rank students from their home program (and other students who rotated with them) because they have an opportunity to get to know them in person in the era of virtual interviews and p/f step 1, but I can't imagine it would be easy to convince a PD to highly rank someone who failed step 1 just because somebody at the med school said so.
 
To add to the discussion:

There are several residency programs in New Jersey, Hackensack Meridian for example, that have dedicated tracks that not only reserve spots for medical students but also shave a year off school. I’m not entirely familiar with them, but I have run into them occasionally. They also take both Step 1 and Step 2 within the same couple of months.

It’s 100% common depending on the region. If you look at the different “tracks” listed on the NRMP, you can see they have a special name, “3P” or something like that.
 
A 9% fail rate is pretty high for USMD schools, and it kind of works against the whole point of making Step 1 P/F in the first place. Most residency programs tend to give med students from their home school at least a courtesy interview, as long as they are interested in the specialty and applied to the program, no matter how unqualified they are. But that does not mean they will rank them highly, or at all, in the end. While medical school education and residency programs tend to be separate entities even at the same institution, program directors may be pressured to take home students if there are influential med school administrators at their institution, though there are obviously no guarantees, as each department has its own internal politics. And if this happens, it is usually only in the least competitive specialties like FM or peds. It would be pretty hard to convince a derm, ENT, neurosurgery, or ortho PD to take one of their students who failed Step 1 when there are more than enough strong applicants out there.
 
It’s all under one institution though
Yeah, but what exactly do you think is the reporting structure that would lead to a school convincing its affiliated residency programs to accept students with deficiencies?

The school wants to have the most highly qualified applicants at every step of recruitment. The students who are least competitive out of the graduating class will be encouraged to apply for a specialty and programs which fit with their level of competitiveness.
 
Won’t impact anything from the program side. May have some small impact from the applicant side, as these external factors lead more applicants to rank their home programs higher.

The Step 1 failure rate is likely a fluke that will settle out. Pandemic classes combined with the new P/F paradigm likely led to more underprepared students. That’s certainly going to change as schools make adjustments, and the pandemic is over.

There’s simply no real leverage schools have over residency programs to rank subpar applicants. A truly strong student whose Step 1 failure was a fluke and who crushed everything else (rare) might have a shot at matching at their home program simply because they’re a known commodity, but not because anyone pressured the PD to do it.
 

I was referring more to faculty who serve on both the residency side and the medical school side. For example, when I interviewed at an MD medical school in the Midwest, the faculty member who interviewed me also taught ophthalmology residents and was on the committee that selects and ranks fourth-year medical students applying for ophthalmology. If you have faculty involved in selecting and teaching both medical students and residents, there is bound to be overlap, and of course they would favor their own medical school graduates for their residency programs. But now that the Step 1 fail rate for MD students has gotten much higher, they have even more reason to do so, because their graduates may not get residency spots elsewhere.
 
While programs preferentially select for the best applicants, if the average student is lower in caliber, then all else being equal, programs will tend to favor known over unknown.

IMO, don’t underestimate the potential of a “known quantity.” Most everyone will agree that doctors are risk averse, and it is not unreasonable for programs to consider home applicants, whether from the medical school or from their preliminary or transitional year interns, for their main program. Applicants are themselves risk averse, and risk-averse groups stick to each other like glue.

Whether or not the Step 1 failure rate is higher isn’t the original question: it’s the fact that Step 1 is no longer scored beyond pass/fail. Obviously, if everyone has a pass, then home programs will strongly favor their own students if nothing else has changed.
 
I think the risk of having a resident who cannot manage even the bare minimum of knowledge needed to pass Step 1 greatly outweighs the benefit in the situation you describe. What is the benefit to the faculty member who is theoretically advocating for a barely competent medical student to match at their program? And how does that benefit change based on whether Step 1 is P/F or not (presumably the student would have failed, or come close, either way)?
Good points that I agree with in general. But OP is saying that residency programs will try harder to match students from their home institution who failed Step 1, presumably over outside applicants who passed, not just home institution students in general. I don't think it helps to be "the known quantity" when the quantity is known to be bad. In my residency program we have had applicants who would probably have had a better shot at matching with us if they hadn't rotated with us, because it turned out there were professionalism or fit issues that didn't show up on paper. Obviously not exactly the same thing, but the point is that being known to be average or better helps you over unknown students of similar caliber on paper, and that is not the case for the students OP is talking about.
 
Hmm, that’s interesting. Referring to the point about applicants who might have had a better shot had they not rotated with you:

Do you screen outside rotators based on Step 1 scores / P/F status? It seems that several rotations (at least on VSLO) require Step transcripts or scores prior to the rotation.

Let’s say you do screen out rotators with a failed Step 1. I wonder whether the fact that rotating can only make an outside applicant less competitive for your program is due to regression to the mean. By taking only average-or-better students, you’re more likely to see decreases in apparent competitiveness (due to soft skills or professionalism issues, as mentioned) than increases. Students often spend only a single month at a particular program, and this is extrapolated out to 3–7 years depending on the specialty. There can be huge variability in apparent performance from one month to the next, especially when the stakes are high, as in an audition rotation.

There was a famous argument between Israeli flight instructors and psychologist Dr. Daniel Kahneman, who won the 2002 Nobel Memorial Prize in Economic Sciences for his work on the psychology of judgment and decision-making and on behavioral economics, and who argued that the “carrot,” or positive reinforcement, is superior to the “stick,” or negative reinforcement. The flight instructor in question countered that whenever they rewarded a pilot for excellent marks on a particular task, the pilot performed poorly the next time, and vice versa: whenever they punished a pilot for poor marks, the pilot performed excellently the next time. In fact, it turned out that day-to-day variation had a stronger effect on repeat performance than whatever reward or punishment the instructors were using, because the pilots were held to such a high standard that even very small lapses in ability were recorded.

Likewise, it’s often said that students who rotated at a program ended up receiving a negative evaluation during the rotation and probably would have been more competitive had they not rotated. This runs counter to traditional hiring, where employers will literally pay potential employees to fly out to the workplace so they can get to know the future employee better and get a sense of how to maximize that employee’s productivity and happiness. Most any hiring manager would say they wish they knew more about candidates, not less, prior to hiring. From the employee side, it’s an opportunity to practice the classic “elevator pitch,” in which the employee, sharing an elevator with the manager for the brief ride between floors, tries to ask for a job or a raise in a short and sweet way. Infamously, the ‘extremely hardcore’ boss Elon Musk is known to do the reverse: he would randomly interrogate employees, and if they couldn’t explain why their job was needed, they would be laid off.

Understandably, I can see why program directors don’t want to spend much time evaluating future residents. The vast majority of the time, residents who scored highly on Step exams are presumably likely to be at least average and do well. On top of that, significant time spent away from clinical medicine evaluating candidates cuts into productivity, since most program directors are paid separately for their clinical work versus their academic work of educating and evaluating residents; hence the push for virtual interviews to stay, so that less time is spent on recruitment and more on clinical practice. Asking for a longitudinal evaluation over a period greater than a month seems to be beyond consideration for most programs, since medical schools are supposed to provide clerkships at a single site where students can be evaluated on their soft skills over an extended period.

TL;DR

Taking only above average prospective residents for a brief audition rotation is more likely to result in below average evaluations due to regression to the mean.

Is this bias accounted for?
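To make the regression-to-the-mean point concrete, here is a quick simulation with entirely made-up numbers (the assumption that stable ability and month-to-month noise contribute equally is mine, purely for illustration):

```python
import random
import statistics

random.seed(0)

# Hypothetical model: a rotator's observed performance in any given month
# is stable ability plus month-to-month noise (luck, fit, stress).
N = 100_000
ability = [random.gauss(0, 1) for _ in range(N)]    # stable skill
month1 = [a + random.gauss(0, 1) for a in ability]  # audition month
month2 = [a + random.gauss(0, 1) for a in ability]  # a later month

# "Screen" rotators: keep only those who looked clearly above
# average during the audition month.
selected = [i for i in range(N) if month1[i] > 1.0]

m1 = statistics.mean(month1[i] for i in selected)
m2 = statistics.mean(month2[i] for i in selected)

print(f"month-1 mean of selected rotators: {m1:.2f}")
print(f"month-2 mean of the same people:   {m2:.2f}")
# The month-2 mean falls back toward the population average, even
# though nobody's underlying ability changed: part of what got them
# selected in month 1 was noise, which doesn't repeat.
```

Under these toy assumptions, the second observation of a group selected for a high first observation is systematically lower, which is exactly the "auditioners can mostly only disappoint" effect described above.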
 
I don’t think it’s true that US MD students won’t get spots elsewhere even with a step 1 failure. It just means those applicants really need to adjust their target specialty and program.
I think the “known quantity” thing is more or less already baked in and has been true ever since I was a med student. And I am just not confident that “nothing else has changed,” because almost everyone is taking and releasing their Step 2 score before ROL time. So schools will still have one objective measure of their students.
 
Eh, I think you're (1) thinking too hard about this and (2) making a lot of assumptions that are not necessarily fair.

I am not involved in selecting rotators for my program so I'm not sure how they are screened. But I am in family medicine, so the screening based on academic metrics I would imagine is at most "did you pass" anyway - our definition of an "above average" candidate is much more based on clinical evals, activities/experiences, letters of rec, etc. compared to other specialties. We also have a medical school that does multiple required core rotations with our program (FM, inpatient peds, also working with our residents on OB floor, ER, and ICU on those rotations), so we do see "all comers" academically.

I'm not sure where you're getting the idea that PDs don't want to spend time getting to know future residents or that they don't want to know a lot about applicants. "Sometimes we get to know applicants and we don't like them" =/= "we don't want to get to know candidates." I think if it were logistically possible to get to know and clinically work with all of our interviewees over a longer period of time, we would. But part of getting to know applicants is finding out that some are not a good fit for our program for various reasons. That's not common, but it does happen. We've had home students we got to know over multiple rotations/years at our program who looked good on paper but had serious issues in person; if they had been from another institution and had kept it relatively benign/neutral for interview day, they would have been much higher on our rank list. We've had many more who looked just okay on paper but were stellar in terms of their clinical skills, ability to work well with others, and fit, and rotating with us helped them. Even more frequently, the packet matches the person and rotating with us doesn't help or hurt.
 
Well that answers my question. Definitely can be hard to tell sometimes.

Specifically I did not think that having a longitudinal clerkship/experience was not uncommon. I think that’ll probably be the thing which will clarify a lot of next steps for me. Which is something that’s being looked into by NBME and other stakeholders into improving the residency selection processes.
 
There are a lot of double negatives here lol but I think you're saying it IS common to have an experience rotating with the program you match at, and especially in the COVID times I don't think that's the case necessarily. We also match a number of people every year who never rotated with us (I was one of them back in the day!).

My advice for applicants, again only speaking to FM where aways aren't really a "requirement," tacit or otherwise, would be: if there's a program you're really interested in, it's not inconvenient, AND you think you will leave a good impression based on your prior rotation experiences - try to rotate there. If you're missing one of those things, it's probably not worth it. (I would also say for FM specifically, if you are from a school where the FM experience is pretty much just outpatient only suburban docs, which is often the case at academic powerhouses, it's worth doing a rotation at a rigorous, community-based, unopposed program where you will see full scope FM which is a whole different animal.)
 
I agree with you, and it has me thinking about the importance of away rotations.

Obviously, for specialties that are classically non-competitive, away rotations are not necessary, and vice versa for competitive specialties.

It’s something that came up recently from the NBME, which is investigating the relationship between student preferences and clinical rotations. Obviously, if you never had a rotation in, say, ophthalmology as a student, you’ll be extremely unlikely to choose it as your first choice.

However, one thing I hadn’t considered is a longitudinal clerkship at a single site. I didn’t even think or know to complete such a clerkship and believe it would have been tremendously helpful. Now I’m looking forward to such a clinical experience in shaping my future plans.
 
The situation you refer to may be possible in non-competitive specialties like FM, peds, and EM, where the choice may be between a lower-caliber in-house USMD applicant and an okay outside DO or Caribbean IMG applicant. The pandemic and virtual interviews have only reinforced the trend of programs taking more of their in-house applicants, even weaker ones (as PDs still tend to be conservative and can favor someone they have met in person). However, for a competitive specialty like ophthalmology, where median Step 1 scores used to be in the mid-240s before it went P/F, taking someone who failed Step 1 on the first attempt would be very unusual given the plethora of much more qualified applicants out there, and not something most PDs would want to risk, even for in-house applicants.

In many specialty-specific board exams (taken down the line after residency), it has been found that low Step 1 scores (often cited as <220) can have a strong correlation with failing on the first attempt. And no residency program wants too many of their grads to fail their specialty-specific board exams either. For example, IM programs are required to have a 3-year average pass rate of at least 80% for the ABIM exams, or their program gets placed on probation.

On the flip side, picking applicants based on the highest Step scores isn't the best idea either. You can find examples (here on SDN and elsewhere) of programs in competitive specialties that have gotten burned by picking applicants based on traditional academic measures like high Step scores, AOA, high numbers of research pubs, and coming from a well-known, name-brand medical school. Some of these applicants looked good on paper but ended up being subpar residents down the line (usually not from a knowledge standpoint, but because they were lazy, didn't care about their work beyond doing the bare minimum, or would frequently dump work on their colleagues).
 