I think the risk of having a resident who cannot even manage the bare minimum of knowledge to be able to pass step 1 greatly outweighs the benefit in the situation you describe. What is the benefit to the faculty member who is theoretically advocating for a barely competent medical student to match at their program? How does that benefit change based on whether step 1 is p/f or not (presumably the student would have failed, or close to it, either way)?
Good points that I agree with in general. But OP is saying that residency programs will try harder to match students from their home institution who failed step 1, presumably over outside applicants who passed, not just home-institution students in general. I don't think it helps to be "the known quantity" when the quantity is known to be bad. In my residency program we have had applicants who would probably have had a better shot at matching with us if they hadn't rotated with us, because it turned out there were professionalism or fit issues that didn't show up on paper. Obviously not exactly the same thing, but the point is that being known to be average or better helps you over unknown students of similar caliber on paper, and that is not the case for the students OP is talking about.
Hmm, that’s interesting. Referring to the bolded section:
Do you screen outside rotators based on STEP 1 scores / p/f status? It seems that several rotations (at least on VSLO) require STEP transcripts or scores prior to rotation.
Let’s say that you indeed screen rotators based on a failed STEP 1. I wonder whether the tendency for an outside rotation to only make an application less competitive at your program is an artifact of regression to the mean. By taking only average-or-better students, you’re more likely to see decreases in apparent competitiveness (due to soft-skills or professionalism issues, as you mentioned) than increases. Students often spend only a single month at a particular program, and that month gets extrapolated out to 3-7 years depending on the specialty. There can be huge variability in apparent performance from one month to the next, especially when the stakes are high, as in an audition rotation.
There was a famous exchange between Israeli fighter pilot instructors and the psychologist Dr. Daniel Kahneman, who later won the 2002 Nobel prize in economic sciences for his work on judgment, decision making, and behavioral economics. Kahneman had told the instructors that the “carrot,” or positive reinforcement, is superior to the “stick,” or negative reinforcement. One flight instructor objected: whenever they rewarded a pilot for excellent marks on a particular task, the pilot performed poorly the next time, and whenever they punished a pilot for poor marks, the pilot performed excellently the next time. In fact, it turned out that day-to-day variation explained repeat performance better than any reward or punishment: the instructors held pilots to such a high standard that even very small lapses in ability were recorded, so an extreme performance, good or bad, was likely to be followed by a more average one regardless of feedback.
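The flight-instructor effect can be reproduced with a tiny simulation (a sketch, not real data: the skill distribution, noise level, and selection cutoff here are all made-up assumptions). Each pilot has a fixed true skill, each observed score is skill plus independent day-to-day noise, and we look only at pilots whose first score was excellent. Their second score regresses toward the mean with no reward or punishment involved:

```python
import random

random.seed(42)

def simulate(n_pilots=10000, noise=1.0, cutoff=1.0):
    # Each pilot has a fixed "true skill"; each observed score is
    # skill plus independent day-to-day noise.
    results = []
    for _ in range(n_pilots):
        skill = random.gauss(0, 1)
        first = skill + random.gauss(0, noise)
        second = skill + random.gauss(0, noise)
        results.append((first, second))

    # Keep only pilots whose first performance was "excellent",
    # i.e. above the cutoff -- the equivalent of screening
    # rotators on a strong first measurement.
    selected = [(f, s) for f, s in results if f > cutoff]
    mean_first = sum(f for f, _ in selected) / len(selected)
    mean_second = sum(s for _, s in selected) / len(selected)
    return mean_first, mean_second

first, second = simulate()
# The selected group's second score is still above the population
# average (their true skill really is higher), but noticeably lower
# than the first score that got them selected.
print(f"first: {first:.2f}, second: {second:.2f}")
```

The same mechanism would apply to audition rotations: screen applicants on a strong first measurement (board scores), and a second noisy measurement (one month of apparent performance) will, on average, look worse, even if nothing about the student changed.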
Likewise, it’s often said that students who rotated at a program ended up receiving a negative evaluation during that time and probably would have been more competitive had they not rotated. That runs counter to traditional hiring, where employers will literally pay prospective employees to fly out to the workplace so they can get to know them better and get a sense of how to best maximize the employee’s productivity and happiness. Most any hiring manager would say they wish they had more information about candidates, not less, before hiring. From the employee side, it’s an opportunity to practice the classic “elevator pitch,” in which the employee, sharing an elevator with the manager for the brief ride between floors, asks for a job or a raise in a short and sweet way. Infamously, the ‘extremely hardcore’ boss Elon Musk is known to do the reverse: he would randomly interrogate employees, and if they couldn’t explain why their job was needed, they’d be laid off.
Understandably, I can see why program directors don’t want to spend their time evaluating future residents. The vast majority of residents who scored highly on STEP exams are presumably likely to be at least average and do well. Beyond that, time spent evaluating candidates is time away from clinical medicine, and most program directors are paid separately for clinical productivity versus the academic work of educating and evaluating residents; hence the push for virtual interviews to stay, so that less time goes to interviewing and more to clinical practice. Asking for a longitudinal evaluation over a period longer than a month seems to be a non-starter for most programs, since medical schools are supposed to provide clerkships at a single site where students can be evaluated on their soft skills over an extended period.
TL;DR
Taking only above-average prospective residents for a brief audition rotation is more likely to result in below-average evaluations due to regression to the mean.
Is this bias accounted for?