I fail to understand why step matters so much....

I'm not certain I understand what you're asking.

I assume you're asking: what would the unintended consequences be if they changed the USMLE to have a raw score of 85% to pass? It would be similar to reporting a pass/fail score only. Fail would remain a negative as it is today. Pass would be uninterpretable other than knowing that you passed. Since most people would get 95-100% of the questions correct, there would be absolutely no discrimination at that level of performance. There would be a slight difference from just P/F, as those scoring 85-95% would likely be considered differently than those scoring >=95%. Perhaps students wouldn't bother studying very much for the exam - similar to concerns raised about S1 being P/F. Is that what you're getting at?

Again, not sure what you're asking. I'm saying that a USMLE score of 250 shows that you "know more as assessed on an MCQ test" than people with a 240, and those more than those with a 230. The NBME seems to think that I should just treat anyone with a score higher than passing the same? This makes no sense to me at all. Again, I completely agree that a higher score on the USMLE doesn't necessarily predict that someone will be a better doctor/resident. But to state that it doesn't represent anything seems incorrect.

Based on what? How certain are we that the SAT doesn't have ranges like this? And the MCAT has a smaller range because the overall score range is smaller. We can fix that with the USMLE if we want -- simply divide the score by 10 and report that. Round it to a whole number if you wish. Now, scores will range from 16-28, pass will be a 20, and inter-test variability will be 1.5. Does that make it better?

Who says that these predictive practice tests are actually reflective of the test? Honestly, I think this is the biggest scam of all. The NBME should not be in the business of selling practice exams for its own high-stakes exam. This is all sorts of wrong.
It’s always baffled me that people actually pretend students with higher board scores don’t know more than students with lower scores.

 
  • Like
Reactions: 4 users
It’s always baffled me that people actually pretend students with higher board scores don’t know more than students with lower scores.
That's not really what they are trying to say. The problem with comparing low and high board scores is that:

1) There is a range of error, so a 240 and a 250 are not that different from each other due to their error ranges overlapping, yet this span covers roughly the 35th-60th percentile, and most residency PDs will absolutely treat those two scores differently
2) Certain school curricula have inherent advantages (6-week dedicated period, board study rotations, etc.) versus schools where students only get a 2-week dedicated period
3) Boards do not test on other skills, such as history taking, communication skills, teamwork, etc. All of these are important for patient outcomes.
4) Boards are broad, not deep. Amassing a large amount of knowledge needed to do well on Step doesn't necessarily translate to having critical thinking and problem solving skills. A friend of mine was the top bioengineering student in his undergraduate class and got a 521 on the MCAT, yet his step scores are average because he doesn't do well with memorizing every little detail. But he wiped the floor with me on rotations because he is BRILLIANT.

I do agree that boards are more than just a score, and serve as a proxy for work ethic and ability to think and reason and learn information quickly. All of these are important skills, so I am not saying that boards are worthless. But I still strongly believe that there are a lot of issues with how much of a role they play in residency selection.
 
Last edited by a moderator:
  • Like
Reactions: 1 users
I agree Step shouldn’t be everything, but IMO it definitely has value and should be valued highly. Doctors need a lot of knowledge and tests are probably the best way to objectively assess that, even if they’re not perfect
 
Do you agree that the group of students entering this year's match cycle that have high numeric Step 1 scores (such as those who may have delayed a year for research or other reasons) will have an advantage over those students with Step 1 scores of "PASS"?
No. I don't think it will matter much, really. Programs that are focused on USMLE scores will just use Step 2.

Also, there's this theme here on SDN that somehow USMLE scores are the key factor in evaluating applicants. I doubt this is true for most programs. Some fields it likely will have a bigger impact. I expect that for most programs, they may have a score below which they don't invite people, a borderline range where they look at the rest of the application, and a high enough score where the USMLE is no longer a disqualifying feature and the decision to invite is based upon the rest of the application.
That's not really what they are trying to say. The problem with comparing low and high board scores is that:

1) There is a range of error, so a 240 and a 250 are not that different from each other due to their error ranges overlapping, yet this span covers roughly the 35th-60th percentile, and most residency PDs will absolutely treat those two scores differently
It is true that the standard error of measurement of the USMLE is around 6, so a 240 and 250 "overlap" if you +/- the SE. But on average, the person getting the 250 has a better performance than the person getting the 240. Although it's possible that their actual performance is equal and the person with the 250 just had a "good day" and the 240 had a "bad day", it's more likely that the 250 represents a better performance. Programs are willing to accept much less than a 95% certainty.
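To put a rough number on "more likely": here's a quick back-of-the-envelope sketch (mine, not the NBME's, and it assumes a simple normal error model with an SEM of 6 and a flat prior on true proficiency):

```python
# Rough sketch: given observed scores of 250 and 240 and an assumed SEM of 6,
# how likely is it that the 250 examinee's true proficiency is actually higher?
from math import erf, sqrt

SEM = 6.0                    # assumed standard error of measurement
observed_diff = 250 - 240    # observed score difference
sed = sqrt(2) * SEM          # standard error of the difference (~8.5 points)

# P(true difference > 0), treating the observed difference as normally
# distributed around the true difference (flat prior assumed).
p_truly_better = 0.5 * (1 + erf(observed_diff / (sed * sqrt(2))))
print(f"P(the 250 examinee truly outperformed the 240 examinee) ~ {p_truly_better:.2f}")
# ~0.88 -- short of 95% "certainty," but far better than a coin flip.
```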
2) Certain school curricula have inherent advantages (6-week dedicated period, board study rotations, etc.) versus schools where students only get a 2-week dedicated period
Certainly true, but there's 2-3 years to study for these exams. Theoretically, you're learning all the material all along.
3) Boards do not test on other skills, such as history taking, communication skills, teamwork, etc. All of these are important for patient outcomes.
Agreed. Presumably that's what clinical grades / performance are supposed to measure.
4) Boards are broad, not deep. Amassing a large amount of knowledge needed to do well on Step doesn't necessarily translate to having critical thinking and problem solving skills. A friend of mine was the top bioengineering student in his undergraduate class and got a 521 on the MCAT, yet his step scores are average because he doesn't do well with memorizing every little detail. But he wiped the floor with me on rotations because he is BRILLIANT.
Which is why USMLE should be part of application review.
I do agree that boards are more than just a score, and serve as a proxy for work ethic and ability to think and reason and learn information quickly. All of these are important skills, so I am not saying that boards are worthless. But I still strongly believe that there are a lot of issues with how much of a role they play in residency selection.
I think their importance is overstated here. Sure, if you get a 203 on S2, your chances of getting ortho are minimal. But for most applicants to most fields, a decent score is all you need. The step score insanity is driven mostly by student neuroticism, not reality.
 
  • Like
Reactions: 4 users
No. I don't think it will matter much, really. Programs that are focused on USMLE scores will just use Step 2.

Also, there's this theme here on SDN that somehow USMLE scores are the key factor in evaluating applicants. I doubt this is true for most programs. Some fields it likely will have a bigger impact. I expect that for most programs, they may have a score below which they don't invite people, a borderline range where they look at the rest of the application, and a high enough score where the USMLE is no longer a disqualifying feature and the decision to invite is based upon the rest of the application.

It is true that the standard error of measurement of the USMLE is around 6, so a 240 and 250 "overlap" if you +/- the SE. But on average, the person getting the 250 has a better performance than the person getting the 240. Although it's possible that their actual performance is equal and the person with the 250 just had a "good day" and the 240 had a "bad day", it's more likely that the 250 represents a better performance. Programs are willing to accept much less than a 95% certainty.

Certainly true, but there's 2-3 years to study for these exams. Theoretically, you're learning all the material all along.

Agreed. Presumably that's what clinical grades / performance are supposed to measure.

Which is why USMLE should be part of application review.

I think their importance is overstated here. Sure, if you get a 203 on S2, your chances of getting ortho are minimal. But for most applicants to most fields, a decent score is all you need. The step score insanity is driven mostly by student neuroticism, not reality.

I totally agree with what you said. It also seems like programs use a lot of other factors (IMG status, DO status, geographic status and preference signaling) to screen out applicants in lieu of/in addition to a usmle score cutoff.
 
That's not really what they are trying to say. The problem with comparing low and high board scores is that:

1) There is a range of error, so a 240 and a 250 are not that different from each other due to their error ranges overlapping, yet this span covers roughly the 35th-60th percentile, and most residency PDs will absolutely treat those two scores differently
2) Certain school curricula have inherent advantages (6-week dedicated period, board study rotations, etc.) versus schools where students only get a 2-week dedicated period
3) Boards do not test on other skills, such as history taking, communication skills, teamwork, etc. All of these are important for patient outcomes.
4) Boards are broad, not deep. Amassing a large amount of knowledge needed to do well on Step doesn't necessarily translate to having critical thinking and problem solving skills. A friend of mine was the top bioengineering student in his undergraduate class and got a 521 on the MCAT, yet his step scores are average because he doesn't do well with memorizing every little detail. But he wiped the floor with me on rotations because he is BRILLIANT.

I do agree that boards are more than just a score, and serve as a proxy for work ethic and ability to think and reason and learn information quickly. All of these are important skills, so I am not saying that boards are worthless. But I still strongly believe that there are a lot of issues with how much of a role they play in residency selection.
So what? Cutoffs exist because it’s too much work to sift through apps otherwise. That doesn’t change unless you limit the amount of apps. And as I discussed previously, there’s very little difference in the majority of apps besides scores.

A 32 and a 35 were treated very differently on the old MCAT, and that could literally be 3 questions. Similarly, a 508 and a 512 were treated differently when I applied, and that could be a difference of 4 questions. People repeat years in med school because they failed by a question or two. A line has to be drawn somewhere.

If we’re using personal anecdotes then allow me to convey my own. It’s very common for the med students who form the best ddx and tx plans on wards to also happen to have top quartile class rank and high board scores. Are there exceptions to the rule? Sure. But for the most part knowing more stuff is generally a preferred trait.
 
  • Like
Reactions: 3 users
Based on what? How certain are we that the SAT doesn't have ranges like this? And the MCAT has a smaller range because the overall score range is smaller. We can fix that with the USMLE if we want -- simply divide the score by 10 and report that. Round it to a whole number if you wish. Now, scores will range from 16-28, pass will be a 20, and inter-test variability will be 1.5. Does that make it better?
Where on earth am I saying that the absolute size of the score range is indicative of inter-test variability? Obviously the absolute range on an arbitrary scoring scale is irrelevant to inter-test variability. The USMLE very likely has higher % variability within the relevant scoring range.

The AAMC has done actual studies for the MCAT which predict +/- 1 on each section (~7% of the relevant range, buffered over four sections for smaller overall variability). That's actually quite tight and matches my anecdotal experience. When I took it, going from a 37 on practice tests to a 33 on the real deal was rare. Sure, some 38 hopefuls settled for 35s, and some people with 33 averages managed 35s, but people fell within the range you'd expect, especially in the more common 26-34 score range. You could safely bin test takers into at least 6-7 meaningful score ranges.

The NBME does not do these sorts of studies, so we can't say for sure how much scores vary. However, the inability of multiple companies, including the NBME, to reliably predict your score speaks volumes about the overall quality control. Also, anecdotes of wild score variations (vs. practice tests, preclinical grades, etc...) were far more rampant with the step exams. There were countless stories of people averaging in the 220s on NBMEs and UWorld SAs who wound up with a score in the 250s and vice-versa. A friend of mine scored a 250 on her last practice exam and got a 199 on the real deal. Step exams definitely are notoriously weird and variable. You can probably safely bin 99% of test takers into 3 groups, but further stratification would be meaningless.
A 32 and a 35 were treated very differently on the old MCAT, and that could literally be 3 questions. Similarly, a 508 and a 512 were treated differently when I applied, and that could be a difference of 4 questions. People repeat years in med school because they failed by a question or two. A line has to be drawn somewhere.
Eh, a 32 and a 35 were pretty different scores, and it definitely wasn't a 3 question difference. If people made big jumps, it was towards the top end of the test where there's more variability. Plenty of people went from 38 to 35 or from 36 to 39. In the 26-34 range, scores were pretty tight. Statistically all 39+ scores were indistinguishable, and 36-38 were pretty darn close, so you would see a lot of variability there. Step exams basically have the same issue near the top of their range. However, because they are designed to maximize predictability near 209 and not near 245, tons of test takers are near the top of the range and bouncing around wildly.
 
The AAMC has done actual studies for the MCAT which predict +/- 1 on each section (~7% of the relevant range, buffered over four sections for smaller overall variability)
The NBME does not do these sorts of studies, so we can't say for sure how much scores vary.
They estimate an SEM and a Standard Error of Estimate of 5 and 8 (respectively) around an examinee's true score for Step 2. With a relevant score range of 210 to 275 (2nd to 99th percentile), that is 8% and 12% of the relevant range (based on SEM and SEE; honestly not sure which is more applicable here). At worst, about 50% more variability.

You can probably safely bin 99% of test takers into 3 groups, but further stratification would be meaningless.
The statistically meaningful distinction between examinees begins at a difference of 16 points for Step 2. Three or four meaningful groups sounds about right.

Step 2 is worse than the MCAT for purposes of stratification and probably leaves a bit more to luck than most of us would be comfortable with. However, it is also a fine metric for roughly stratifying applicants (and far superior to P/F).

 
"The standard error of difference (SED) in scores is an index used to assess whether the difference between two scores is statistically meaningful. If the scores received by two examinees differ by two or more SEDs, it is likely that the examinees are different in their proficiency. Currently, the SED is approximately 9 points for Step 1, and 8 points for Step 2 CK, and Step 3."

And yet there's a 0% chance a PD isn't taking a 260 over a 245, or even a 250, or even a 255, despite what the NBME says.

When the SED covers almost FORTY percentile points it is an awful way to stratify people.

I am all for standardized testing. I would never have gotten into the school I attend without having the MCAT score I did. But Step 2 in the way it's currently designed just isn't doing the job. Having a 33% chance of scoring 19+ percentiles below or 19+ percentiles above your true score is just ridiculous.
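For concreteness, here is a tiny sketch of the 2-SED rule from that NBME passage (the 16-point threshold is just 2 x 8 for Step 2 CK; the example score pairs are mine):

```python
# Two scores are "statistically meaningfully different" per the quoted rule
# only if they differ by at least 2 SEDs (2 x 8 = 16 points for Step 2 CK).
def meaningfully_different(score_a: int, score_b: int, sed: float = 8.0) -> bool:
    return abs(score_a - score_b) >= 2 * sed

for a, b in [(260, 245), (260, 250), (260, 240)]:
    verdict = "meaningful" if meaningfully_different(a, b) else "not meaningful"
    print(f"{a} vs {b}: {abs(a - b)}-point difference -> {verdict}")
# 260 vs 245 and 260 vs 250 don't clear the 16-point bar; 260 vs 240 does.
```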
 
Last edited:
  • Like
Reactions: 3 users
And yet there's a 0% chance a PD isn't taking a 260 over a 245, or even a 250, or even a 255, despite what the NBME says.

It's pretty obvious that PDs don't just create rank lists in perfect synchrony with Step scores.
 
  • Like
Reactions: 1 user
They estimate an SEM and a Standard Error of Estimate of 5 and 8 (respectively) around an examinee's true score for Step 2. With a relevant score range of 210 to 275 (2nd to 99th percentile), that is 8% and 12% of the relevant range (based on SEM and SEE; honestly not sure which is more applicable here). At worst, about 50% more variability.
I had never seen those NBME statistics, so thank you for that! Pretty much agree with your comment, but I'm less okay with the high variability, given we tend to naturally group scores into decades (e.g., 240 vs. 250) while statistically we can really only rely on groups of 15-20 points or so.

I'll add that if MCAT variability is 7% per section, the total exam is around 4.5%, which is functionally in line with how we tend to interpret scores (e.g., 525 > 520 > 515 > 511 > 508 > 505 > 502, etc...). You could stratify more finely if you trust the stats, and that's only 50% of test takers. So step 2 is about 3x more variable, which is in line with my original statement that step 2 CK can likely stratify into ~3 meaningful performance categories (e.g., 260 > 240 > 220) vs. the MCAT which could stratify into 6-7 performance categories in the meaningful range for admissions and 10-12 if we stratified all test takers.
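If it helps, here's the quadrature behind that "7% per section -> ~4.5% overall" arithmetic as I understand it; the per-section and total "relevant ranges" below are my assumptions, not AAMC figures:

```python
# Four roughly independent section errors add in quadrature, so the combined
# standard error only doubles (sqrt(4)) while the relevant range roughly quadruples.
from math import sqrt

section_se = 1.0        # assumed +/- 1 point per section
section_range = 14.0    # assumed relevant per-section range (118-132)
total_range = 45.0      # assumed relevant range for the total score

total_se = sqrt(4 * section_se**2)                                    # ~2 points
print(f"Per-section variability: {section_se / section_range:.1%}")  # ~7%
print(f"Total-score variability: {total_se / total_range:.1%}")      # ~4.4%
```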

The much bigger problem is the motivation behind doing away with objective testing. You'd never tell someone trying to fairly stratify applicants to collect less data. If the goal was to move away from the variability of Step 1 or to move away from paper testing, the answer should be to add something, not take something away. Further, if your goal was to ease the stress on medical students, you wouldn't have them complete 2 years with no objective data on how competitive they are. So you have to wonder about the motivations.
 
  • Like
Reactions: 1 users
It's pretty obvious that PDs don't just create rank lists in perfect synchrony with Step scores.
I never said it's the only factor. But it is a very important factor, and I doubt you can show me data proving that a 245 doesn't have significantly worse results in the match than a 260, despite the scores not being statistically different according to the NBME

Again I am not arguing that we should do away with testing. I think it SHOULD be one of the most important factors as it is the only standardized metric we have. But that's not an excuse to improperly use a test beyond what it is designed to do.

And of course if I take step 2 next month and score wildly above my practice tests, I'll revise my opinion and say it's the best most accurate test ever and proves I'm a superior student lol...
 
Last edited:
  • Like
  • Haha
Reactions: 1 users
Straw man? I never said it's the only factor. But it is a very important factor, and I doubt you can show me data proving that a 245 doesn't have significantly worse results in the match than a 260

I don't want to misrepresent your statement. You said there is a zero percent chance a PD will take a lower score over a higher score and I am saying that's wrong. Other interpretations of your claim aren't obvious to me.

If you thought it was one of many important factors, I imagine you would have written that they are simply less likely to go with the lower scorer when all other factors are controlled for - which I would obviously agree with.
 
I don't want to misrepresent your statement. You said there is a zero percent chance a PD will take a lower score over a higher score and I am saying that's wrong. Other interpretations of your claim aren't obvious to me.

If you thought it was one of many important factors, I imagine you would have written that they are simply less likely to go with the lower scorer when all other factors are controlled for - which I would obviously agree with.
It's quite obvious my statement was hyperbolic, as it would be ridiculous to assume the rank list was just a rank of step scores. You're either being intentionally difficult, or we need a CARS section on step 2 to help people out with language interpretation...
 
Last edited:
  • Like
Reactions: 2 users
Where on earth am I saying that the absolute size of the score range is indicative of inter-test variability? Obviously the absolute range on an arbitrary scoring scale is irrelevant to inter-test variability. The USMLE very likely has higher % variability within the relevant scoring range.

The AAMC has done actual studies for the MCAT which predict +/- 1 on each section (~7% of the relevant range, buffered over four sections for smaller overall variability). That's actually quite tight and matches my anecdotal experience. When I took it, going from a 37 on practice tests to a 33 on the real deal was rare. Sure, some 38 hopefuls settled for 35s, and some people with 33 averages managed 35s, but people fell within the range you'd expect, especially in the more common 26-34 score range. You could safely bin test takers into at least 6-7 meaningful score ranges.

The NBME does not do these sorts of studies, so we can't say for sure how much scores vary. However, the inability of multiple companies, including the NBME, to reliably predict your score speaks volumes about the overall quality control. Also, anecdotes of wild score variations (vs. practice tests, preclinical grades, etc...) were far more rampant with the step exams. There were countless stories of people averaging in the 220s on NBMEs and UWorld SAs who wound up with a score in the 250s and vice-versa. A friend of mine scored a 250 on her last practice exam and got a 199 on the real deal. Step exams definitely are notoriously weird and variable. You can probably safely bin 99% of test takers into 3 groups, but further stratification would be meaningless.

Eh, a 32 and a 35 were pretty different scores, and it definitely wasn't a 3 question difference. If people made big jumps, it was towards the top end of the test where there's more variability. Plenty of people went from 38 to 35 or from 36 to 39. In the 26-34 range, scores were pretty tight. Statistically all 39+ scores were indistinguishable, and 36-38 were pretty darn close, so you would see a lot of variability there. Step exams basically have the same issue near the top of their range. However, because they are designed to maximize predictability near 209 and not near 245, tons of test takers are near the top of the range and bouncing around wildly.
If you're a question away in each section, it literally is 3 questions. Again, a line has to be drawn somewhere. Most schools still have a hard no-interview-below-X-score rule. One point too low? Hope one of the buildings is named after your dad, because otherwise it sucks to suck.

Also, do you have a source for this high score discrepancy on USMLE? Everyone I’ve encountered has performed very close to their final practice test scores except for the people who were google searching with FA open. I honestly thought step 1 was the most fairly written and straightforward test I’ve ever taken.
 
  • Like
Reactions: 3 users
If you’re a question away in each section, it literally is 3 questions
Yeah, but chances are very low. For most scores, there is a range of ~2-5 questions between each numerical score change (e.g., getting anywhere from 41-44 questions correct gives a 127 for a particular section). If we say it's 3 questions/numerical score, the probability of being one question away in at least 3/4 sections is 1/9. The probability of being 1 away 3 times and also not getting lucky and getting just above the cutoff for your other section is 4/81 (~5%). The chances of downgrading yourself stochastically are slim, and if you feel you were the statistical anomaly, you can retake it.
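As a sanity check on that 1/9 figure, here's the binomial arithmetic spelled out (the flat 1/3 per-section probability is the simplification implied above, not AAMC data):

```python
# If ~3 raw questions map to each scaled point, assume a 1/3 chance per section
# of sitting exactly one question below the next scaled score, independently
# across the 4 sections.
from math import comb

p = 1 / 3
n_sections = 4

def binom_pmf(k: int, n: int, prob: float) -> float:
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

p_at_least_3 = sum(binom_pmf(k, n_sections, p) for k in (3, 4))
print(f"P(one question short in >= 3 of 4 sections) = {p_at_least_3:.3f}")  # ~0.111, i.e. 1/9
```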
Also, do you have a source for this high score discrepancy on USMLE? Everyone I’ve encountered has performed very close to their final practice test scores except for the people who were google searching with FA open. I honestly thought step 1 was the most fairly written and straightforward test I’ve ever taken.
I mean we fleshed it out above in agonizing detail with the actual data provided by the NBME. You're welcome to take a look. Also, I haven't talked to a single person IRL who thought step 1 was "the most fairly written and straightforward test I've ever taken." Every time I've walked out of an NBME/USMLE exam, the universal consensus has been, "How do they manage to make these tests so... weird? They're so different from the practice tests and UWorld." I scored quite high on step 1 (near 260), and at no point did I think it was straightforward. I think sometimes people do get relatively straightforward forms for a step/shelf exam, but that variability in difficulty is part of the overall problem.
 
  • Like
Reactions: 1 user
Yeah, but chances are very low. For most scores, there is a range of ~2-5 questions between each numerical score change (e.g., getting anywhere from 41-44 questions correct gives a 127 for a particular section). If we say it's 3 questions/numerical score, the probability of being one question away in at least 3/4 sections is 1/9. The probability of being 1 away 3 times and also not getting lucky and getting just above the cutoff for your other section is 4/81 (~5%). The chances of downgrading yourself stochastically are slim, and if you feel you were the statistical anomaly, you can retake it.

I mean we fleshed it out above in agonizing detail with the actual data provided by the NBME. You're welcome to take a look. Also, I haven't talked to a single person IRL who thought step 1 was "the most fairly written and straightforward test I've ever taken." Every time I've walked out of an NBME/USMLE exam, the universal consensus has been, "How do they manage to make these tests so... weird? They're so different from the practice tests and UWorld." I scored quite high on step 1 (near 260), and at no point did I think it was straightforward. I think sometimes people do get relatively straightforward forms for a step/shelf exam, but that variability in difficulty is part of the overall problem.
I get your points about the MCAT and will agree that overall it is better for stratification. But cutoff scores still exist, same as with the Step exams.

I also scored high on Steps and know plenty who share your feelings and mine about these exams. I honestly thought Step 2 was weirder. But I scored extremely close to my practice scores on both, and so did everyone I know, so I guess I'm just surprised.
 
I get your points about the MCAT and will agree that overall it is better for stratification. But cutoff scores still exist, same as with the Step exams.

I also scored high on Steps and know plenty who share your feelings and mine about these exams. I honestly thought Step 2 was weirder. But I scored extremely close to my practice scores on both, and so did everyone I know, so I guess I'm just surprised.
I joked about it before, but people who score high are more likely to believe the test is good because it is validating (I haven't taken it yet, so I'm unbiased). But the NBME's own data show that if you theoretically had a true 260 and took the exam multiple times, a whopping 33% of your scores would fall outside a 252-268 window, which is ridiculous. A lot of people make BIG drops or gains from their practice tests; you're just unlikely to hear from someone who thought they were getting a 260 and ended up in the 240s.
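For what it's worth, that 33% figure is roughly what a simple normal retest model spits out; here's a sketch (the 8-point standard error is the assumption, matching the NBME numbers discussed above):

```python
# With a true score of 260 and a normal retest error of ~8 points, what fraction
# of repeat scores would land outside the 252-268 window?
from math import erf, sqrt

true_score, se = 260, 8.0
lo, hi = true_score - se, true_score + se

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p_outside = 1 - (normal_cdf(hi, true_score, se) - normal_cdf(lo, true_score, se))
print(f"P(score outside {lo:.0f}-{hi:.0f}) ~ {p_outside:.2f}")  # ~0.32
```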
 
  • Like
Reactions: 3 users
Yes, it's the new normal. Step 2 is the new Step 1.
It was the predictable outcome of making Step 1 P/F.
Now there is only one bite at the apple and it comes too late to change course if there is an unfortunate score.
That just shifts the pressure that led to Step 1 going P/F onto making Step 2 P/F in the future. And this is just the interim period.
 
  • Like
Reactions: 1 user
There are several things being discussed here, so I'll summarize my thoughts briefly:

1. Having standardized tests is important for PDs to have some objective metric to evaluate applications.

2. The USMLEs are not best suited for this purpose because the point of those exams is to test for minimum competency, not to serve as a screening tool. You need a separate, g-loaded exam with low standard errors that tests critical thinking, like the MCAT, but this is understandably hugely controversial.

3. The reason Step 1 went P/F has largely to do with the overapplication problem PDs are facing: Step 1 got overemphasized beyond its intended purpose, which led to a lot of frustration for everyone. Changing it to P/F while keeping Step 2 scored just shifts the mania onto one exam, with no backup, without resolving the underlying problems.

There are no easy answers. Capping applications will resolve many of the problems but raises questions of logistics and of who exactly serves as the centralized authority, because there are multiple stakeholders with competing interests.

But what I think is going to happen is that Step 2 will head toward becoming P/F as pressure mounts, and we'll be facing an uncomfortable reality of a no-exam world + ever-increasing overapplication.
 
  • Like
Reactions: 3 users
So what? Cutoffs exist because it’s too much work to sift through apps otherwise. That doesn’t change unless you limit the amount of apps. And as I discussed previously, there’s very little difference in the majority of apps besides scores.
There are no easy answers. Capping applications will resolve many of the problems but raises questions of logistics and of who exactly serves as the centralized authority, because there are multiple stakeholders with competing interests.
P/F step 1 will only exacerbate overapplication. Self-selection was a huge force keeping applications to competitive specialties down. A lot of less competitive ENT/ortho hopefuls will not switch focus during 3rd year, thinking they are still in the running. The result will be mass applications to the specialty of choice AND apps to a backup specialty.

The arguments against hard application caps have some validity. Some applicants really do need to apply to all 150 ortho programs to match. Some PDs really do need to fill their class with below average students who applied en masse to a backup specialty. It's a logistical nightmare.

I think the real answer is application/interview tiers on both sides. Applicants should be able to submit limited "preferred" applications and unlimited "general" applications. 5 would be a good starting number. I would also apply a cap to "preferred" interviews on the part of the programs. Preferred interview slots could be a choice of virtual vs. in person while the rest are done virtually. Ideally applicants would have ~3 "preferred" interviews and several more general interviews. If applicants strongly prefer a program after a general interview, they should also be allowed to switch. This allows a few scenarios to play out based on program competitiveness:

1) Top-tier programs: Eliminate anyone who did not submit as "preferred." Set up "preferred" interviews for most they plan to rank-to-match and don't waste much time with "general" interviews.

2) Mid-tier programs: Holistically review "preferred" apps, apply filters to those who submitted under "general." Set up "preferred" interviews for applicants they are most serious about, and interview some "general" applicants as a backup.

3) Lower-tier programs: Holistically review "preferred" apps and interview basically all of them. Filter "general" apps more aggressively (knowing the "preferred" apps are more likely to rank them highly), but keep plenty of "general" interviews in the pipeline as a backup.

Overall everyone filters less, interviews less, and focuses more on their top choices. We can also drop most of the BS surrounding "signaling" to a PD that you are interested. By giving them one of the 5 coveted "preferred" spots, you're sending the most honest signal possible. Personally, I wouldn't mind doing 2-4 in-person interviews to determine the place I'll spend the next 3-7 years. The problem with in-person interviews now is that you spend thousands on airfare for 10+ interviews and don't even know how serious the programs are about you.

Also, we need either a new stratification exam, or we need the NBME to tighten up its statistics and create a more reliable test. We also need them to be far more transparent about exam statistics to help PDs understand the true differences between scores.
 
  • Like
Reactions: 1 users
Ok, so here’s a scary thought. How about we make all steps P/F but then only make the score available to PDs? That way students will never know their scores and PDs get what they want. Win-win

Edit: just like schools that are P/F still release a class rank to PDs.
 
  • Haha
  • Hmm
Reactions: 2 users
Ok, so here’s a scary thought. How about we make all steps P/F but then only make the score available to PDs? That way students will never know their scores and PDs get what they want. Win-win

Edit: just like schools that are P/F still release a class rank to PDs.

This is what CASPER used to do and it was beyond annoying lol
 
Ok, so here’s a scary thought. How about we make all steps P/F but then only make the score available to PDs? That way students will never know their scores and PDs get what they want. Win-win

Edit: just like schools that are P/F still release a class rank to PDs.
What? Applicants are still told their class rank….
 
The arguments against hard application caps have some validity. Some applicants really do need to apply to all 150 ortho programs to match. Some PDs really do need to fill their class with below average students who applied en masse to a backup specialty.
Wait, why? Help me understand.

In a capped world, why would some applicants still be compelled to apply everywhere, as opposed to dividing their apps between ortho and a backup?

I'll admit I'm not entirely clear on what happens if programs go unfilled, though.

I agree it's a logistical nightmare + there's the issue of who is going to enforce the cap.
 
Ok, so here’s a scary thought. How about we make all steps P/F but then only make the score available to PDs? That way students will never know their scores and PDs get what they want. Win-win

Edit: just like schools that are P/F still release a class rank to PDs.
I think this will lead to the unintended consequence of schools with deep ties to programs having an informational advantage by relaying scores to their students
 
  • Like
Reactions: 1 user
If competitive residencies 100% fill anyway, I don't understand the argument for NOT capping apps. Maybe my logic is flawed here, but if you're 100% filling before capping and 100% filling after capping, is anyone really getting screwed over? All it's doing is removing the randomness/arbitrary screens and interview bloat, but the same number of people are matching at the end of the day. When everyone applies to every program, which is almost what's occurring in highly competitive specialties, the entire purpose of the application system is defunct. PDs are forced to guess who is most interested in actually going there, which may or may not align with what you are trying to portray, and that's worse for both parties.

In fact, now that I'm thinking about it, MORE people would match at programs they want to be at, because the randomness of a PD guessing whether you want to be there would be removed. If you have 600 people applying to 400 spots with 30 apps apiece, you would know with 100% certainty that all 400 who match would have matched within their top 30. However, if you have 600 people applying to 400 spots with 90 apps apiece, a good chunk of the 400 will be matching at their 30th-90th favorite program.
 
Last edited:
If competitive residencies 100% fill anyway, I don't understand the argument for NOT capping apps. Maybe my logic is flawed here, but if you're 100% filling before capping and 100% filling after capping, is anyone really getting screwed over? All it's doing is removing the randomness/arbitrary screens and interview bloat, but the same number of people are matching at the end of the day. When everyone applies to every program, which is almost what's occurring in highly competitive specialties, the entire purpose of the application system is defunct. PDs are forced to guess who is most interested in actually going there, which may or may not align with what you are trying to portray, and that's worse for both parties.

In fact, now that I'm thinking about it, MORE people would match at programs they want to be at, because the randomness of a PD guessing whether you want to be there would be removed. If you have 600 people applying to 400 spots with 30 apps apiece, you would know with 100% certainty that all 400 who match would have matched within their top 30. However, if you have 600 people applying to 400 spots with 90 apps apiece, a good chunk of the 400 will be matching at their 30th-90th favorite program.

This is highly specialty dependent. Some specialties (like neuro, psych, and gen surg) have 40-50% of applicants per program coming from IMGs, which is mainly what is driving up the numbers. Highly competitive specialties like Derm and Ortho have most applicants come from MD schools, applying to every single program to have a shot. I guess putting caps on application numbers could invite accusations of xenophobia or of being unfair to those with low Step scores. :shrug:
 
If I spent half as much time actually studying for Step 2 as I did being on this thread, I wouldn't have had to make this thread in the first place :smack:
 
  • Haha
  • Like
Reactions: 10 users
I think this will lead to the unintended consequence of schools with deep ties to programs having an informational advantage by relaying scores to their students
I was thinking of only making the scores available during the application cycle, when the score report is transmitted through ERAS.
 
Last edited:
Wait why? help me understand

In a capped world, why would some applicants be still compelled to apply everywhere as opposed to dividing their apps targeting ortho + backup?

I’ll admit i’m not entirely clear what happens if programs go unfilled though

I agree it’s a logistical nightmare + there’s the issue on who is going to enforce the cap.
If competitive residencies 100% fill anyway, I don't understand the argument for NOT capping apps. Maybe my logic is flawed here, but if you're 100% filling before capping and 100% filling after capping, is anyone really getting screwed over? All it's doing is removing the randomness/arbitrary screens and interview bloat, but the same number of people are matching at the end of the day. When everyone applies to every program, which is almost what's occurring in highly competitive specialties, the entire purpose of the application system is defunct. PDs are forced to guess who is most interested in actually going there, which may or may not align with what you are trying to portray, and that's worse for both parties.

In fact, now that I'm thinking about it, MORE people would match at programs they want to be at, because the randomness of a PD guessing whether you want to be there would be removed. If you have 600 people applying to 400 spots with 30 apps apiece, you would know with 100% certainty that all 400 who match would have matched within their top 30. However, if you have 600 people applying to 400 spots with 90 apps apiece, a good chunk of the 400 will be matching at their 30th-90th favorite program.
Hard capping applications would likely result in SOAPing for a much larger percentage of mediocre applicants, which I'd argue is a significantly worse outcome than overwhelmed PDs (for everyone involved).

Basically there's a whole swath of applicants, probably a majority of applicants, that have no red flags but also no standout features, and there's a whole swath of corresponding low-tier academic/community programs that look the same on paper. Within this group, no one really knows who's competitive. It's a touch-and-go dance of "who do we like?" or "where do I fit?". Within this group there are quite a few "ugly duckling" applicants and programs that just aren't as popular as the others for some subjective reason. These applicants fall way down their list for obtaining interviews. These programs fall way down their rank list. Through broad application they find each other.

Obviously P/F step 1 will make the above worse, as most applicants will now look effectively the same and PDs will have even fewer criteria to filter on.

This is why a tiered/preference system is ideal. You get a shot at guessing where you're competitive and signaling to PDs where you want to be, but you still get to cast a wide net in case you completely misjudge yourself. Top programs get an easy way to slash application burden, and bottom programs can significantly reduce burden while still casting a wide net to fill their class. If either party gets the process wrong and winds up as an ugly duckling, at least they still get to rectify things through the normal match on a reasonable timescale (where they'll retain some ability to choose their own destiny in a logical way vs. SOAPing into the first program they can find).
 
  • Like
Reactions: 2 users
Basically there's a whole swath of applicants, probably a majority of applicants, that have no red flags but also no standout features, and there's a whole swath of corresponding low-tier academic/community programs that look the same on paper. Within this group, no one really knows who's competitive. It's a touch-and-go dance of "who do we like?" or "where do I fit?". Within this group there are quite a few "ugly duckling" applicants and programs that just aren't as popular as the others for some subjective reason. These applicants fall way down their list for obtaining interviews. These programs fall way down their rank list. Through broad application they find each other.

Obviously P/F step 1 will make the above worse, as most applicants will now look effectively the same and PDs will have even fewer criteria to filter on.
This is exactly the problem with application caps. They rely on people accurately judging their own competitiveness. Once you're dealing with,

"DO, 225 Step 2 CK, 1 case report, non-descript LOR" vs.
"Low-tier MD, 231 Step 2 CK, no research, non-descript LOR" vs.
"DO, 238 Step 2 CK, a bunch of abstracts, non-descript LOR" vs.
"Low-tier MD, 215 Step 2 CK, 2 papers (1 FA), non-descript LOR" vs.
"Mid-tier MD, 222 Step 2 CK, a few local abstracts, non-descript LOR"

then it's like... who knows? You could throw a mid-tier MD with a 240 and some research into that mix and I'm still not sure I'd immediately jump on them over the applicants listed if I were a PD. At that point you want to just see if they're interested and if they vibe with the program.
In fact, now that I'm thinking about it, MORE people would match at programs they want to be at, because the randomness of a PD guessing whether you want to be there would be removed. If you have 600 people applying to 400 spots with 30 apps apiece, you would know with 100% certainty that all 400 who match would have matched within their top 30. However, if you have 600 people applying to 400 spots with 90 apps apiece, a good chunk of the 400 will be matching at their 30th-90th favorite program.
And this is exactly the problem with allowing people to apply super broadly. PDs are essentially guessing who's serious and dictating who goes where arbitrarily.

I like the tiered/"preferred application" idea, but I'd increase the number to 10. It basically creates a 10 application limit with a built-in SOAP that isn't SOAP. Some programs will be able to completely ignore anyone who doesn't list them as "preferred." Others will get to substantially cut down on the apps they review.

My real ideal though is to get rid of the match altogether and just make October-June hiring season. Let's see what residents are worth when they can bargain for a salary.
 
  • Like
Reactions: 1 user
My real ideal though is to get rid of the match altogether and just make October-June hiring season. Let's see what residents are worth when they can bargain for a salary.

This is how the medical school admissions process works, except with applicants bargaining for scholarships instead of salaries. It's way worse than the current match system.
 
  • Like
Reactions: 1 users
This is how the medical school admissions process works, except with applicants bargaining for scholarships instead of salaries. It's way worse than the current match system.
It's way worse than the match because there are very few scholarships available for medical schools and med schools have all the power (like 0.5-3% acceptance rates across the board). If medical school admissions were a match process, tuition would be $100K/year and no one would get a scholarship.
 
  • Like
Reactions: 1 user
My real ideal though is to get rid of the match altogether and just make October-June hiring season. Let's see what residents are worth when they can bargain for a salary.

In this ideal, how do you propose preventing the issues that could arise from this, akin to those that occurred before the Match system and led to the creation of the Match in the first place?
 
  • Like
Reactions: 1 users
It's way worse than the match because there are very few scholarships available for medical schools and med schools have all the power (like 0.5-3% acceptance rates across the board). If medical school admissions were a match process, tuition would be $100K/year and no one would get a scholarship.

Nah man, the current match system is way better because everything is done by March and you get 3 months to get your life and sanity together before starting residency.
 
  • Like
Reactions: 3 users
My real ideal though is to get rid of the match altogether and just make October-June hiring season. Let's see what residents are worth when they can bargain for a salary.
This is how it was before the match and it was way worse than the match.
 
  • Like
Reactions: 6 users
My real ideal though is to get rid of the match altogether and just make October-June hiring season. Let's see what residents are worth when they can bargain for a salary.
Yeah this isn’t a new idea, it’s a regression back to how it used to be. It is quite literally the reason we have the Match, because of how bad it was.

Imagine the program that would be last on your rank list offering you a contract on day 1 and giving you 7 days to say yes or no. You haven't heard from any of the other programs you applied to yet; do you take the offer? If you think applicants are getting screwed now…..
 
  • Like
  • Haha
Reactions: 8 users
Yeah this isn’t a new idea, it’s a regression back to how it used to be. It is quite literally the reason we have the Match, because of how bad it was.

Imagine the program that would be last on your rank list offering you a contract on day 1 and giving you 7 days to say yes or no. You haven't heard from any of the other programs you applied to yet; do you take the offer? If you think applicants are getting screwed now…..
You left out the part where the PDs got together and divided up the spoils.
 
  • Like
  • Wow
  • Haha
Reactions: 6 users
Yeah this isn’t a new idea, it’s a regression back to how it used to be. It is quite literally the reason we have the Match, because of how bad it was.

Imagine the program that would be last on your rank list offering you a contract on day 1 and giving you 7 days to say yes or no. You haven't heard from any of the other programs you applied to yet; do you take the offer? If you think applicants are getting screwed now…..
This is how all other job searches work, and it is definitively in the applicant's favor. At a certain point, you have to accept a little bit of risk and independence in your career. The inability to negotiate offers suppresses salaries.

You're catastrophizing. You're imagining the bad things that could happen without acknowledging the upside. Doctors are prone to this as it's the profession that will most attract this personality type (extremely risk averse). What happens is that you say NO to that offer because it's day 1 and they're being hostile. Any PD who cares about the quality of their class and takes that approach will realize they can't get the applicants they want that way. Eventually they'll realize that to get the residents they want, they need to give flexibility and higher salaries. As a profession we'll stop putting so much weight on prestige, because we'll know some people took a little drop for some extra pay, just like how we don't assume someone is stupid because they went to their state school on a fat scholarship. That alone would be a wildly healthy development.

Literally all other elite recruiting processes manage to do this without a match. The applicants absorb a few months of stress/risk/negotiation and come out with livable wages. Every year consulting firms fight for the best MBA students, and firms pay new consultants $175-250K with principals making $350K+. Big law firms fight over top law students, and they're all getting dragged into offering Cravath Scale compensation ($235-530K for PGY1-8) to recruit who they want. VC firms fight over top MBA and PhD students and start associates at $150K+ with principals making $400K+. They fight for good applicants by throwing large sums of money at those applicants. MDs are the only professionals who start their career after a 4 year doctorate getting paid less per hour than a circus clown.

Defending this system because you're afraid negotiating would be hard is laughable. The bean counters in the hospitals are certainly laughing.
 
  • Like
Reactions: 1 user
This is how all other job searches work, and it is definitively in the applicant's favor. At a certain point, you have to accept a little bit of risk and independence in your career. The inability to negotiate offers suppresses salaries.

You're catastrophizing. You're imagining the bad things that could happen without acknowledging the upside. Doctors are prone to this as it's the profession that will most attract this personality type (extremely risk averse). What happens is that you say NO to that offer because it's day 1 and they're being hostile. Any PD who cares about the quality of their class and takes that approach will realize they can't get the applicants they want that way. Eventually they'll realize that to get the residents they want, they need to give flexibility and higher salaries. As a profession we'll stop putting so much weight on prestige, because we'll know some people took a little drop for some extra pay, just like how we don't assume someone is stupid because they went to their state school on a fat scholarship. That alone would be a wildly healthy development.

Literally all other elite recruiting processes manage to do this without a match. The applicants absorb a few months of stress/risk/negotiation and come out with livable wages. Every year consulting firms fight for the best MBA students, and firms pay new consultants $175-250K with principals making $350K+. Big law firms fight over top law students, and they're all getting dragged into offering Cravath Scale compensation ($235-530K for PGY1-8) to recruit who they want. VC firms fight over top MBA and PhD students and start associates at $150K+ with principals making $400K+. They fight for good applicants by throwing large sums of money at those applicants. MDs are the only professionals who start their career after a 4 year doctorate getting paid less per hour than a circus clown.

Defending this system because you're afraid negotiating would be hard is laughable. The bean counters in the hospitals are certainly laughing.
No one is imagining anything. There are people in medicine right now who went through residency selection before the match and they all universally say it was worse.

Medicine is not like other fields, because if you do not complete a residency, you are basically unemployable.

The match was designed to protect applicants, and it does a fairly good job of it.
 
  • Like
  • Love
Reactions: 5 users
You're catastrophizing. You're imagining the bad things that could happen without acknowledging the upside. Doctors are prone to this as it's the profession that will most attract this personality type (extremely risk averse). What happens is that you say NO to that offer because it's day 1 and they're being hostile. Any PD who cares about the quality of their class and takes that approach will realize they can't get the applicants they want that way. Eventually they'll realize that to get the residents they want, they need to give flexibility and higher salaries. As a profession we'll stop putting so much weight on prestige, because we'll know some people took a little drop for some extra pay, just like how we don't assume someone is stupid because they went to their state school on a fat scholarship. That alone would be a wildly healthy development.

Literally all other elite recruiting processes manage to do this without a match. The applicants absorb a few months of stress/risk/negotiation and come out with livable wages. Every year consulting firms fight for the best MBA students, and firms pay new consultants $175-250K with principals making $350K+. Big law firms fight over top law students, and they're all getting dragged into offering Cravath Scale compensation ($235-530K for PGY1-8) to recruit who they want. VC firms fight over top MBA and PhD students and start associates at $150K+ with principals making $400K+. They fight for good applicants by throwing large sums of money at those applicants. MDs are the only professionals who start their career after a 4 year doctorate getting paid less per hour than a circus clown.

Defending this system because you're afraid negotiating would be hard is laughable. The bean counters in the hospitals are certainly laughing.
It’s not catastrophizing, it’s literally what has already happened. You saying that just shows a lack of understanding of the history of medical training.

In other fields your biggest bargaining chip is the power to walk away for a better opportunity. Guess what happens when you walk away in your scenario… you are unemployable, and the program just moves on to the next of the literal thousands of people lining up for a US residency position. The match is actually designed to protect applicants.
 
  • Like
Reactions: 8 users
I joked about it before, but people who score high are more likely to believe the test is good because it is validating (I haven't taken it yet, so I'm unbiased). But the NBME's own data show that if you theoretically had a true 260 and took the exam multiple times, a whopping 33% of your scores would fall outside a 252-268 window, which is ridiculous. A lot of people make BIG drops or gains from their practice tests; you're just unlikely to hear from someone who thought they were getting a 260 and ended up in the 240s.
Fascinating. UWSA2 was 4 points off for Step 1 and 0 points off for Step 2 in my case.

I don’t think they’re good tests because I did well. I just thought they were good because the info being tested and my scores in this very highly variable range were very predictable.

It seems I’ve fallen victim to anecdata though.

I'd actually be in favor of a different test to better stratify applicants, because something has to do it. I just wonder if it would really change anything, considering how much these scores are correlated with studying. And it's not like they could change the material, aside from getting rid of stupid stuff like memorizing the names of genes or which chromosome is associated with which disorder. The NBME's attempt to add more relevant questions has resulted in pointless ethics questions.
 
This is how all other job searches work, and it is definitively in the applicant's favor. At a certain point, you have to accept a little bit of risk and independence in your career. The inability to negotiate offers suppresses salaries.

You're catastrophizing. You're imagining the bad things that could happen without acknowledging the upside. Doctors are prone to this as it's the profession that will most attract this personality type (extremely risk averse). What happens is that you say NO to that offer because it's day 1 and they're being hostile. Any PD who cares about the quality of their class and takes that approach will realize they can't get the applicants they want that way. Eventually they'll realize that to get the residents they want, they need to give flexibility and higher salaries. As a profession we'll stop putting so much weight on prestige, because we'll know some people took a little drop for some extra pay, just like how we don't assume someone is stupid because they went to their state school on a fat scholarship. That alone would be a wildly healthy development.

Literally all other elite recruiting processes manage to do this without a match. The applicants absorb a few months of stress/risk/negotiation and come out with livable wages. Every year consulting firms fight for the best MBA students, and firms pay new consultants $175-250K with principals making $350K+. Big law firms fight over top law students, and they're all getting dragged into offering Cravath Scale compensation ($235-530K for PGY1-8) to recruit who they want. VC firms fight over top MBA and PhD students and start associates at $150K+ with principals making $400K+. They fight for good applicants by throwing large sums of money at those applicants. MDs are the only professionals who start their career after a 4 year doctorate getting paid less per hour than a circus clown.

Defending this system because you're afraid negotiating would be hard is laughable. The bean counters in the hospitals are certainly laughing.
While I agree that the Match does eliminate negotiating power for the best applicants, it's a massive benefit to the rest of them. As has been rehashed a couple of times already, ~80% of med students are honestly pretty interchangeable. Look how many "top" EM programs just plugged in some warm bodies from the SOAP this past cycle. That's all they need at the end of the day.

Really think about it. We’re not employable without residency. The powers that be could pass a bill tomorrow that says we have to pay our residency programs for the training they provide and we’d honestly have no options aside from sucking it up or leaving. And no one would care because they’d happily replace us with another warm body. We don’t have real negotiating power.
 
Without the Match, applicants have no leverage. Many ortho applicants would pay for the chance to do ortho. Look at dentistry, where people literally pay to do fellowships. Penn State, for example, charges $60k a year for OMFS, and I'm sure they fill every year.
 
Without the Match, applicants have no leverage. Many ortho applicants would pay for the chance to do ortho. Look at dentistry, where people literally pay to do fellowships. Penn State, for example, charges $60k a year for OMFS, and I'm sure they fill every year.

Isn't dental school already way more expensive than medical school? Are these people graduating with $600k+ in debt or something? If I'm not mistaken, OMFS clinics are mostly for-profit, so they won't qualify for PSLF either.
 
Literally all other elite recruiting processes manage to do this without a match. The applicants absorb a few months of stress/risk/negotiation and come out with livable wages. Every year consulting firms fight for the best MBA students, and firms pay new consultants $175-250K with principals making $350K+. Big law firms fight over top law students, and they're all getting dragged into offering Cravath Scale compensation ($235-530K for PGY1-8) to recruit who they want. VC firms fight over top MBA and PhD students and start associates at $150K+ with principals making $400K+. They fight for good applicants by throwing large sums of money at those applicants. MDs are the only professionals who start their career after a 4 year doctorate getting paid less per hour than a circus clown.
The thing is, MBA and JD grads from top programs can start making money for their employers (and therefore themselves) almost immediately. Residency training is basically an apprenticeship, and until you're BE/BC and can bill for your services, you aren't worth a whole lot in the grand scheme of things. Whatever you get paid as a resident, there are literally thousands of IMGs and FMGs who would do the same work for less money. That's not leverage.

Also remember that the large majority of GME in this country is federally funded through CMS to the tune of over $16 billion a year. That's the cost of getting new physicians to the point of independent practice. Residency salaries aren't glamorous, but they'll provide a roof and corn flakes, and as a trainee you don't have to worry about malpractice, life insurance, or disability coverage, and your health insurance is typically well subsidized. I strongly doubt there is a better deal for all residents and fellows hiding behind door #2.
 
This is how all other job searches work, and it is definitively in the applicant's favor. At a certain point, you have to accept a little bit of risk and independence in your career. The inability to negotiate offers suppresses salaries.

You're catastrophizing. You're imagining the bad things that could happen without acknowledging the upside. Doctors are prone to this as it's the profession that will most attract this personality type (extremely risk averse). What happens is that you say NO to that offer because it's day 1 and they're being hostile. Any PD who cares about the quality of their class and takes that approach will realize they can't get the applicants they want that way. Eventually they'll realize that to get the residents they want, they need to give flexibility and higher salaries. As a profession we'll stop putting so much weight on prestige, because we'll know some people took a little drop for some extra pay, just like how we don't assume someone is stupid because they went to their state school on a fat scholarship. That alone would be a wildly healthy development.

Literally all other elite recruiting processes manage to do this without a match. The applicants absorb a few months of stress/risk/negotiation and come out with livable wages. Every year consulting firms fight for the best MBA students, and firms pay new consultants $175-250K with principals making $350K+. Big law firms fight over top law students, and they're all getting dragged into offering Cravath Scale compensation ($235-530K for PGY1-8) to recruit who they want. VC firms fight over top MBA and PhD students and start associates at $150K+ with principals making $400K+. They fight for good applicants by throwing large sums of money at those applicants. MDs are the only professionals who start their career after a 4 year doctorate getting paid less per hour than a circus clown.

Defending this system because you're afraid negotiating would be hard is laughable. The bean counters in the hospitals are certainly laughing.

Graduating medical students require at least one year of residency in order to become licensed in most states, and you cannot bill for services without a license. All graduates need residency to become trained in a specialty, and most institutions will not hire someone with only a single year of residency. It's fine to say The Match is a suboptimal system you want changed, but to pretend it's a free-market situation in which prospective trainees would have the upper hand and be able to negotiate larger salaries ignores both the history of why The Match was created and the need for residency training programs. If you're going to advocate for a different system, I would expect you to have better solutions formulated for the potential pitfalls. A "top" training program could easily solicit wealthy applicants to pay a fee to train there, and many would do so. There are already scams and schemes aplenty promising to get applicants into medical school, and it would not surprise me in the least if some hospitals ultimately charged a fee for the privilege of training there.

Students have essentially no leverage without The Match. It has been tweaked over the years; I graduated med school in The Scramble era, and watching a few friends go through that seemed like a throwback to the old days described in the article below: they took the first job they were offered, which was usually the first place they were able to get through the phone line without a busy signal. The SOAP is imperfect but seems a sight better than The Scramble.

In addition, watch how quickly large institutions would just hire a bunch of mid-levels who could bill for the work if The Match disappeared and the expected chaos ensued. I know everyone on SDN thinks that trainees are indispensable to most institutions, but changes in healthcare law in many states and scope creep have totally changed the paradigm in recent years.

This article is very short but gives some background on why The Match was created: The Origins, History, and Design of the Resident Match
 