I fail to understand why Step matters so much...


I say this in comparison to how much the MCAT matters in getting into medical school. Although both exams are important, people act as though the USMLE is the end-all, be-all of getting into residency, whereas the MCAT is "one of many factors that matter". It just doesn't make sense, especially when you consider that programs get 500-600 applications compared to medical schools which get 10,000+ applications. This also doesn't make sense when you compare what med students and residents do. Medical school is still school, so test taking matters. But residency is a JOB, so I still don't get why a test matters so much.

 
You take tests in residency too.
 
  • Like
Reactions: 8 users
It just doesn't make sense, especially when you consider that programs get 500-600 applications compared to medical schools which get 10,000+ applications.
Residencies also have like 5-20x smaller class sizes.
 
  • Like
Reactions: 4 users
The USMLE Steps aren’t meant to determine who will be a better doctor. They're really a tool to help determine risk for residency programs. Someone who barely passes the Step exams is at higher risk of failing their specialty boards, and also at higher risk of failing out of residency for other issues.

Now, the exams only go so far with that utility. There probably isn’t any difference in risk when taking someone who scores a 260 over a 240 on Step 2. The benefit lies in taking the 260 over the 215. But people fail out of residency with 260s, and people with 215s go on to be amazing doctors. Again, it’s a tool to estimate risk, so the score helps paint a picture for a residency program of how hard they will need to work to make sure you succeed as a resident.

Why is it the number 1 thing residencies use to stratify applicants? Mostly because the other parts of the application don’t stratify that risk as well. Amount of research, volunteer experiences, subjective clerkship evaluations, and letters of rec don’t carry the same objective weight. But they certainly help, which is why they are also important.
 
  • Like
Reactions: 11 users
I've always felt that part of the reason is generating rank lists. When applying an algorithm to rank applicants, data that is 1) objective and 2) offers more discrete points is helpful in establishing a rank list that is somewhat justifiable if placed under scrutiny.

For example, if the applicant pool's Step 1 scores range from 200-240 for a relatively non-competitive specialty, this portion of the ranking algorithm has 41 discrete points. Conversely, something like the number of volunteer experiences might only range from 1-10, offering far fewer points to differentiate applicants. Additionally, those data points are much more subjective, since volunteer activities can vary greatly. On the other hand, a 5-point difference in a Step 1 score offers a more objectively measurable difference between candidates.

The TLDR is that I think at least one reason for the emphasis is that there are so many applications for so few spots.
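For what it's worth, the "algorithmic" part of the argument above can be sketched in a few lines. Everything here is invented for illustration (the field names, weights, and scores are hypothetical, not anything a real program publishes): a field with many discrete values, like a Step score, separates applicants far more finely than a coarse count like number of volunteer activities.

```python
# Hypothetical rank-list sketch. WEIGHTS and the applicant fields are
# made up for illustration; real programs weigh many more factors.
WEIGHTS = {"step1": 1.0, "volunteer_count": 0.5}

def rank_score(applicant):
    """Weighted sum over whichever objective fields a program chooses."""
    return sum(w * applicant.get(field, 0) for field, w in WEIGHTS.items())

applicants = [
    {"name": "A", "step1": 238, "volunteer_count": 3},
    {"name": "B", "step1": 224, "volunteer_count": 9},
]
# A 14-point Step gap dominates a 6-activity volunteer gap here,
# because the Step axis simply has more discrete points to spread over.
ranked = sorted(applicants, key=rank_score, reverse=True)
print([a["name"] for a in ranked])  # -> ['A', 'B']
```

The design point is simply that an axis with 41 possible values produces far fewer ties than one with 10, which is why it ends up carrying the ranking.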
 
The biggest reason IMO is that for residency there really are fewer and fewer objective measures of applicants' ability. With schools becoming P/F, and a progressive push to make every single comment on every single student's MSPE be super enthusiastic to help ensure a great match list for the school, the program director is left with little useful data to decide who to bring for an interview, let alone how to rank them. Step 2 is really the only normalized number that can compare students from across all schools, and then you get into squishy criteria like med school prestige, quality of research, etc. Ideally yes, a school would just give an honest assessment of an applicant's ability, but that just is not happening.

Compare to the med school application process. An undergrad GPA is certainly variable depending on which school you went to, but it is still an objective measure. You don't get a 3.7+ without excelling in your studies. Similarly you can add up the number of EC hours and tell who has more clinical experience, volunteering, research, etc. There are real ways to differentiate med school applicants aside from the MCAT.
 
  • Like
Reactions: 8 users
The biggest reason IMO is that for residency there really are fewer and fewer objective measures of applicants' ability. With schools becoming P/F, and a progressive push to make every single comment on every single student's MSPE be super enthusiastic to help ensure a great match list for the school, the program director is left with little useful data to decide who to bring for an interview, let alone how to rank them. Step 2 is really the only normalized number that can compare students from across all schools, and then you get into squishy criteria like med school prestige, quality of research, etc. Ideally yes, a school would just give an honest assessment of an applicant's ability, but that just is not happening.

Compare to the med school application process. An undergrad GPA is certainly variable depending on which school you went to, but it is still an objective measure. You don't get a 3.7+ without excelling in your studies. Similarly you can add up the number of EC hours and tell who has more clinical experience, volunteering, research, etc. There are real ways to differentiate med school applicants aside from the MCAT.
It'll be really fun to see what P/F on Step 1 does to residency applications. I suspect that for most specialties it won't alter much. But for the ultra competitive ones, PDs no longer even have a screen to pare down hundreds of applicants to something manageable. PDs are human - and most are lazy - so they're not going to whittle down applicants on a holistic basis by reading all applications in their entirety and judging them against all other applicants also evaluated in their entirety. There will be some sort of entry barrier to weed out a set number of applicants. In my era, most residency applicants didn't have Step 2 scores when they applied, unless that's the new normal.

Personally, and I've said as much before on here, I think it's going to start to matter a whole lot more what medical school you go to. As we've already stated, PDs are going to be inundated with applicants they can't sort through efficiently, and they're certainly not going to spend entire weeks deciding who to interview. What they will do, however, is look at what they have in their own backyard and just fill the spots with that. It of course poses an interesting dilemma with the NRMP's all-in policy, which says that you either have all spots in the match, or none at all. That of course is irrelevant for competitive specialties, as they'll always fill, match or no match.
 
  • Like
Reactions: 1 users
In my era, most residency applicants didn't have Step 2 scores when they applied, unless that's the new normal.
Yes, it's the new normal. Step 2 is the new Step 1.
It was the predictable outcome of making Step 1 P/F.
Now there is only one bite at the apple and it comes too late to change course if there is an unfortunate score.
 
  • Like
Reactions: 13 users
Yes, it's the new normal. Step 2 is the new Step 1.
It was the predictable outcome of making Step 1 P/F.
Now there is only one bite at the apple and it comes too late to change course if there is an unfortunate score.
So basically a high-stakes test was replaced by an even higher-stakes test, for which there isn't even a chance of showing programs you've improved upon your previous mistakes. Wasn't the Step 1 change to P/F supposed to make things less stressful for med students :unsure:
 
  • Like
Reactions: 1 users
Wasn't the Step 1 change to P/F supposed to make things less stressful for med students :unsure:
That was the cover story.
It was actually designed to reduce stress for Student Affairs Deans.
They kicked the can into the years when students are supposed to be focused on learning actual clinical skills.
Brilliant.
 
  • Like
  • Haha
Reactions: 10 users
I am sympathetic to the arguments for P/F Step 1 - S1 content had clearly strayed far from clinical relevance, likely from attempts to combat score hyperinflation driven by the emergence of 3rd party materials. However, I think proponents of P/F S1 failed to understand what I believe to be the underlying issue, which is over-application for residency positions. As long as programs receive hundreds of applications per spot, PDs will continue to need metrics by which they can easily filter and stratify candidates.
Was S1 a bad exam? I would say so. But I believe that without addressing the underlying issue of over-application with application caps, we have simply kicked the can down the road to Step 2. In some ways, I feel fortunate to be among the first class to take S2 in the P/F S1 era - in the future, I can see improved S2 resources, score creep, and a resulting increase in exam difficulty to combat runaway scores.
 
  • Like
Reactions: 2 users
Was S1 a bad exam? I would say so.
Step 1 is a good exam for what it was designed to test.
Step 2 might be a slightly better test (for clinically relevant info), but its timing makes it problematic for medical students trying to put together a realistic application.
 
  • Like
Reactions: 18 users
That will happen (more) when Step 2 becomes P/F.

In that case, the USMLE would essentially get replaced by the MCAT since the MCAT determines what tier of med school you go to.
 
  • Like
Reactions: 1 users
In that case, the USMLE would essentially get replaced by the MCAT since the MCAT determines what tier of med school you go to.
To some degree, you are correct.
However, some exceptional students go to (perfectly ok) public schools in their own state.
 
  • Like
  • Haha
Reactions: 4 users
To some degree, you are correct.
However, some exceptional students go to (perfectly ok) public schools in their own state.
But I think to the larger point, you won't be able to find those exceptional students after they go to the public schools because there will be no means by which to objectively stratify them from the rest of the (perfectly ok but not exceptional) students.
 
  • Like
Reactions: 5 users
It should also be noted that pass/fail isn’t unique to Step. Devaluation of standardized testing is pretty commonplace. These days most Ivy League undergrads don’t even require the SAT or ACT.
 
PDs are human - and most are lazy
It's completely impractical for a program to sort through 1000's of applications. Medical schools get similar numbers of apps - but have whole teams of people whose job it is to review applications and get paid to do so. Programs often have just a few people. It's not just that PD's are too lazy to do this -- it's that it's essentially impossible with the resources available.

If S2 were to go P/F, I expect each specialty would end up hosting their own exam. One could argue that would be a "better" solution than using the steps. Almost all specialties have an in training exam - that's what would probably be used.
 
  • Like
  • Love
Reactions: 7 users
If S2 were to go P/F, I expect each specialty would end up hosting their own exam. One could argue that would be a "better" solution than using the steps. Almost all specialties have an in training exam - that's what would probably be used.
And it would just be one more exam to worry about. I don’t anticipate this going well if it were to happen.
 
  • Like
Reactions: 1 user
It's completely impractical for a program to sort through 1000's of applications. Medical schools get similar numbers of apps - but have whole teams of people whose job it is to review applications and get paid to do so. Programs often have just a few people. It's not just that PD's are too lazy to do this -- it's that it's essentially impossible with the resources available.

If S2 were to go P/F, I expect each specialty would end up hosting their own exam. One could argue that would be a "better" solution than using the steps. Almost all specialties have an in training exam - that's what would probably be used.

Orthopods will be happy they don’t have to learn EKGs…
 
It's completely impractical for a program to sort through 1000's of applications. Medical schools get similar numbers of apps - but have whole teams of people whose job it is to review applications and get paid to do so. Programs often have just a few people. It's not just that PD's are too lazy to do this -- it's that it's essentially impossible with the resources available.

If S2 were to go P/F, I expect each specialty would end up hosting their own exam. One could argue that would be a "better" solution than using the steps. Almost all specialties have an in training exam - that's what would probably be used.
What's more likely to happen if both Steps become P/F is that PDs are just going to pluck the most promising candidates from their institution's medical school class. They would only have to look outside their institution when they have limited in-house prospects. It's the most logical, efficient, and low-effort way to fill your incoming residency pool, with the added benefit of knowing what you're getting, especially if they've already rotated in the department and were liked.
 
I say this in comparison to how much the MCAT matters in getting into medical school. Although both exams are important, people act as though the USMLE is the end-all, be-all of getting into residency, whereas the MCAT is "one of many factors that matter". It just doesn't make sense, especially when you consider that programs get 500-600 applications compared to medical schools which get 10,000+ applications. This also doesn't make sense when you compare what med students and residents do. Medical school is still school, so test taking matters. But residency is a JOB, so I still don't get why a test matters so much.
Residency programs would like to know that you have an adequate base level of medical knowledge, so you are able to continue adding to your knowledge in the diagnosis and treatment realms of medicine, and can safely care for patients as a resident physician.
I certainly want doctors that take care of me to know more than the bare minimum and to keep learning throughout their careers.
 
  • Like
  • Love
Reactions: 2 users
And it would just be one more exam to worry about. I don’t anticipate this going well if it were to happen.

Although they may be better suited for stratifying candidates than the Step exams, which have a higher standard deviation (15, IIRC) than most of us acknowledge.

Edit: Standard error of difference is about 8 points for Step 2 and SEM for individual test takers is 5 points.
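Taking the SEM figure quoted in the post above (~5 points per individual Step 2 score) at face value, a quick back-of-envelope sketch shows why small score gaps are statistical noise; the implied standard error of the difference between two test-takers comes out near the ~8 cited:

```python
import math

# Assumed from the post above: SEM of an individual Step 2 score ~5 pts.
SEM = 5.0
# Standard error of the difference between two independent scores:
# sqrt(SEM^2 + SEM^2) ~ 7, close to the ~8 points cited above.
SED = math.sqrt(2 * SEM**2)

def plausibly_different(score_a, score_b, z=1.96):
    """True only if the gap exceeds what measurement noise alone would
    explain at roughly 95% confidence."""
    return abs(score_a - score_b) > z * SED

print(plausibly_different(250, 245))  # 5-pt gap: within noise -> False
print(plausibly_different(260, 240))  # 20-pt gap: beyond noise -> True
```

In other words, under these assumptions a program ranking a 250 over a 245 is ranking on noise, which is exactly the "unacceptably high statistical variance" complaint made later in the thread.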

 
  • Like
Reactions: 3 users
One thing I’ve noticed since Step 1 went P/F is a marked increase in emails I get about research opportunities, up from maybe 1 every few months from people genuinely interested in the subject matter to 1-2 per week essentially requesting a publication or an abstract in time for ERAS. I think it will actually worsen the disparity created by Step 1.
 
  • Like
Reactions: 1 user
If S2 were to go P/F, I expect each specialty would end up hosting their own exam. One could argue that would be a "better" solution than using the steps. Almost all specialties have an in training exam - that's what would probably be used.
This would be a disaster. It’s one thing for IM or FM since med school actually prepares you for that. But rads, anesthesia, surgical subs, rad onc, etc don’t even get touched in med school. The rads inservice gives you multiphase MRI and expects you to diagnose conditions you’ve never heard of as a med student. There’s no time to learn that stuff either.

I’m sure many other specialties would be similar in difficulty for med students.
 
  • Like
Reactions: 2 users
One thing I’ve noticed since Step 1 went P/F is a marked increase in emails I get about research opportunities, up from maybe 1 every few months from people genuinely interested in the subject matter to 1-2 per week essentially requesting a publication or an abstract in time for ERAS. I think it will actually worsen the disparity created by Step 1.

The Step 1 P/F announcement happened within weeks of the start of the COVID-19 pandemic. People at my school who matriculated in 2020 have had a very hard time finding research the first two years when everything was virtual so they are spending a lot of time in the 3rd year getting publications for ERAS. So this might be a COVID thing, not a Step 1 thing.
 
  • Like
Reactions: 1 user
The Step 1 P/F announcement happened within weeks of the start of the COVID-19 pandemic. People at my school who matriculated in 2020 have had a very hard time finding research the first two years when everything was virtual so they are spending a lot of time in the 3rd year getting publications for ERAS. So this might be a COVID thing, not a Step 1 thing.
That’s a valid point, but most of the emails I’m getting are from 1st and 2nd years and not many from 3rd and 4th
 
It's completely impractical for a program to sort through 1000's of applications. Medical schools get similar numbers of apps - but have whole teams of people whose job it is to review applications and get paid to do so. Programs often have just a few people. It's not just that PD's are too lazy to do this -- it's that it's essentially impossible with the resources available.

If S2 were to go P/F, I expect each specialty would end up hosting their own exam. One could argue that would be a "better" solution than using the steps. Almost all specialties have an in training exam - that's what would probably be used.
Specialty-specific exams would be really bad. You would need to decide very early on in med school what you want to do. If you decide too late, then you would be behind your peers who were gunning from day 1 of med school and have already spent 100 hrs studying for the exam. These tests are likely to be stratified by percentile. Decide late in 3rd year that you're going to apply to ortho and have a strong app? You'll need to delay to the next year for the sole purpose of studying for the test.

If the goal of P/F USMLE was to have students show up for lectures and learn medicine/do more research, then that will be out the window with specialty-specific exams. Students will learn the bare minimum for school exams/USMLE and will spend their time on the specialty tests instead.
If you don’t match in the specialty you spent time studying for, then you're screwed b/c your fund of knowledge of regular medicine/other specialties will be really, really poor.
 
  • Like
  • Hmm
Reactions: 3 users
Specialty-specific exams would be really bad. You would need to decide very early on in med school what you want to do. If you decide too late, then you would be behind your peers who were gunning from day 1 of med school and have already spent 100 hrs studying for the exam. These tests are likely to be stratified by percentile. Decide late in 3rd year that you're going to apply to ortho and have a strong app? You'll need to delay to the next year for the sole purpose of studying for the test.

If the goal of P/F USMLE was to have students show up for lectures and learn medicine/do more research, then that will be out the window with specialty-specific exams. Students will learn the bare minimum for school exams/USMLE and will spend their time on the specialty tests instead.
If you don’t match in the specialty you spent time studying for, then you're screwed b/c your fund of knowledge of regular medicine/other specialties will be really, really poor.
Yeah really don’t want to get admitted by someone who studied ortho for 4 years who’s now begrudgingly in IM.
 
  • Like
Reactions: 2 users
Specialty-specific exams would be really bad. You would need to decide very early on in med school what you want to do. If you decide too late, then you would be behind your peers who were gunning from day 1 of med school and have already spent 100 hrs studying for the exam. These tests are likely to be stratified by percentile. Decide late in 3rd year that you're going to apply to ortho and have a strong app? You'll need to delay to the next year for the sole purpose of studying for the test.

If the goal of P/F USMLE was to have students show up for lectures and learn medicine/do more research, then that will be out the window with specialty-specific exams. Students will learn the bare minimum for school exams/USMLE and will spend their time on the specialty tests instead.
If you don’t match in the specialty you spent time studying for, then you're screwed b/c your fund of knowledge of regular medicine/other specialties will be really, really poor.
I don't disagree with you, and I also admit that for some specialties (such as rads) this type of methodology would be more complicated.

But let's say S2 becomes P/F, preclinicals are all P/F, clerkships are P/F (which many schools are), and 4th year rotations are graded with 95% of everyone getting "Honors". With that, and 1000 applications, how is a program supposed to decide which 120 people to invite for an interview?
 
I don't disagree with you, and I also admit that for some specialties (such as rads) this type of methodology would be more complicated.

But let's say S2 becomes P/F, preclinicals are all P/F, clerkships are P/F (which many schools are), and 4th year rotations are graded with 95% of everyone getting "Honors". With that, and 1000 applications, how is a program supposed to decide which 120 people to invite for an interview?
Minimum competency should be good enough. Better start reading those applications. There are plenty of other things you can use to stratify applicants. We’re acting like board scores are the only things on the application.
 
  • Like
Reactions: 2 users
I don't disagree with you, and I also admit that for some specialties (such as rads) this type of methodology would be more complicated.

But let's say S2 becomes P/F, preclinicals are all P/F, clerkships are P/F (which many schools are), and 4th year rotations are graded with 95% of everyone getting "Honors". With that, and 1000 applications, how is a program supposed to decide which 120 people to invite for an interview?

AAMC should create a standardized SLOE/MSPE with certain criteria and limit how many students can fit into each category. Each student gets a composite score based on how well they get along with others, how receptive they are to feedback, etc. These scores are based on the compilation of data from all of their rotations, and the scores for the entire class at any particular school are on a bell curve.

The way it is right now, clinical evaluations are extremely subjective and vary too much from school to school. This is why attendings often just give all 5/5s or 3/5s no matter what. If AAMC creates a standardized comprehensive evaluation which is completed for every student for every rotation, then that can be very powerful.

Every year, AAMC can run a statistical analysis for every medical school in order to monitor for score inflation, flag schools that inflate, and report this on students' residency applications. In about 8-10 years, AAMC can do a retrospective study finding the correlation of performance scores with residency success parameters.

Will this be a hassle? Sure. But way better than relying on a single test.
 
  • Okay...
  • Like
  • Hmm
Reactions: 2 users
Minimum competency should be good enough. Better start reading those applications. There are plenty of other things you can use to stratify applicants. We’re acting like board scores are the only things on the application.
Residencies are typically looking for three basic things in people

1) Don't fail boards, don't struggle with in-service exams, meet all explicit milestones, and be quantifiably productive day-to-day.
2) Be decent to work with and be able to navigate the political environment of the residency program and beyond.
3) Have high potential for being a future leader in the field and a prestigious alum of the program.

Prestigious programs want all three, and community programs are concerned with 1 > 2 >>>> 3. Step exams are the only universal, quantified test of the ability to set an explicit goal and hit that goal. Statistical variance on those exams is garbage, unfortunately, but it still roughly stratifies risk for #1 and acts as an indicator of your overall ability to meet primary, quantified goals. Clinical grades and LORs stratify risk for #2. ECs and extraordinary achievements (e.g., amazing research, founding a successful company/non-profit, winning highly prestigious awards like Rhodes/Fulbright/Marshall Scholarships) stratify for #3.

None of these metrics are perfect, but you can roughly group people into thirds. People in the top 1/3rd of each category with 250+, mostly honors, and some notable extracurricular achievement are extremely rare and should find themselves at top residencies. This is a far more reliable method both for programs to find top candidates and for students to define themselves as top candidates. It allows for the necessary wiggle room on Step (due to unacceptably high statistical variance between scores) and the necessary wiggle room on clinical grades (due to unacceptably high randomness of attendings).

People looking to make all steps and all grades P/F are willfully ignorant. They just want the power to choose who they want for their residency program without objective factors interfering. Then they can choose whoever they want without justification, be it alums of your alma mater, students from a school the dean is placating, attractive people, students who satisfy DEI goals, whoever your buddy wrote a LOR for, etc...
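The "roughly group people into thirds" idea from the post above is easy to make concrete. This is only an illustrative sketch (the score list is invented; real programs would use their own data): bucket each applicant into the bottom, middle, or top tertile of the applicant pool on a given axis.

```python
# Illustrative tertile bucketing, per the "group into thirds" idea above.
# The score list below is made up for demonstration.
from statistics import quantiles

def tertile(value, population):
    """0 = bottom third, 1 = middle third, 2 = top third of population."""
    lo, hi = quantiles(population, n=3)  # ~33rd and ~67th percentile cuts
    return 0 if value < lo else (1 if value < hi else 2)

step2_pool = [215, 228, 236, 241, 249, 254, 248, 260, 232]
print(tertile(254, step2_pool))  # -> 2 (top third)
print(tertile(241, step2_pool))  # -> 1 (middle third)
print(tertile(215, step2_pool))  # -> 0 (bottom third)
```

Coarse thirds, rather than a strict rank on raw scores, is exactly the "wiggle room" the post argues for: a 254 and a 260 land in the same bucket, so the ranking no longer hinges on differences smaller than the exam's measurement error.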
 
  • Like
Reactions: 4 users
Minimum competency should be good enough. Better start reading those applications. There are plenty of other things you can use to stratify applicants. We’re acting like board scores are the only things on the application.
I can't think of a worse situation than being ranked solely on subjective clinical grades that vary widely among schools (and I have seen a PD admit on here that they don't adjust for the % of students who get honors because it takes too much time), volunteering, and low-quality publications.

I suspect the reality is most PDs also wouldn't want that situation, and instead of stratifying based on board scores they would use school prestige as a replacement (if they don't already).
 
  • Like
Reactions: 1 user
Specialty specific exams would be interesting because people would need to know what kind of doctor they want to be before they get to medical school, which I'm all for. I think medical school being an "open field of opportunity" leads to people optimizing for money and prestige because "being a doctor" is not very specific. It's like saying "I want to have a job."

That said, every solution we present will be subject to Goodhart's law. The answer really is holistic evaluation. Hopefully AI makes this easier for us because it's just unreasonable to expect the small residency selection teams to do this unless that is their only job, which would make them less able to have a good sense of what makes an applicant a good doctor.
 
AAMC should create a standardized SLOE/MSPE with certain criteria and limit how many students can fit into each category. Each student gets a composite score based on how well they get along with others, how receptive they are to feedback, etc. These scores are based on the compilation of data from all of their rotations, and the scores for the entire class at any particular school are on a bell curve.

The way it is right now, clinical evaluations are extremely subjective and vary too much from school to school. This is why attendings often just give all 5/5s or 3/5s no matter what. If AAMC creates a standardized comprehensive evaluation which is completed for every student for every rotation, then that can be very powerful.

Every year, AAMC can run a statistical analysis for every medical school in order to monitor for score inflation, flag schools that inflate, and report this on students' residency applications. In about 8-10 years, AAMC can do a retrospective study finding the correlation of performance scores with residency success parameters.

Will this be a hassle? Sure. But way better than relying on a single test.
The AAMC is a service organization, not the Politburo. It has no authority to dictate grading criteria to individual schools.

Alas, there probably isn't some magical standardization fix that has eluded everyone up until now. Evaluations have an inescapable trade-off. You can make them short and sweet but they give less information. Or you can make them lengthy and comprehensive, and they will give you more information from the small minority of faculty who are willing to complete them thoughtfully, and less information from the large majority of faculty who look at them as a nuisance (or even an obstacle to RVUs). If you can design an evaluation form that resolves this dilemma then you will become quite famous in medical education. Most schools just try to find some middle ground that works well enough for them.
 
  • Love
  • Like
Reactions: 1 users
Orthopods will be happy they don’t have to learn ekgs…

Well, I guess it depends on who you ask. Personally, I think most orthopods prefer having their application evaluated based on merit rather than nepotism, personal connections, and medical school prestige. It's the harsh reality of making Step 1 (and possibly Step 2) P/F.
 
  • Like
Reactions: 1 user
Minimum competency should be good enough. Better start reading those applications.
So, just for fun: we get 2000 applications. How long would it take to do a holistic review of an application? Anything less than 15 minutes involves skimming through material, which inevitably misses things. So let's say 15 minutes - or 0.25 hours.

0.25 x 2000 = 500 hours of time. Let's say I did this full time, 10 hours a day, 5 days a week. That's 50 hours per week. So it would take me 10 weeks of doing nothing else. And, my brain would melt.

But of course it wouldn't all be one person. There should be a team doing this. The PD in our program has 0.5 FTE for the program (the rest is clinical work, which continues). Each APD has 0.3. After that, funding becomes very thin. Let's say that most average-sized programs have one PD and 2 APDs -- that's only 1.1 total FTE, so still roughly the same 9-10 weeks.

Applications open at the beginning of September. So we would be into December before we would have even reviewed every application -- and that's before we make decisions, invite people, etc. And the holidays will slow things down (both Thanksgiving and the winter holidays). And we can't run a program where all we do for 3 months, with all of our time, is read applications.

So, sure, let's start "reading those applications". Trust me, we want to do a holistic review, we really do. It's just practically impossible. Something needs to trim the pile to something manageable. At present, that's some combination of signals, geo preference, USMLE scores, applicant type (MD/DO/IMG), and school attended.

If your answer is "hire more people", we have no budget to do so, and no leverage to get more.
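The back-of-envelope above fits in a few lines, using only the figures stated in this post (2000 apps, 15 min each, one PD at 0.5 FTE plus two APDs at 0.3 each, 50-hour review weeks):

```python
# Back-of-envelope holistic-review math, using the post's own figures.
APPS = 2000
MINUTES_PER_APP = 15
review_hours = APPS * MINUTES_PER_APP / 60   # 500 hours of reading

HOURS_PER_WEEK = 50                          # 10 h/day, 5 days/week
total_fte = 0.5 + 2 * 0.3                    # one PD + two APDs = 1.1 FTE

weeks_needed = review_hours / (HOURS_PER_WEEK * total_fte)
print(round(weeks_needed, 1))                # ~9 weeks of doing nothing else
```

Even with every program-leadership hour devoted to nothing but reading, the pile takes roughly a quarter of the academic year, which is the whole argument for a screening metric.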

There are plenty of other things you can use to stratify applicants. We’re acting like board scores are the only things on the application.
This is the problem - there isn't. There's no standardization of grades across schools - so you have some that give 90% H / 10% HP / 1-2 students Pass, and others that are 15% H / 85% Pass. How am I supposed to stratify people with that fairly? And more and more schools are going P/F for clerkships. So what do you suggest I use to stratify people - again, without having to read the entire application (which isn't feasible)?

hopefully AI makes this easier for us
Is this really the answer? I guess it could work, if we could rate how important various parts of the application are to our program and the AI could really extract out the important bits. But I then worry that students will want the AI to "run their application through" and there will be endless gaming of the application to generate higher and higher AI scores.
 
  • Like
Reactions: 8 users
Without any form of examination, the alternative is judging applicants based on their school name, who they know, or heavily subjective evaluations.

Examinations have their issues, but that alternative is far, far worse.



And yes, there are people with 525+ MCATs who go to "low-ranked schools" and vice versa. And there are applicants from "low-ranked schools" who do better clinically and on Step than those from more prestigious schools. So while defaulting to school name is safe for the PDs in the no-exam scenario, it isn't to the benefit of the students.
 
  • Like
  • Love
Reactions: 3 users
Minimum competency should be good enough. Better start reading those applications. There are plenty of other things you can use to stratify applicants. We're acting like board scores are the only thing on the application.
Not really. Being on this side of the curtain now I can honestly say 10% of applicants are truly stellar in every category, 10% are truly terrible, and the other 80% all look exactly the same.

The standardized test score is quite literally the only common denominator between any of them….
 
  • Like
Reactions: 9 users

Highly recommend this 6 part series on the history of the match, changes to the algorithm, and speculation on how we might make it better. Gives some excellent historical context to why we have this system. It’s long but well-worth the watch imo.
 
  • Like
Reactions: 1 user
Not really. Being on this side of the curtain now I can honestly say 10% of applicants are truly stellar in every category, 10% are truly terrible, and the other 80% all look exactly the same.

The standardized test score is quite literally the only common denominator between any of them….
QFT.

When I started reviewing apps and interviewing med students last year I realized they’re all basically the same. No clue what a “holistic review” even means.

Everybody has the same effusive letters of recommendation, the same enthusiastic MSPE comments, and the same amount/quality of scholarly activity as the average for their field. The only thing that makes a letter stand out is if it's from some big wig in the field, but that's just a function of where you went to school or your personal connections. It doesn't tell me if you're any good at doing a job.

Oh wow you did some medical mission trips and started a charity? Congrats on having rich parents. But I don’t care.

Most medical students have canned statements prepared for every interview question. No one answers questions like a real human. I gave top interview scores to an otherwise average non-trad once because it seemed like he’d had conversations with adults before.

Most of ours were ranked on Step scores and geography. Every now and then a bad interview knocks someone all the way down. Diversity is a good tiebreaker between otherwise equal candidates.

One personal statement about a rotting possum corpse dropped someone to the bottom. But other than that they don’t matter and it’s pretty clear most med students have been slightly editing the same one since they applied to undergrad.

So yeah. It’s pretty much Step scores (the Step 2 score now). These tests aren’t perfect. But while there are some geniuses out there who can fall backwards into a 95th+ percentile score, most can’t. High scores come from a willingness to work harder than your peers. That counts for something. It’s not a bad idea to select for the more studious aspiring physicians, and it seems silly that we act like that’s so problematic.
 
  • Like
  • Love
Reactions: 12 users
I suspect the reality is most PDs also wouldn't want that situation, and instead of stratifying based on board scores would use school prestige as a replacement (if they don't already)
Spoiler alert, they do already (and for good reason).

The reality is that nothing real matters. Success begets success. From the perspective of an academic PD the best thing you can be is a future bigwig academic, given you aren't an outright liability. Might as well start with the person who already has a star-studded academic resume. Harvard Med School + Harvard Residency looks better when introducing the new department chair compared to mid-tier med school + Harvard residency. If T10 schools keep it all in house, they're that much more likely to be training all the future department heads.

The only other consideration is really personality and outside accomplishments. Personality might get you higher clinical grades. Outside accomplishments might signal that you're more likely to be a leader in the field.
Most medical students have canned statements prepared for every interview question. No one answers questions like a real human. I gave top interview scores to an otherwise average non-trad once because it seemed like he’d had conversations with adults before.
The biggest advantage of being an MD/PhD student is that you act and talk like an adult. You've mentored people at that point and you know exactly what sort of behaviors look good vs. rub the wrong way. You see just how unpalatable it is to train a know-it-all. You're the same age as the residents and the attendings don't see you as a child. Patients also tend to give you a bit more respect. 26 year old me would have whiffed on some of the patients I've built strong alliances with, and I'm sure I would have come across as a brat to some of the residents.
 
  • Like
  • Haha
Reactions: 2 users