Other than platform fights with giant Q-tips. We would all choose that.
Hmmmm, how would I ever evaluate applicants without a single test score - one that shows no correlation with residency performance, and where the scaling error alone means there's no difference between a 230 and a 245??
Grades, shelves, letters, research, CK, away rotation performance, etc. It will be more time-consuming on the front end for programs that screen, but should mean little difference for those that don't. I'm trying to think of the last time someone's Step 1 score came up during our rank meeting, and I can't remember one. We end up talking about their research and story, and especially letters and personal discussions with people we know.
So yes, more struggle on the front end, especially for big programs that get many applications, but it should be pretty minor.
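For what it's worth, the scaling-error point is easy to sanity-check yourself. A back-of-the-envelope sketch in Python, assuming a standard error of measurement of about 6 points for Step 1 (the figure quoted later in this thread; treat the exact number as an assumption):

```python
# Rough check: can a 230 and a 245 be distinguished, given measurement error?
# Assumes SEM ~= 6 points for Step 1 (the figure cited later in this thread).
import math

SEM = 6.0
score_a, score_b = 230, 245

# Standard error of the *difference* between two independent scores.
se_diff = math.sqrt(SEM**2 + SEM**2)   # ~8.49 points

z = abs(score_a - score_b) / se_diff   # ~1.77
print(f"z = {z:.2f}")  # below 1.96, so not significant at the usual 95% level
```

In other words, under that assumed SEM, a 15-point gap is within the noise of the test itself.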
At rank meetings academics may be less important, but getting the interview is necessary too. Being in ENT at a good program leads me to believe that most, if not all, of your interviewees are excellent academic candidates or they wouldn't be there. Your post probably isn't generalizable to other fields.
How refreshing it is to hear from someone with actual knowledge who is not blindly hyperventilating! Many thanks, colleague!

I'll begin this response by noting that I agree: using the USMLE exams as a standardized comparison tool was never how they were intended to be applied. Officially it is a licensing exam, setting a bare minimum standard to become a licensed physician and nothing more. I wish there were something better that was still universal to all applicants. Maybe we need to start a new exam - the "Graduate Medical Education Aptitude Test" - like the SAT/MCAT but for residency applications... a combination of medical knowledge, critical thinking skills, and emotional intelligence testing, designed specifically to compare applicants. (That is, of course, /s... but my joke proves a point about the challenges of evaluating applicants.)
Step 2 CK will definitely replace it as the initial screening tool. When you have >1000 applications there is simply no realistic way to carefully examine every application in depth. Shelf scores, or even the MCAT, could be weighted more heavily, I suppose; in the past we've never really paid attention to those. But there needs to be some objective metric - something all applicants have in common, external to their individual schools - that helps standardize students against one another. USMLE scores are definitely correlated with the ability to pass board exams, at least above a certain threshold. A 235 vs. a 260 actually wasn't given much extra weight because of these diminishing returns, but an applicant at 215 or below, no matter how much we loved them otherwise, is definitely considered "high risk" for academic difficulty in residency.
Letters, personal statements, and even third-year clerkship grades (thanks to wild distribution variability between schools) are honestly not very helpful in this initial screening. You can thank grade inflation, P/F schools, and the explosion of new medical schools of dubious quality for the erosion of clerkship grades' usefulness. Once we've narrowed our applicant pool down to the 130-150 we actually want to interview, then we can dive more thoroughly into those more subjective things. But after having read all the applications, PS's, and LoRs this last cycle for the applicants we interviewed, these are frankly really hard to evaluate. It's kind of a game. They all sound the same. Maybe 10% of letters or PS's really stand out as "something special," and maybe another 10% stand out as unusually weak (letter writers have a special code language they use to gently warn us about applicants they don't love but are politely writing a letter for). But 80% of them are basically about the same.

The interview itself is not particularly helpful for most applicants either, again maybe weeding out the 1/10th who come across poorly in person and helping another 1/10th stand out. Unfortunately there is no really great way to confidently evaluate that many applicants based on what's in their applications alone. It's kind of just a combination of all the above plus subjective "gut feelings" from everyone who interacted with them or is on the rank list committee. The only time we are ever solidly confident about applicants is when they are either students from our own medical school or did an away rotation with us.
Step 1 score affects whether you get the interview. If you don't have that, suddenly you can't even talk about the story or personality before you start eliminating people.
There is a certain subconscious wow factor that primes people to talk positively about someone's intelligence once they see a high score. You may not mention it, but the knowledge that someone scored high on Step 1 will slightly change the tone of the conversation.
Yeah.

Is it true that someone going to a low-tier MD is basically screwed now out of a top residency spot, since it will be much harder for them to differentiate themselves from the people at T20s who want those spots (and will now likely get them)?
Honest and to the point: "letters from people I know."
Every program is different, but at least at ours I think many people would be surprised at how little impact step scores have. We've been chatting about this recently since the score change came out, and quite a few people have admitted they don't even look at step scores when doing the first pass. As more people have become aware of how bad the test is at discerning between applicants and how meaningless it is as a predictor of success, we all care about it less and less.
Obviously some of this is our program, since we generally get stellar applicants and a relatively small pool, so we have that luxury. An IM or peds program that gets thousands of apps probably has to do more broad screening simply to manage the workload.
I think you can also gauge someone's overall ability from their overall application. Someone who has the chops to score a 90+ percentile step score likely has the chops to do well academically while also juggling research and interesting ECs. Someone struggling to hit the 50th percentile probably doesn't have the bandwidth to do all the other things that make an app stand out. The knowledge base behind a strong step score likely translates into clinical grades too. The junior AOA folks in my class, all of whom were in the 260+ range, tended to be strong on the wards as well. While there may be exceptions out there, I think a holistic view of an application can probably tell me plenty.
Meanwhile, you wouldn't even have been talking at rank meetings about those students who scored 198 on Step 1 to begin with. But hey, they got their names on two papers and had grade inflation at their institution.
So the problem comes from the way ERAS is designed. If you want to apply for a regular job, you can't just apply to every company ever with a click of a button. There are different job posting websites, you have to write cover letters for each company; it's just logistically much harder to do. Now take ERAS: people put in their junk, check boxes, overdraft their bank account, and apply to 60 programs. So you've given a group of highly competitive, highly risk-averse people the ability to send limitless amounts of applications. They're going to send more applications than average because they assume that will give them an above-average chance to match. Now think about that...
The trend you get is that with each generation of Pokémon, the average applicant sends about 7 more applications. So where I think we're at now is that PDs have to get a PhD in astrophysics to figure out, of the 3,000 people who applied to their program, how many would rank it highly enough to be worth their time. When in reality they should just be getting a sane number of applications from the start, from people who actually want to go to their program.
Matchmaking systems don't really work when everyone is ranking everyone. If you're asked to list your preferences over a sufficiently large number of things, there comes a point where your preferences are no longer rational. The entire system is designed to match you with the programs/applicants you like best; it defeats the purpose if people are applying to and ranking programs they don't really care for.
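For anyone wondering why diluted rank lists matter: the Match runs on a stable-matching algorithm in the deferred-acceptance family. Here is a minimal toy sketch in Python - hypothetical applicant and program names, one seat per program, and none of the couples/quota machinery of the real NRMP (Roth-Peranson) algorithm - just to show that the outcome is driven entirely by the rank lists each side submits:

```python
# Toy applicant-proposing deferred acceptance (Gale-Shapley style).
# Hypothetical data; the real NRMP algorithm also handles couples
# and program quotas, which this sketch ignores.

applicant_prefs = {
    "alice": ["mercy", "city", "general"],
    "bob":   ["city", "mercy", "general"],
    "cara":  ["city", "general", "mercy"],
}
program_prefs = {
    "mercy":   ["bob", "alice", "cara"],
    "city":    ["alice", "bob", "cara"],
    "general": ["cara", "bob", "alice"],
}

def deferred_acceptance(applicant_prefs, program_prefs):
    # rank[p][a] = how program p ranks applicant a (lower is better)
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    free = list(applicant_prefs)        # applicants with no tentative match
    next_choice = {a: 0 for a in free}  # next program each will propose to
    tentative = {}                      # program -> tentatively held applicant

    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                    # rank list exhausted: stays unmatched
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held = tentative.get(p)
        if held is None:
            tentative[p] = a            # program holds its first proposal
        elif rank[p][a] < rank[p][held]:
            tentative[p] = a            # program trades up, bumps old match
            free.append(held)
        else:
            free.append(a)              # rejected; proposes further down list

    return {a: p for p, a in tentative.items()}

print(deferred_acceptance(applicant_prefs, program_prefs))
# e.g. {'bob': 'city', 'cara': 'general', 'alice': 'mercy'} (a stable outcome)
```

The algorithm takes every submitted ranking at face value, so once lists get long enough that applicants can no longer meaningfully order the programs on them, the "stability" it guarantees is stability with respect to noisy preferences.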
There have to be strict application limits; fees will not stop people from applying to an ever-increasing number of programs. Statistically, most medical students come from wealthy families, so money is no object. Furthermore, if you're already $200K in the hole, is $20K of extra debt really going to stop you? Asking med students to voluntarily send fewer applications is essentially telling them not to worry. It's not going to work, and you look silly for trying.
This will force applicants to be more thoughtful about where they apply, and free programs to actually look beyond a three-digit score.
I think it would be reasonable to put hard limits on applications, and potentially on interviews attended. However, I don't know how one would even go about setting these limits for different fields, as they all seem like completely different worlds. And what about couples matchers? I also think considerably more information and transparency would be needed from the programs so that applicants could actually figure out what a "target program" is for them. I used FREIDA and Residency Explorer when applying, which are great tools in theory, but both had outdated data (like Step 1 cutoffs listed as previous passing scores that are now failing scores). Residency Explorer is difficult to use practically: it compares you to applicants who matched to each program from 2014-2018, which is great because you can definitely see if you would be an extreme outlier one way or another at any program. But it doesn't show trends, and if you yourself are an average applicant in terms of scores and quantity of experiences, you are just going to see a lot of "middle 50% of matched applicants," which doesn't help you narrow down which programs to apply to.
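To make the "middle 50%" complaint concrete, here is roughly the kind of comparison a tool like Residency Explorer presents, sketched in Python with invented field names and numbers (I don't know its actual internals):

```python
# Hypothetical IQR bands for one program's matched applicants.
# Field names and numbers are invented for illustration.
program_bands = {
    "step2_ck":       (240, 255),  # (25th percentile, 75th percentile)
    "research_items": (2, 8),
    "volunteer_exps": (3, 7),
}

applicant = {"step2_ck": 247, "research_items": 4, "volunteer_exps": 5}

for field, (p25, p75) in program_bands.items():
    value = applicant[field]
    if value < p25:
        verdict = "below middle 50%"
    elif value > p75:
        verdict = "above middle 50%"
    else:
        verdict = "middle 50% of matched applicants"  # no useful signal
    print(f"{field}: {value} -> {verdict}")
```

An applicant near the median lands in the middle band on every metric at every program, so the tool flags true outliers well but gives the average applicant nothing to narrow their list with.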
Maybe one of the best ways of imposing limits would be a brief "why us" prompt at each program. The only additional cost is time. So many people are writing letters of interest anyway; why not standardize it and make it the first screening tool? It would allow applicants to write about specific professional interests and location preferences without having to write completely separate personal statements. I'm not sure anyone would actually like this idea, but I do think it would impose limits in a fair way.
If everyone agrees that a passing score is all that is needed to become a doc in any field, and the pass rate is >90%, then why (other than $$$) do the organizations insist on putting students through it?
With such a high pass rate, is it really a weed-out test?
I have sat on resident selection committees, and my number one metric is: have we had this person on our service, and can I teach them? That's my job. We know if they are hard workers and how they interact with faculty, residents, and staff. Some really bright residents have been unteachable - never rotated with us, but had good scores, letters, and good schools. Scores will get you noticed, but auditioning allows us to see the whole package.
Could you expound on this a little bit? I was under the impression that the residents and faculty don't want to send invites to robots who are constantly in interview mode for the entire month, either.

I think around 50% of our residents are either home students or great rotators. Our rank lists typically have the well-known students listed very high. There's just no substitute for the personal knowledge of how someone works.
Granted, we all know they're in audition mode for the month, so they are probably showing something better than their average effort, but at least we know what's possible. That said, we probably offer interviews to only 10-20% of rotators, because many people manage to slip out of audition mode pretty quickly.
Side note: I've looked back at rotators we didn't like, and almost all of them have gone on to be successful residents in other programs.
Away rotations should be against LCME rules. Why should wealthier kids be able to pay money to rotate at a top program and subsequently be ranked higher? It almost sounds like a politician buying an election.
First off, I always enjoy reading your posts, and I agree with the above. Residents can slip from audition mode for sure. My experience was more with non-rotators who had good scores and letters and came from good schools. They are certainly greater unknowns. Successfully completing the residency is a rather low bar, IMO. We have residents who finished, but with great drama and energy drain on our department. With certain residents, a frequent occurrence would be an attending or nurse, red-faced, stating "Do you know what YOUR resident said, didn't say, did, didn't do, etc..." My personal experience with these types was more common with residents who didn't rotate with us as students. Just my opinion. It is why I believe having rotated on our service and done well is my most important metric, but certainly not the only one.
Most specialties do not require them. It's not really like buying an election; it's more like being able to intern at a place and getting a job there because people know and like you.
At some newer schools, all rotations are aways. There aren't many months as a 4th year to audition before interviews begin; only 3 or 4 at my school. Some services won't allow you to do an elective as a 3rd year. By your screen name I would gather you know many anesthesia programs are like that. Could it get expensive? Sure. Life is full of choices, and med students have to choose carefully how to advance their careers. How do they envision themselves in 10 years? Academics? Clinical med? Where do you want to live? People from Hopkins get paid the same by insurers as people from the Caribbean. If your next 30 years are determined by WHERE you do your residency, then you have to do whatever you need to do. I agree with your election analogy. Matching, IMO, is like getting elected: you have to have all the boxes checked. Scores, letters, auditions, research in many cases, networking, etc.
They already exist. I took histo, anatomy, micro, pharm, and a handful of others.
I think the question assumes there needs to be a singular metric to judge applicants in the first place. Step 1 wasn't supposed to be a metric at all; it's a board exam. No employer looks at, or wants to look at, a single metric to decide whether they're going to hire someone. At least I don't. The long and short of it is that the AAMC is screwing over residency programs and applicants.
I imagine that from the perspective of PDs and adcoms, there are a handful of applicants where they're like "Yeah, we want this person," a few they definitely don't want, but most fall into the category of "fine, but I don't have a real preference." We suffer from the same problem where I work; however, we don't get thousands of applications, mostly from people who would go 20 other places before coming here. I think PDs have this problem. So what's a good way to filter everyone? Numbers.
Do you think it would be possible to have "shelf"-like exams for the major pre-clinical subject matter?
1) It would help stratify students, and residency program directors could focus on the subject matter most important for their particular specialty
2) The NBME would be happy because they'd make a lot more money
3) I know each school's curriculum is different, but just make it required to take the pre-clinical shelves sometime before starting rotations, and leave it up to the students to take them when they feel most prepared
The problem with the bolded suggestion (strict application limits) is that it disproportionately harms the people at the borders (the borderline high-to-mid-tier applicant, the borderline non-matching applicant). People who are on the edge for competitive programs would be forced to choose between trying for their dream programs and increasing their likelihood of matching with more safeties.
I actually think hard limits on applications could work, but you can't simply restrict applicants and leave the risk completely up to them. They are already more vulnerable than the programs to which they're applying. If you're going to place hard limits on applications, then you should also provide standardized (i.e., easily identifiable) and very transparent minimum interview requirements. If your program isn't going to interview IMGs or DOs, you should say so. If you're not going to consider anyone with a score below 215 (or 220, or 230, etc.), a board failure, or less than X number of research experiences for an interview, then it should be clear on the website or in the application (ERAS could even show a pop-up asking if you want to continue with an application whose minimum requirements you don't meet). If those filters were made transparent, I guarantee applications would drop to manageable levels, especially in combination with a limit on application numbers.
Whenever we talk about these issues, I feel like we always blame the applicants and expect them to change, when we've created a high-risk system where not matching means the loss of a year, loss of income, likely deferment of loans, and possibly the loss of an entire medical career. Programs can take some responsibility too.
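To be concrete about what "standardized and transparent minimum requirements" could look like, here is a sketch in Python. The field names, thresholds, and the pop-up idea are all hypothetical; nothing like this exists in ERAS today:

```python
# Hypothetical published screening filters for a program.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProgramFilters:
    min_step2_ck: int = 220
    max_board_failures: int = 0
    min_research_items: int = 0
    considers_imgs: bool = True
    considers_dos: bool = True

@dataclass
class Applicant:
    step2_ck: int
    board_failures: int
    research_items: int
    is_img: bool = False
    is_do: bool = False

def failed_filters(app: Applicant, f: ProgramFilters) -> list[str]:
    """Return the published filters the applicant fails (empty = reviewable).
    This is the pop-up warning the post above proposes ERAS could show
    before an applicant pays to submit."""
    problems = []
    if app.step2_ck < f.min_step2_ck:
        problems.append(f"Step 2 CK below {f.min_step2_ck}")
    if app.board_failures > f.max_board_failures:
        problems.append("board failure on record")
    if app.research_items < f.min_research_items:
        problems.append(f"fewer than {f.min_research_items} research items")
    if app.is_img and not f.considers_imgs:
        problems.append("program does not consider IMGs")
    if app.is_do and not f.considers_dos:
        problems.append("program does not consider DOs")
    return problems

# Example: warn before submitting an application that can't clear the bar.
warnings = failed_filters(
    Applicant(step2_ck=212, board_failures=0, research_items=1),
    ProgramFilters(min_step2_ck=220),
)
print(warnings)  # ['Step 2 CK below 220']
```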
These already exist. There's one for Physiology, Pharm, Anatomy, etc. I think they're worse than just having Step 1 to be honest.
The standard errors on the test are much better. 2 points vs 6 for step 1.
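Taking the 2-vs-6 figures at face value, the practical difference for screening is the smallest score gap you can treat as real rather than noise (roughly 1.96 x sqrt(2) x SEM at 95% confidence):

```python
# Smallest gap between two scores distinguishable at ~95% confidence,
# for a given standard error of measurement (SEM). Uses the thread's
# "2 points vs 6 for Step 1" figures at face value.
import math

for label, sem in [("SEM 6 (Step 1)", 6.0), ("SEM 2", 2.0)]:
    min_gap = 1.96 * math.sqrt(2) * sem
    print(f"{label}: score gaps under ~{min_gap:.0f} points are noise")
# SEM 6 (Step 1): score gaps under ~17 points are noise
# SEM 2: score gaps under ~6 points are noise
```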
They will never release the minimum board score filters, because they don't want transparency in the process. Transparency would show what everyone suspects - i.e., that there are different qualification criteria depending on your educational background and gender/race.
That may be true, but I wonder what the correlation is between those and specialty board pass rates, for example, let alone between those and being a good clinician/resident. In that scenario, I think the Steps might actually be a better indicator.
How do you judge whether a person is teachable before actually interacting with them over a long period of time? Can you tell from an applicant's ERAS application + interview?