[...]

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

fa21212

[...]

Why do Step 1 scores vary so much at schools with similar-caliber students? For example, Baylor has an average of 246 while Johns Hopkins has an average of 238. One could argue that students going to Johns Hopkins are likely more competitive than Baylor students, yet they have a lower average Step 1 score by almost half a standard deviation. That seems very significant to me. In studies of matching into competitive specialties, Step 1 scores are usually the first or second most important factor cited by residency program directors. If you want to get into a competitive specialty with an average Step 1 score of >240, would it make sense to give preference to schools with higher average Step 1 scores? It seems like those high-scoring schools are doing something right rather than simply picking bright students for their student body. Thanks for the thoughts!


Where do they publish step 1 scores by school now? When I was applying they didn't do that.
 
There’s probably a trade-off between the testing potential of a student body (which is weakly correlated with that group’s MCAT distribution) and the extent to which a student from that same body receives a return on investment for scoring higher on Step 1. Everyone at Hopkins is going to match exceedingly well with a 230, 240, or 250. But, knowing this, they probably don’t feel the same pressure to push their scores.

Compare the match lists at Harvard/JHU to Baylor. Baylor does very well, obviously, but they are not in the same tier as the former. Just speculation.

Here’s an old but still relevant graph.
 
I guess it's not just Hopkins and Baylor that I'm focused on. There are others as well. For example, Case has an average of 241, one of the highest, even though its USNWR ranking is not as high. The ordering of schools by Step 1 score doesn't track MCAT scores, USNWR ranking, or any other parameter. I'm assuming there are features of the curriculum or other factors that allow some students to be much more competitive than others. I'm just trying to figure out what those factors are.

If I had to guess, since I haven’t really looked into this very hard, I suspect there are 2 to 3 critical factors here, in order of importance:

1. Timeline to Step 1 at that school, with schools where students finish their core clinical clerkships before Step 1 beating those where they don’t.
2. Curriculum grading scheme.
3. Amount of time students spend in mandatory activities.

Just guessing
 
Well, it can also be due to how schools select people. If you select for people who test well, e.g. by having high MCAT cutoffs, then you're probably more likely to get people who score higher on Step 1, compared to a school that has a lower MCAT cutoff but looks for other things like research, volunteerism, etc. Each school has a different mission and different strengths. I think it's important to look at match lists in conjunction with Step scores, because Step scores don't tell the whole story.
 
Why do you think it's silly?

They don’t really tell you any information that you might not already know; they’re self-reported, so there’s no standardization and there are ways to inflate the statistic; and they aren’t something that anyone should really use to evaluate the quality of a school anyway. It’s just a chest-beating competition.
 
Why can’t you use it to evaluate the quality of a school?



In my opinion Step 1 shouldn't be used because medical school is not about making sure you get a high Step 1 score. It is about giving you the learning opportunities and experience to be a good physician. If Step 1 were the focus of teaching, then you could cancel all the classes, just hand out copies of First Aid, UWorld, and Pathoma, and meet up again in 6 months to take the test. Or you can do what the Caribbean schools do and exclusively teach to the boards, then make people take practice boards and prohibit them from taking the real thing until they do well on the practice. Now, some people want that. Maybe even a lot of people. But teaching to the test makes it very hard to see the bigger picture.
 
Why can’t you use it to evaluate the quality of a school?


Once you are there, you may find that your specialty doesn't require a high score. After 2 years of tests, you figure out how to accurately evaluate your abilities. My friend wanted the mean score for child neurology. During dedicated, he studied enough to get a 234, pretty much exactly what he needed. I wanted a 250. I studied harder and hit my goal. It had nothing to do with the quality of our school.
 
Why can’t you use it to evaluate the quality of a school?



Because medical schools teach you the basics to be a functional physician, not to pass Step 1. That's on you as the student. Going to a particular medical school does not guarantee you their average Step 1 score.
 
The standard deviation on Step 1 scores is huge: 10-15+ points just about everywhere. There is no meaningful difference between the averages at Baylor and JHU, and nothing to gain by comparing them.

That's not even getting into the fact that Step 1 studying is very individualized. Everyone uses UFAP (UWorld, First Aid, Pathoma). Some people use other resources like Rx, Goljan, Kaplan, etc. Case in point: your board prep is going to come primarily from nationally available resources regardless of where you go to school. Hence it's not a good way to evaluate a school's quality.
 
PDs always cite Step 1, but its effect is likely not linear. It's more about cutoffs, then somewhat stepwise gains above them with diminishing returns, and each cutoff is different for each specialty. Letters from people in the field are also always cited as important, and there's a huge concentration of big names at places like Hopkins, plus research opportunities. Top residencies also have experience with students from highly ranked USNWR schools and trust that product a lot. I think rankings are kind of silly, but using Step 1 to determine where to rank schools wouldn't make a ton of sense either.
 