Medical school rankings


All too often I read posts where people are asking whether they could "get into a top 20 school." Is there any specific ranking they are referring to? Is there an official med school rankings list?

 
All too often I read posts where people are asking whether they could "get into a top 20 school." Is there any specific ranking they are referring to? Is there an official med school rankings list?

It would've taken less time to Google this than it did to create this thread. US News isn't really "official," though; it's more pseudo-official.
 
All too often I read posts where people are asking whether they could "get into a top 20 school." Is there any specific ranking that they are referring to? Is there an official med school rankings list?

They are referring to US News. It is anything but official.
 
There are no official rankings. Some people are talking about US News, and others are talking about public research funding. In the end, just go where you want to go.
 
They are referring to US News. It is anything but official.

Well, schools tend to brag about receiving a high US News ranking... not that that's at all surprising.

I think the fact that so many people acknowledge its existence gives it at least some sort of pseudo-official status, ridiculous as that may be.
 
US News is what a lot of people refer to when they try to rank medical schools. However, the US News ranking criteria are very specific and actually rank medical schools on only two things: the amount of NIH research funding they receive and their primary care ranking.

As pre-meds, these two things are not the only things we are interested in when deciding between schools. For example, the rankings do not take into account specialty matching, tuition, faculty-to-student ratio, or how well students do on average on the USMLE (though that information is not publicly released anyway).

But if you want a reference for a "top 20," US News is your best bet.
 
A lot of people will use "top 20 school" to mean "schools that are very research-oriented," since that's really all that separates them from other "tiers" of schools.
 
All too often I read posts where people are asking whether they could "get into a top 20 school." Is there any specific ranking that they are referring to? Is there an official med school rankings list?

http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-medical-schools

US News is the most common source for "medical school rankings." Keep in mind that it's not like undergrad, where school ranking is actually somewhat relevant. Basically any US-accredited MD or DO school will get you where you want to be (a doctor).
 
http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-medical-schools

US News is the most common source for "medical school rankings." Keep in mind that it's not like undergrad, where school ranking is actually somewhat relevant. Basically any US-accredited MD or DO school will get you where you want to be (a doctor).

Some residents have posted that it does in fact matter where you go to medical school.

I wish I had the quotes but I don't.
 
Of course it matters; the program directors themselves acknowledged that it does in their survey. You can look it up yourself.
However, your grades and USMLE scores matter much more across the board.
So crush the USMLE and write your own ticket.
 
To claim that school ranking is irrelevant is delusional. In before I get called out by the DO brigade.

US News is what a lot of people refer to when they try to rank medical schools. However, the US News ranking criteria are very specific and actually rank medical schools on only two things: the amount of NIH research funding they receive and their primary care ranking.

As pre-meds, these two things are not the only things we are interested in when deciding between schools. For example, the rankings do not take into account specialty matching, tuition, faculty-to-student ratio, or how well students do on average on the USMLE (though that information is not publicly released anyway).

But if you want a reference for a "top 20," US News is your best bet.

Though it's not a great ranking (no college ranking system is), it's got more going into it than just NIH dollars and primary care. It takes into account peer assessment score, residency director assessment score, research activity (total and per faculty member), student selectivity, acceptance rate, average MCAT, average GPA, and faculty resources (i.e., full-time clinical/science faculty-to-student ratio).

The primary care rankings include most of the same, with some things added and subtracted (and all of them weighted differently). On the whole, it's all still not that useful as currently constructed. I do think one could create a ranking that is useful in creating tiers of schools (Tier 1, Tier 2, etc) rather than discrete ranks (1 vs 3 vs 7, etc). However, this ranking methodology isn't it.
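
Since the ranking is basically a weighted sum over those criteria, a toy example shows the mechanics. Everything below is hypothetical: the criterion names loosely mirror the list above, but the scores and weights are made up and are not the actual US News weights.

```python
# Hypothetical composite-score calculation: normalized criterion scores (0-1)
# combined with assumed weights. None of these numbers are real.
schools = {
    "School A": {"peer": 0.90, "pd": 0.85, "research": 0.95, "selectivity": 0.92, "faculty": 0.80},
    "School B": {"peer": 0.75, "pd": 0.80, "research": 0.55, "selectivity": 0.88, "faculty": 0.90},
    "School C": {"peer": 0.60, "pd": 0.70, "research": 0.40, "selectivity": 0.70, "faculty": 0.85},
}

# Assumed weights; in any real scheme these would need justification
weights = {"peer": 0.25, "pd": 0.25, "research": 0.30, "selectivity": 0.10, "faculty": 0.10}

def composite(scores: dict, weights: dict) -> float:
    """Weighted sum of normalized criterion scores."""
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {composite(schools[name], weights):.3f}")
```

Note how much of the outcome hinges on the weights, which is exactly what the rest of this thread ends up arguing about.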
 
Though it's not a great ranking (no college ranking system is), it's got more going into it than just NIH dollars and primary care. It takes into account peer assessment score, residency director assessment score, research activity (total and per faculty member), student selectivity, acceptance rate, average MCAT, average GPA, and faculty resources (i.e., full-time clinical/science faculty-to-student ratio).

The primary care rankings include most of the same, with some things added and subtracted (and all of them weighted differently). On the whole, it's all still not that useful as currently constructed. I do think one could create a ranking that is useful in creating tiers of schools (Tier 1, Tier 2, etc) rather than discrete ranks (1 vs 3 vs 7, etc). However, this ranking methodology isn't it.

Ya, I'll agree that the US News ranking is crap. I'll also agree that ranking probably matters very little in comparison to other application factors. But to say your school's reputation doesn't matter at all, that I won't agree with.
 
US News is what a lot of people refer to when they try to rank medical schools. However, the US News ranking criteria are very specific and actually rank medical schools on only two things: the amount of NIH research funding they receive and their primary care ranking.

As pre-meds, these two things are not the only things we are interested in when deciding between schools. For example, the rankings do not take into account specialty matching, tuition, faculty-to-student ratio, or how well students do on average on the USMLE (though that information is not publicly released anyway).

But if you want a reference for a "top 20," US News is your best bet.
This is wrong. They base the research rankings on 8 different criteria and the primary care rankings on 7. You can take a look at the specifics here if you like:

http://www.usnews.com/education/bes...012/03/12/methodology-medical-school-rankings
 
Some residents have posted that it does in fact matter where you go to medical school.

I wish I had the quotes but I don't.

It definitely matters in academia, but I highly doubt it matters significantly elsewhere.
 
Though it's not a great ranking (no college ranking system is), it's got more going into it than just NIH dollars and primary care. It takes into account peer assessment score, residency director assessment score, research activity (total and per faculty member), student selectivity, acceptance rate, average MCAT, average GPA, and faculty resources (i.e., full-time clinical/science faculty-to-student ratio).

The primary care rankings include most of the same, with some things added and subtracted (and all of them weighted differently). On the whole, it's all still not that useful as currently constructed. I do think one could create a ranking that is useful in creating tiers of schools (Tier 1, Tier 2, etc) rather than discrete ranks (1 vs 3 vs 7, etc). However, this ranking methodology isn't it.

Ya, I'll agree that the US News ranking is crap. I'll also agree that ranking probably matters very little in comparison to other application factors. But to say your school's reputation doesn't matter at all, that I won't agree with.

How would you improve upon it?
 
I don't know how useful the rankings are in general, but I think looking at the scores for specific criteria can be somewhat useful. I really only care about the residency director scores for competitive specialties, since it seems like the school you attend might give you a slight edge.
 
How would you improve upon it?

That depends largely on the kind of data I'd have access to. I don't know that, so I'll comment merely on the obvious things I would change with just these data points.

First of all, I would never publish them with incremental rankings as if I actually had the ability to distinguish a third-ranked school from a fourth-ranked school, or a 25th-ranked school from a 26th-ranked school. It's ridiculous and is begging for over-interpretation. Universities and the education they provide are very complex and difficult to compare with such precision. These sorts of rankings should be produced for "clusters" of universities (Tier 1, Tier 2), and the size of a tier should not be predetermined; it should expand or shrink to accommodate however many schools belong in that cluster (no producing arbitrary cutoffs just to have a cutoff).

With this in mind, I would create hundreds of different models that change the weightings and the variables included. For example, acceptance rate is weighted 0.01 in the research ranking but 0.0075 in the primary care ranking. I would then create a way of clustering the schools that consistently perform at the top despite the different weightings/variables used. Then you'd have clusters for schools at other levels that perform consistently at the bottom, or consistently in the upper middle, etc.

Finally, I would actually expect some kind of justification for every metric being used. Is there any data that suggests that it is a mark of a good "primary-care" education if many of a school's students go into primary care? How is that an indication of quality, not quantity? Is it important enough to be weighted at 0.30?

Given the prestige behind these rankings, I'd expect a substantially more quantitatively and scientifically rigorous methodology. As it is, it's just not good, and you can make the different schools' ranks jump and dance around if you play with the weightings even a little bit.
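
A minimal sketch of that weighting-perturbation idea, with made-up schools and scores (nothing here is real data, and the "clustering" step is just a mean/spread summary rather than a formal algorithm):

```python
# Sketch of the "run hundreds of models with different weightings" idea.
# All schools and scores are invented; the point is only to show how rank
# stability could be measured across many weighting schemes.
import random

random.seed(0)
criteria = ["peer", "pd", "research", "selectivity", "faculty"]

# Hypothetical normalized scores (0-1) for made-up schools
scores = {f"School {c}": {k: random.random() for k in criteria} for c in "ABCDEFGH"}

def random_weights(criteria):
    """Draw a random weighting scheme and normalize it to sum to 1."""
    raw = {k: random.random() for k in criteria}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

def rank_once(scores, weights):
    """Order schools by weighted composite under one weighting scheme."""
    composite = {s: sum(weights[k] * v[k] for k in weights) for s, v in scores.items()}
    return sorted(composite, key=composite.get, reverse=True)

# Run many models and record each school's rank under each one
ranks = {s: [] for s in scores}
for _ in range(500):
    for position, school in enumerate(rank_once(scores, random_weights(criteria)), start=1):
        ranks[school].append(position)

# Schools whose ranks barely move across weightings could be grouped into a
# tier; schools whose ranks swing widely clearly can't be pinned to a single number.
for school, rs in sorted(ranks.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f"{school}: mean rank {sum(rs)/len(rs):.1f}, range {min(rs)}-{max(rs)}")
```

The schools that stay put no matter how you jiggle the weights are the ones you could defensibly call a tier.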
 
Peer and expert voting, just like the Hall of Fame.

Peers such as medical school deans, as well as residency directors, already vote in the current system; who else would you ask to rank schools?
 
That depends largely on the kind of data I'd have access to. I don't know that, so I'll comment merely on the obvious things I would change with just these data points.

First of all, I would never publish them with incremental rankings as if I actually had the ability to distinguish a third-ranked school from a fourth-ranked school, or a 25th-ranked school from a 26th-ranked school. It's ridiculous and is begging for over-interpretation. Universities and the education they provide are very complex and difficult to compare with such precision. These sorts of rankings should be produced for "clusters" of universities (Tier 1, Tier 2), and the size of a tier should not be predetermined; it should expand or shrink to accommodate however many schools belong in that cluster (no producing arbitrary cutoffs just to have a cutoff).

With this in mind, I would create hundreds of different models that change the weightings and the variables included. For example, acceptance rate is weighted 0.01 in the research ranking but 0.0075 in the primary care ranking. I would then create a way of clustering the schools that consistently perform at the top despite the different weightings/variables used. Then you'd have clusters for schools at other levels that perform consistently at the bottom, or consistently in the upper middle, etc.

Finally, I would actually expect some kind of justification for every metric being used. Is there any data that suggests that it is a mark of a good "primary-care" education if many of a school's students go into primary care? How is that an indication of quality, not quantity? Is it important enough to be weighted at 0.30?

Given the prestige behind these rankings, I'd expect a substantially more quantitatively and scientifically rigorous methodology. As it is, it's just not good, and you can make the different schools' ranks jump and dance around if you play with the weightings even a little bit.

Some interesting suggestions here, although I can definitely envision the tier system being more problematic than the numeric ranking. Any tier system would have subjective cutoffs, and there would inevitably be controversy surrounding the gray-area schools relegated to one tier or another. I agree that the weights assigned to each category are arbitrary and that playing with them would change the rankings, so the difference between #5 and #8 really could be completely meaningless. A better ranking system would be more consistent, but I'm not convinced a significantly better one exists.

As another poster said, looking at standings in each category individually is likely more useful than looking at the calculated rankings.

Also, what do we think about the hypothetical situation in which board scores were incorporated into the rankings? Board scores are more indicative of individual effort and may bias the results for schools with a focus on less competitive specialties, but they are one "standardized" way of measuring medical school quality, to some extent. Thoughts?
 
Peers such as medical school deans, as well as residency directors, already vote in the current system; who else would you ask to rank schools?

On what basis would the dean of UVM judge the quality of Creighton? The rankings are inane, self-perpetuating statistical tautologies.
 
Peers such as medical school deans, as well as residency directors, already vote in the current system; who else would you ask to rank schools?

SDNers with the most posts per day.
 
I'm sure it does matter. A little. But what matters more is your ability.

I have a cousin who went to a Caribbean school. She matched and is now in a radiology residency, one of the toughest to get into. She impressed with her USMLE scores and especially in person when she did her rotations.
 
If no one has seen this yet, they should

http://www.nrmp.org/data/programresultsbyspecialty2012.pdf

These are the results of the survey of residency directors conducted by the National Resident Matching Program.

Near the top are things like USMLE Step scores, clerkship grades, and letters. The school you came from is a factor, but it doesn't even break the top 20 factors considered.

Also of note is how important being a graduate of a US MD school is.
 
I'm sure it does matter. A little. But what matters more is your ability.

I have a cousin who went to a Caribbean school. She matched and is now in a radiology residency, one of the toughest to get into. She impressed with her USMLE scores and especially in person when she did her rotations.

These stories are not going to be around for too much longer, I fear.
 
On what basis would the dean of UVM judge the quality of Creighton? The rankings are inane, self-perpetuating statistical tautologies.

I never vouched for the validity of the current system, I merely stated what it involves.
 
Some interesting suggestions here, although I can definitely envision the tier system being more problematic than the numeric ranking. Any tier system would have subjective cutoffs, and there would inevitably be controversy surrounding the gray-area schools relegated to one tier or another. I agree that the weights assigned to each category are arbitrary and that playing with them would change the rankings, so the difference between #5 and #8 really could be completely meaningless. A better ranking system would be more consistent, but I'm not convinced a significantly better one exists.

As another poster said, looking at standings in each category individually is likely more useful than looking at the calculated rankings.

Also, what do we think about the hypothetical situation in which board scores were incorporated into the rankings? Board scores are more indicative of individual effort and may bias the results for schools with a focus on less competitive specialties, but they are one "standardized" way of measuring medical school quality, to some extent. Thoughts?

Actually, it wouldn't have a bunch of subjective cutoffs. I don't know what your background is, but there are quite a few sophisticated clustering approaches that leverage exactly these sorts of small-to-large differences to group items together. They're used often in proteomics, genomics, and even social network analysis. The majority of my research involved clustering analysis, and even that was fairly basic compared to the sophisticated tools that are out there. More importantly, this would require schools to remain close together even after hundreds of different weightings and variables were tested, which is a hard thing to accomplish; any school that was actually that close after all that would simply be part of the same cluster.

If you have a ranking that can't distinguish #5 from #8, then why on earth would you publish a list doing just that (and worse)? Even if we accept your assertion that creating tier cutoffs would require an equally subjective decision (it doesn't), that still creates, at worst, 4 or 5 subjective cutoffs, rather than 138 incremental rankings that represent only a single weighting scheme (one which has not been justified or validated in any fashion). It's like taking a random hypothesis, making assumptions without justification, applying a model that itself has no prior explanation or evaluation, and then not only expecting people to take the results seriously, but presenting those results with complete precision and no "error bars."

Whether a better ranking system currently exists is hardly an argument that a substantially better one couldn't be produced. More importantly, it should prompt us to ask whether it even makes sense to rank the schools in this manner. At the end of the day, if they are going to attempt to give us a quantitative ranking of schools down to the level of a single increment, we should expect an actual quantitative analysis with real justification and methodology, rather than some hand-waving and jiggling of weightings to produce the kind of list people expect to see (if Harvard, Stanford, or Johns Hopkins were not in the top 5, do you think they would keep these weightings, or change them until they produced a more "intuitive" top list?).
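
For what it's worth, here's a toy example of the kind of clustering being described: group schools by their rank profiles across many weighting schemes and let the data decide the tier boundaries. The rank profiles and the choice of three tiers below are invented for illustration; in practice they would come out of the perturbation exercise sketched earlier in the thread.

```python
# Toy tiering via hierarchical clustering of (invented) rank profiles.
# Rows = schools, columns = rank under each of five hypothetical weighting schemes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rank_profiles = np.array([
    [1, 2, 1, 1, 2],        # consistently near the top
    [2, 1, 2, 3, 1],
    [3, 3, 4, 2, 3],
    [7, 8, 6, 7, 8],        # consistently mid-pack
    [8, 7, 8, 8, 7],
    [14, 15, 15, 13, 14],   # consistently near the bottom
    [15, 14, 13, 15, 15],
])
schools = ["A", "B", "C", "D", "E", "F", "G"]

# Agglomerative clustering on the rank profiles
Z = linkage(rank_profiles, method="average")

# Cut the dendrogram into three flat clusters ("tiers"); in a real analysis
# the number of tiers could itself be chosen from the data rather than fixed.
tiers = fcluster(Z, t=3, criterion="maxclust")

for school, tier in zip(schools, tiers):
    print(f"School {school}: tier {tier}")
```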
 
Actually, it wouldn't have a bunch of subjective cutoffs. I don't know what your background is, but there are quite a few sophisticated clustering approaches that leverage exactly these sorts of small-to-large differences to group items together. They're used often in proteomics, genomics, and even social network analysis. The majority of my research involved clustering analysis, and even that was fairly basic compared to the sophisticated tools that are out there. More importantly, this would require schools to remain close together even after hundreds of different weightings and variables were tested, which is a hard thing to accomplish; any school that was actually that close after all that would simply be part of the same cluster.

If you have a ranking that can't distinguish #5 from #8, then why on earth would you publish a list doing just that (and worse)? Even if we accept your assertion that creating tier cutoffs would require an equally subjective decision (it doesn't), that still creates, at worst, 4 or 5 subjective cutoffs, rather than 138 incremental rankings that represent only a single weighting scheme (one which has not been justified or validated in any fashion). It's like taking a random hypothesis, making assumptions without justification, applying a model that itself has no prior explanation or evaluation, and then not only expecting people to take the results seriously, but presenting those results with complete precision and no "error bars."

Whether a better ranking system currently exists is hardly an argument that a substantially better one couldn't be produced. More importantly, it should prompt us to ask whether it even makes sense to rank the schools in this manner. At the end of the day, if they are going to attempt to give us a quantitative ranking of schools down to the level of a single increment, we should expect an actual quantitative analysis with real justification and methodology, rather than some hand-waving and jiggling of weightings to produce the kind of list people expect to see (if Harvard, Stanford, or Johns Hopkins were not in the top 5, do you think they would keep these weightings, or change them until they produced a more "intuitive" top list?).

As my friend Wiz says, it's all about the Benjamins.
 