A study from the 70s:
http://www.jstor.org/pss/2577462 "The Reputations of American Medical Schools"
Old but a few thoughts may ring true today:
An introduction: "General reputation has much to do with the actual quality of a medical school." Some generalizations the authors offer (not data-based):
- Reputation affects whether students and faculty apply to or choose one school over another. This affects the quality of students and faculty and perpetuates the reputation.
- Medical school is a first but critical stepping stone in your career; reputation can affect subsequent career mobility.
- Reputation influences students' self-esteem and self-perception within reference groups.
- Reputation affects visibility and perceived ability of the faculty in the medical community.
- Reputation can enhance or hinder ability to get grants or obtain resources/facilities to carry out research.
A side note: there is significant self-aggrandizement:
- On a scale of 0-7, faculty members rated their own (currently affiliated) school 0.67 points higher than the rest of the medical community did.
- Faculty members rated their alma mater 0.73 points higher than raters who did not receive their MD training at the school in question.
- Over-rating was least likely at the highest-ranked schools and most likely at the lowest-ranked schools; the differences are statistically significant.
- But self-aggrandizement doesn't distort the rankings. There's extraordinary consensus about the relative standing of the 94 medical schools rated in this study in 1977.
- Past research shows there is differential association: physicians tend to associate with people who are from similarly ranked schools. This may reinforce self-aggrandizement.
Rank-order of perceived quality of the faculty, top 11 schools: The margin of error for the entire set is 0.16.
My interpretation is that the tiers (in 1977) are 1) Harvard, 2) Hopkins, Stanford, UCSF, Yale, Columbia, 3) Everyone else in a smooth gradation.
School, quality score, and visibility score (percentage of faculty who feel they know enough about the school to appraise it):
Harvard 5.71 87.3
Johns Hopkins 5.11 84.7
Stanford 5.11 81.2
California, San Francisco 5.01 75.1
Yale 5.00 82.0
Columbia 4.93 79.2
Duke 4.77 82.4
Michigan 4.74 76.2
Cornell 4.71 76.9
Washington, St. Louis 4.68 80.3
U. of Pennsylvania 4.66 75.6
What variables predict reputation?
In a zero-order correlation, faculty productivity (papers published in one year) accounts for 75% of the variance in perceived quality. Also in a zero-order correlation, NIH research-and-development funding (as opposed to total funding, which includes training grants) accounts for 70% of the variance.
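For the stats-curious: a "zero-order" correlation is just the plain Pearson correlation between two variables, with nothing controlled for, and "variance accounted for" is its square (75% of variance corresponds to r ≈ .87). A minimal sketch with made-up illustrative numbers, not the paper's data:

```python
import math

def pearson_r(xs, ys):
    # Plain (zero-order) Pearson correlation: no control variables.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical schools: papers published in a year vs. perceived-quality rating.
papers  = [120, 300, 450, 800, 950, 1400]
quality = [3.1, 3.6, 4.2, 4.8, 4.7, 5.7]

r = pearson_r(papers, quality)
print(f"r = {r:.2f}, variance accounted for = r^2 = {r*r:.2f}")
```

The point of "zero-order" is only that nothing else is partialled out; productivity and NIH funding are each correlated with quality on their own, and with each other, so their variance shares can't simply be added.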
There is a halo effect: in general, schools that are part of universities with national reputations are rated higher than productivity predicts. Schools not part of universities (e.g. Sinai?) are rated lower.
Schools in the South are rated a bit lower than productivity predicts (Duke, UVA, Emory, Vanderbilt). Schools in the Northeast and private schools show an effect of nearly equal size in the opposite (positive) direction (r=.20). Older schools have weakly higher ratings (r=.16).
Note that the above results concern the "quality of the medical faculty," but this correlates with ratings of "the effectiveness of the medical training program" at r=0.99, so the paper never discusses the latter separately.
=== Bringing it to 2011.
If we take USNWR reputation ratings as an indicator of current reputation, there no longer appear to be significant differences between adjacent schools on a rank-ordered list, though it's impossible to say for certain without knowing the margin of error.
We might say, for instance, that Harvard and Yale are statistically significantly different in peer/residency ratings, while neither the gap between Harvard and WashU nor the gap between WashU and Yale is significant. There would then be no basis for making "tiers," because we couldn't decide whether to group WashU with the first tier or the second.
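The tier problem above can be sketched as an interval-overlap check: treat each school's rating plus or minus a margin of error as an interval, and call two schools distinguishable only when their intervals don't overlap. The ratings below are made up for illustration (real USNWR margins of error aren't published); the margin of error is borrowed from the 1977 study:

```python
MOE = 0.16  # assumed margin of error (the 1977 study's figure)

# Hypothetical peer ratings, not real USNWR numbers:
# A plays the role of Harvard, B of WashU, C of Yale in the example above.
ratings = {"A": 4.8, "B": 4.6, "C": 4.4}

def distinguishable(x, y):
    # Intervals [r - MOE, r + MOE] fail to overlap iff the gap exceeds 2 * MOE.
    return abs(ratings[x] - ratings[y]) > 2 * MOE

print(distinguishable("A", "B"))  # gap 0.2 <= 0.32 -> False
print(distinguishable("B", "C"))  # gap 0.2 <= 0.32 -> False
print(distinguishable("A", "C"))  # gap 0.4 >  0.32 -> True
```

Distinguishability isn't transitive: B overlaps both A and C even though A and C differ, so there's no clean place to cut a tier boundary.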
====
tl;dr - Back in 1977 there were clearer tiers separating the very top. Today there isn't evidence for such tiers. But there's still a statistically significant difference between something like rank 2 and rank 15.
===
That said, many people place undue significance on the "top 10" because it's a nice round number. Northwestern has an institutional goal of being in the top 10 medical schools and top 10 hospitals by 2020. A dean of something at Pitt told some applicants during my interview day that yeah, they're aware they're not in the top 10; they're strong and have aspirations, but who in the top 10 is going to get kicked out?