Okay let's see if I can add any sort of insight to this discussion.
If we look at the top schools by USNWR (which, although a flawed ranking system, still correlates well with a) prestige and b) what people generally consider when they think of top medical schools), we see that most top medical schools are affiliated with USNWR Honor Roll hospitals (i.e., top hospitals, with the same premise of a flawed but "good enough" ranking system):
1. Harvard - MGH (3), BWH (13)
2. Stanford - Stanford Hospital (14)
3. Hopkins - JHH (4)
4. UCSF - UCSF Medical Center (7)
5. Penn - HUP (9)
6. WashU - BJH (11)
7. Columbia - NYP (6)
8. Duke - Duke UH (16)
9. Yale - NONE
10. University of Washington - NONE
11. NYU - NYU Langone (10)
12. Michigan - UMich Hospital (18)
13. UChicago - NONE
14. UCLA - UCLA (5), Cedars-Sinai (17)
15. Vanderbilt - NONE
16. Pitt - UPMC (12)
17. Northwestern - NMH (8)
18. Cornell - NYP (6)
19. UCSD - NONE
20. Baylor - Houston Methodist (19)
21. Mt. Sinai - Mt. Sinai Hospital (15)
So of the top 20(ish), only 5 are not affiliated with a top hospital. Two of those are state schools (UWash, UCSD) with regional preferences, so we'll exclude those just to make things less complicated for now. Yale, Vanderbilt, and UChicago all have good-not-great hospitals. In terms of residency rankings, Doximity gives Yale top 10 spots in derm and psych, UChicago a top 10 spot in PM&R, and Vanderbilt top 10 spots in ENT, general surgery, urology, and med/peds combined residency.
So then, as a test case, let's take UChicago: a top 10 med school without a top hospital and with only a single residency program making it into the top 10 for any specialty (by Doximity, which isn't a great way to go about it, but I'm not familiar with who's who in every single specialty, so it was the easiest way to get a general list).
Theoretically, according to your argument, students at UChicago will get inferior training in certain specialties compared to students at schools with top departments in them. However, if you look at their 2016 match list (https://pritzker.uchicago.edu/sites/pritzker.uchicago.edu/files/uploads/2016 Match Results Website.pdf), they match very well in essentially every specialty.
Let's look at Yale, a top 10 med school without a top hospital and with only two top residency programs. Their 2016 match list is stellar (https://forums.studentdoctor.net/threads/match-list-2016.1189265/#post-17544866). If you look at IM, literally only 3 people did not match to a top-tier IM residency. Looking at other specialties yields similar results.
So from this, we can conclude (or at least informedly speculate) that clinical training at top medical schools doesn't depend much on the strength of your home department: strong residencies will still take students from schools without strong programs in that specialty. We can also conclude (or speculate) that the strength of a medical school doesn't necessarily depend on the strength of its affiliated hospital, though there certainly exists a general correlation.
Now, ignore everything I said above because it really doesn't matter. The following is what matters.
When you're applying into a specialty, you have, for the most part, very little exposure to that specialty in medical school. Someone at Harvard applying into pediatrics will have 6-8 weeks of their clinical year rotating at Boston Children's + a few months of 4th-year elective rotations in pediatrics + a few months of sub-Is + probably a bit of research. That's probably less than a year total of experience in peds, and in that time you are just creating a foundation for what you're going to learn during residency. You learn how to do H&Ps, how to write notes, how to present, how to communicate effectively with other specialties or hospitals, how to create a ddx, assessment, and plan, and maybe how to do a few procedures as well. But you do that at every school - someone going into intern year should be at least marginally competent at all or most of those tasks. Two incoming residents at Boston Children's who come from Harvard and East Dakota State College of Caribbean Medicine will likely exit the program equally prepared to practice pediatrics. The fact that they both interviewed and matched at BCH means that they were both qualified enough to start as interns there regardless of how strong their home programs were.
NOW - that is not to say that having a strong home program doesn't have advantages. It gives you access to strong research opportunities, strong mentorship from big names in the field, and well-known, well-respected letter writers; it gives you a strong home program to match into (statistically, you are most likely to match at your home institution in most specialties); and it helps you make connections early.
But it doesn't mean you had better training than someone at a school with a "worse" program. To put it in perspective one last time: give me an intern six months into the worst pediatrics residency program in the US, and I will essentially guarantee you that they are more capable than a graduating 4th-year med student at Penn who rotated at CHOP.
On Rankings
To add some of my own nuance to this discussion, there are so many underutilized ways to slice this pie. The obsession with USNWR indicates a near-complete dismissal of other third-party hospital safety/quality rankings. Examples include Truven Health Analytics (http://100tophospitals.com/Portals/2/assets/TOP_17235_1016_100 Top Hospitals Study_WEB.pdf) and Leapfrog's Hospital Safety Grade (http://www.hospitalsafetygrade.org/), among others. Most of these rankings provide quantifiable, data-driven, and transparent methodologies for their ratings. Many are based primarily on measures of clinical standard of care and utilization of appropriate measures/procedures for patient care.
None of these metrics are perfect, and none are completely representative of an institution's capacity for clinical care. Yet they serve as data points for building a nuanced picture of actual health quality at many hospital systems across the nation. Many of these rankings have come under intense scrutiny from the medical community, for both good and bad reasons. In many instances, such ranking systems fail to match a particular institution's own methodology for appropriate care. In other circumstances, many hospitals received a wake-up call as to deficiencies in their current systems (search Cleveland Clinic and Leapfrog for such a story). Regardless, a critical mind must be used to analyze and determine which of these ranking systems are useful. As an example, I am not a fan of Leapfrog's methodology, as it relies heavily on hospital self-reporting rather than a third-party safety management system.
Selecting Medical Schools
As far as developing an optimal strategy for medical school choice goes, here are my thoughts. I think there are two clear approaches that most applicants will fall into:
1. Going in with a strong desire for a particular field (that will most likely change during medical school)
Say, as a naive premed, I have my heart set on orthopedics. Knowing this, I can actually structure my medical school list/choices based on the following metrics (not in any particular order):
1. Research productivity in orthopedics
2. # of students matching into orthopedics, averaged over the number of years of match list data available
3. Clinical ranking of school, as determined by one or multiple hospital safety rankings (see above)
4. USNWR ranking (or just PD/Peer score) to serve as "prestige" factor
5. Other key requirements of medical school choice (location, climate, proximity to family, cost of living, curriculum, etc.)
Now you'll ask: how do I measure and compare research productivity in my given field of interest? Thankfully, in the US, large amounts of research funding are funneled and directed by the NIH. As such, NIH funding to medical schools is reported by specialty (see: http://www.brimr.org/NIH_Awards/2016/NIH_Awards_2016.htm). Total research funding can serve as a loose metric for research productivity in that particular specialty. Some research-oriented specialties actually have published academic articles analyzing research productivity across hospitals/programs; these serve as the gold standard for measuring research output at a particular school.
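To make that concrete, here is a minimal sketch (in Python, with entirely invented dollar figures and school names) of how one might put raw departmental NIH funding onto a common scale before comparing schools. Min-max normalization is just one reasonable choice, not the only one:

```python
# Hypothetical example: min-max normalize raw departmental NIH funding
# (all dollar figures invented) onto a 0-10 scale so it can be combined
# with the other factors discussed below.
funding = {"School A": 4_200_000, "School B": 1_100_000, "School C": 650_000}

lo, hi = min(funding.values()), max(funding.values())
normalized = {school: 10 * (amount - lo) / (hi - lo)
              for school, amount in funding.items()}
print(normalized)  # School A -> 10.0, School C -> 0.0
```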
Each of the above factors can be weighted differently to form a weighted average for each school, based on how "sure" one is about the particular specialty. If all I want to do in life is ortho, then I can weight factors 1 and 2 at 30% each. More conservatively, each factor can have an even 20%, or there can even be a strong bias toward factor 5. Finally, as cost remains a major factor, dividing the weighted average by the average student indebtedness (see MSAR) or cost of attendance will provide you with a value score for each school. I personally developed my school list using a similar methodology.
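For the spreadsheet-averse, here is a minimal sketch of that weighted value score. Every school name, factor score, weight, and cost below is an invented placeholder; substitute your own normalized data (e.g., from the NIH snippet above) and weights reflecting your own certainty:

```python
# Hypothetical sketch of the weighted "value score" described above.
# All names, scores, weights, and costs are invented placeholders.

# Factor order matches the numbered list above:
factors = ["ortho research", "ortho match history", "clinical rank",
           "prestige (USNWR/PD score)", "other fit (location, curriculum, ...)"]

# Specialty-committed weighting: 30% each on factors 1 and 2.
# A more conservative choice would be an even 0.20 across the board.
weights = [0.30, 0.30, 0.15, 0.15, 0.10]
assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1.0

# Factor scores on a common 0-10 scale; cost is average indebtedness
# (see MSAR) or cost of attendance, in thousands of dollars.
schools = {
    "School A": ([9, 8, 7, 9, 5], 320),
    "School B": ([6, 7, 8, 6, 9], 180),
}

def value_score(scores, cost):
    """Weighted average of factor scores, divided by cost."""
    weighted_avg = sum(w * s for w, s in zip(weights, scores))
    return weighted_avg / cost

# Rank schools from best to worst value.
for name, (scores, cost) in sorted(schools.items(),
                                   key=lambda kv: -value_score(*kv[1])):
    print(f"{name}: value score = {value_score(scores, cost):.3f}")
```

You could obviously do the same thing in a spreadsheet; the point is that making the weights explicit forces you to articulate how sure you actually are about the specialty (and note that dividing by cost means the cheaper school can out-rank the "better" one, which is the intent).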
This ranking system serves as an optimization for a particular specialty of interest. The key components of a successful application still rest on you. You will still have to work hard, learn a lot, get a competitive Step 1 score, garner good clinical grades, have strong sub-I experiences, do related research, and interview well in order to earn a spot in said residency.
2. Blank slate in terms of specialty choice (most matriculants, even if they think otherwise)
One piece of advice I received from a former medical student, now a resident, was this: going into M1, if there is a chance that you may want to pursue a "competitive" specialty, prepare yourself, as a student, for that specialty from day 1. Now, this is another nuanced point. He was not saying to lock down on ortho as soon as you set foot in medical school. Rather, he was imparting the common sense that having the stats (Step 1) and requirements handled for a more challenging residency automatically carries over, in terms of prep, to a less competitive residency. Yes, you will have to completely change your research focus, letters of recommendation, etc. if you decide not to pursue the competitive specialty. The fundamentals of a strong residency application (strong Step 1 == strong clinical grades > preclinical grades/rank/AOA), however, will still carry over.
Here we come to the adage "know yourself." There is no clear-cut answer to how/when to decide on a particular specialty. Some of us may clearly know we would like to pursue a primary care field; in that vein, a bit of research into medical schools' affiliated public hospitals (LA County, Grady, etc.), say, may be a worthwhile exercise for someone leaning toward emergency medicine. Others (as most are encouraged to be) will want to truly explore all aspects of medicine before homing in on a particular field during the 3rd/4th year of medical school.
This will be a point of trade-offs. As one chooses to remain truly open and unbiased toward a specialty, one incurs the opportunity cost of early preparation for a residency application. The time spent "uncommitted" reduces the time/opportunity for research and publication in that field, for shadowing and building strong connections within one's medical school, and for a general sense of purpose that may translate into stronger academic/board performance. This may go as far back as making a "sub-optimal" school choice in terms of research productivity/match history in that specialty (an arguable point).
Statistically, this trade-off will be a non-factor for most medical students, as most students, by definition, will not pursue/match into a competitive residency. As such, this is a trade-off most will be willing to make. Unnecessarily committing to a field one later ends up hating, just because one felt the need to commit early, is completely irrational.
Yet for those who decide during late 3rd year to pursue neurosurgery, time will be limited, and the door may already have closed if Step 1 performance was not up to par. As you can see, the trade-off may be quite big for someone without the foresight that they may want to pursue a competitive specialty. We come back again to the point: know yourself.
Now for where the rubber meets the road for those truly undecided going into medical school. Of the 5 metrics I mentioned before that factor into school choice, the first two are irrelevant for most entering medical students. So beyond incorporating hospital quality data rather than relying entirely on USNWR, what can they do? I recommend some introspection.
Think back to your clinical experiences: shadowing, volunteering, scribing, EMT work, etc. We all surely have some, and some of us will have a wealth of diversity in such experiences. It is from here that we can begin to get an inkling of our interests. It will be difficult to shut out the voices of family, friends, mentors, and others who may be influencing our path. Their input and impact are crucial, but there is something to be said for "intuition." The greater the diversity of clinical experience, the better an idea each of us may have about our interests. Granted, we are not experiencing the entirety of clinical practice, so our preferences are at best an incomplete picture. Yet they are still preferences. And if you sense a leaning in one direction or another, investigate it. If it happens to be toward a competitive residency, then you can choose to optimize based on approach 1 above. If it is rural medicine, structure your school list toward that. I am clearly imparting common sense.
Final Thoughts
At the end of the day, it does not matter what some non-trad premed thinks about school choice. What matters are core principles: introspection, embracing complexity/nuance/grey areas, courage (to make decisions and see them through), thinking for yourself, and seeking wisdom from those farther along the path. All of the above are much easier said than done. I hope most of us will not allow a singular ranking system to rule us, relinquishing our own critical judgement to it.