The problem with USNWR, and with most school ranking systems for that matter, is that they try to be too broad. The criteria are too comprehensive. The moment you want to rank schools on faculty-student ratio, MCAT average, and reputation with residency directors all at once, you have to make an arbitrary decision about how much weight each of those factors gets in deciding which medical schools are "best."
If, however, you look at the ratings for each individual criterion and weigh them according to how important YOU think they are, then the data are somewhat valuable. No single criterion can capture everything that makes a med school "good," but an attempt to be that comprehensive in a single composite score will fail, because the value judgments behind the factor weights are far from universal.
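If you want to see how much the weights alone drive the outcome, here's a toy Python sketch. The schools and scores are made up purely for illustration; the point is that the same per-criterion data can crown a different "winner" depending on which weighting you happen to pick.

```python
# Toy illustration with hypothetical schools and invented, normalized scores.
criteria = ["reputation", "mcat_avg", "faculty_student_ratio"]

schools = {
    "School A": [0.95, 0.80, 0.60],
    "School B": [0.85, 0.95, 0.70],
    "School C": [0.70, 0.75, 0.98],
}

def rank(weights):
    """Sort schools by a weighted composite of the criterion scores."""
    composite = {
        name: sum(w * s for w, s in zip(weights, scores))
        for name, scores in schools.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# Two equally "reasonable" weightings disagree about the top school.
print(rank([0.6, 0.3, 0.1]))  # reputation-heavy -> School A comes out first
print(rank([0.1, 0.3, 0.6]))  # ratio-heavy      -> School C comes out first
```

Neither weighting is wrong; they just encode different priorities, which is exactly why a single published composite rank shouldn't be treated as an objective ordering.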
I beg everyone to stop looking at year-to-year variations in USNWR ranks. They happen largely because of changes in methodology (arbitrary!) or changes in things you don't care about. Maybe a bunch of NIH grants happened to expire together in one year; that's cyclical and doesn't matter. Why do you care what a school's acceptance rate is? There are too many ways to manipulate it. USNWR wants the ranks to shuffle a little from year to year in order to sell magazines and subscriptions.
What you should care about are the metrics that actually matter to you.
Many posters in this thread have alluded to the importance of reputation.
So let's look at reputation in isolation. If you take the 2011-2013 reputation data from peer schools (i.e., deans) and from residency program directors and average them together, you get a clean split between the top 8 and everyone else:
- Harvard
- Johns Hopkins
- UCSF
- Stanford
- WashU
- Duke
- U Penn
- U Michigan
Really no surprises.
If you wanted to see the overlap with, say, the top 10 by Step 1 score, you would find that these four remain: Harvard, Hopkins, WashU, Penn.
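For anyone who wants to reproduce that kind of cut themselves, here's a minimal sketch of the two steps: average the peer and program-director scores, then intersect with a Step 1 list. The numeric scores below are placeholders, not the actual USNWR values, and only the overlapping schools are listed in the Step 1 set.

```python
# Placeholder (peer assessment, residency-director assessment) scores on the 1-5 scale.
peer_and_pd = {
    "Harvard":       (4.8, 4.8),
    "Johns Hopkins": (4.8, 4.7),
    "WashU":         (4.6, 4.6),
    "U Penn":        (4.5, 4.5),
    # ... remaining schools omitted for brevity
}

# Average the two surveys into one reputation score per school, then rank.
avg_reputation = {school: (peer + pd) / 2 for school, (peer, pd) in peer_and_pd.items()}
top_by_reputation = sorted(avg_reputation, key=avg_reputation.get, reverse=True)

# A separate "top 10 by Step 1 average" list (only the overlapping names shown here).
top_by_step1 = {"Harvard", "Johns Hopkins", "WashU", "U Penn"}

overlap = [s for s in top_by_reputation if s in top_by_step1]
print(overlap)  # -> ['Harvard', 'Johns Hopkins', 'WashU', 'U Penn']
```

The broader point stands either way: pull the individual criterion data yourself and rank by whatever you actually care about, rather than trusting someone else's weighted blend.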