US News Results - Statistically Significant?


Oak

It occurs to me (at 3am) that the data in US N&WR is not usually reported with a standard deviation attached. How can they expect any student of the sciences to accept this information? There may be some fine print about this as I don’t have a copy of the magazine in front of me.

The residency director score, for example, is reported to the tenths digit. If the standard deviations are of the same magnitude as that last digit, then quite a large number of schools are not significantly different at all, even though they are "ranked" several places apart. Not that it should come as a surprise to anyone that the rankings are a rough guideline at the very best, but it seems US News may not even be capable of publishing an analysis that holds up to the standards of the very community it is attempting to critique.
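
To put a number on it, here is a minimal sketch with invented figures (not US News data) showing how two schools whose scores round to different tenths can still have overlapping 95% confidence intervals:

```python
# Minimal sketch, invented numbers: two schools whose survey scores round to
# different tenths can still be statistically indistinguishable once a
# plausible standard deviation is attached.
import math

def ci95(mean, sd, n):
    # Approximate 95% confidence interval for a mean (normal approximation).
    half_width = 1.96 * sd / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# Hypothetical residency-director means on a 5-point scale, an assumed rater
# SD of 0.6, and ~100 respondents per school.
a = ci95(mean=3.8, sd=0.6, n=100)
b = ci95(mean=3.7, sd=0.6, n=100)

print(f"School A: {a[0]:.2f}-{a[1]:.2f}")          # 3.68-3.92
print(f"School B: {b[0]:.2f}-{b[1]:.2f}")          # 3.58-3.82
print("Intervals overlap:", a[0] <= b[1] <= a[1])  # True, despite different ranks
```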

(P.S. I’m still a rankings *****, now I just realize how wrong it is.)

 
Anybody know the point differential for Princeton vs. Berkeley next ranking season?
 
Oak said:
It occurs to me (at 3am) that the data in US N&WR is not usually reported with a standard deviation attached. How can they expect any student of the sciences to accept this information? […]


More than just sciency people look at these rankings. And remember that when publishing a magazine, more words and numbers mean higher cost. So why spend the money when it really does not matter that much?
 
riceman04 said:
More than just sciency people look at these rankings. And remember that when publishing a magazine, more words and numbers mean higher cost. So why spend the money when it really does not matter that much?

But it does matter. US News has a large impact on the schools that people choose to apply to - probably less impact on the final decision. I wonder how many people would make different choices for their applications if they understood how fuzzy the numbers were.
 
Oak said:
It occurs to me (at 3am) that the data in US N&WR is not usually reported with a standard deviation attached. How can they expect any student of the sciences to accept this information? […]

Correct me if I am wrong, but I think a standard deviation is reported when you repeat a measurement several times to check the reproducibility of the data. A lot of what US News uses to rank schools (GPAs, MCAT scores, NIH funding, etc.) is fixed for a given year, so there is no point in conducting the study multiple times to obtain standard deviations (the SD would essentially be zero).
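
To make that distinction concrete, a small sketch (all numbers invented): a figure like NIH funding is a single fixed value for the year, while a survey score is an average over many raters and does have a spread:

```python
# Sketch of the distinction, all numbers invented: NIH funding is one fixed
# value per school per year (nothing to repeat, so no SD), whereas a peer
# assessment score is an average over many raters and does have a spread.
import statistics

nih_funding_musd = 250.0  # a single reported figure; re-collecting it changes nothing

dean_ratings = [4, 3, 5, 4, 4, 3, 5, 2, 4, 3]  # hypothetical 1-5 ratings
mean_rating = statistics.mean(dean_ratings)
sd_rating = statistics.stdev(dean_ratings)     # sample SD across raters

print(f"Peer score: {mean_rating:.1f} (SD across raters = {sd_rating:.2f})")
```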
 
Oak said:
It occurs to me (at 3am) that the data in US N&WR is not usually reported with a standard deviation attached. How can they expect any student of the sciences to accept this information? […]

Well, yes, the rankings are horrid, but not for the reason you specify. As a "student of science", your bigger concern should be outcome definition.

They are measuring the "best school". That raises the question: how do you define the best school? Their methodology, for research schools, weights the score as:

40% peer assessment (dean surveys, which only 56% of the deans filled out)
30% NIH funding
20% student selectivity (65% MCAT, 30% GPA, 5% acceptance rate)
10% faculty-to-student ratio
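
(For concreteness, a minimal sketch of how a composite under these weights could be computed; the component scores are invented and normalized to 0-1, and this is not US News's actual calculation:)

```python
# Sketch of a composite under these weights. Component scores are invented and
# normalized to 0-1; this illustrates the weighting scheme, not US News's code.
WEIGHTS = {
    "peer_assessment": 0.40,
    "nih_funding": 0.30,
    "selectivity": 0.20,
    "faculty_ratio": 0.10,
}

def selectivity(mcat, gpa, accept_rate):
    # Sub-weighted selectivity: 65% MCAT, 30% GPA, 5% acceptance rate
    # (a lower acceptance rate scores higher).
    return 0.65 * mcat + 0.30 * gpa + 0.05 * (1 - accept_rate)

def composite(scores):
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

school = {
    "peer_assessment": 0.82,
    "nih_funding": 0.67,
    "selectivity": selectivity(mcat=0.90, gpa=0.85, accept_rate=0.05),
    "faculty_ratio": 0.55,
}
print(f"Composite: {composite(school):.2f}")  # ~0.76
```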

What you should be asking yourself is whether:
1) these are the correct attributes for a med school, and
2) these are the correct weightings for those attributes.

Their outcome is defined by their measurement tools, which, as anyone worth their weight in reagents will tell you, are crap.

I would submit that the definition of "best" school should have to do with quality of patient care (percentage of "successful" cases handled by graduates of each school, or something like that), or some other outcome which affects the population physicians are trying to serve.

But, as someone else already pointed out, their goal is to sell magazines, and oddly enough they've gotten buy-in from med schools and med students along the way. As you point out, as "students of science" we should have already known better.
 
Oak said:
But it does matter. US News has a large impact on the schools that people choose to apply to - probably less impact on the final decision. I wonder how many people would make different choices for their applications if they understood how fuzzy the numbers were.


I think that if there were not a statistically significant difference worth discussing, they probably would mention it.
 
The following link is to a journal article which is a worthwhile read if you are interested in understanding the validity, or lack thereof, of the US News and World Report rankings of medical schools.

http://www.academicmedicine.org/cgi/content/abstract/76/10/985

Note: The article appears in a journal that requires a subscription; most major colleges and universities with a medical school likely have access. However, the abstract is free. If you can't get the article, send me one of those private message things and I can help you.

The citation:

McGaghie WC, Thompson JA. America's Best Medical Schools: A Critique of the U.S. News & World Report Rankings. Acad Med. 2001;76(10):985-992.

JoeyB
 
One has to question the validity of the quality assessment portion of the rankings when the response rates for the surveys used to determine that score are so low. If you read the fine print, the survey sent to the "deans of admission, deans of academic affairs, and directors of admissions" had a response rate of 53%. So barely half of all medical school deans responded to the survey. This part of the score accounts for 20% of the total score on the research list and 25% on the primary care list. The survey of residency directors had a response rate of 28%. Although statistical comparisons can still be made with such low response rates, the strength of those comparisons is very weak. In other words, it is a stretch to say that Stanford, for instance, is that much better than, say, Georgetown.
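
A minimal sketch (invented numbers) of how the response rate feeds the uncertainty on a single school's score:

```python
# Sketch, invented numbers: fewer respondents means wider uncertainty on each
# school's survey score, before even considering nonresponse bias (the bigger
# problem, which no error bar captures).
import math

def margin_of_error(sd, n):
    # Approximate 95% margin of error on a mean rating (normal approximation).
    return 1.96 * sd / math.sqrt(n)

n_surveyed = 160  # hypothetical number of directors asked about a given school
for response_rate in (1.00, 0.53, 0.28):
    n = int(n_surveyed * response_rate)
    moe = margin_of_error(sd=0.6, n=n)  # assumed rater SD of 0.6
    print(f"response rate {response_rate:.0%}: n={n}, score +/- {moe:.2f}")
# At a 28% response rate the score is only good to about +/- 0.18, larger
# than the tenths digit the rankings report.
```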
 
Sigma said:
Well, yes, the rankings are horrid, but not for the reason you specify. As a "student of science", your bigger concern should be outcome definition. […]

I'm not saying that US N&WR has a valid ranking system.
NIH money doesn't necessarily mean that you'll have a good research experience (if that's your thing), and a low student-to-faculty ratio doesn't mean that you'll have good professors or many interactions with them. The only parts of the survey that may reflect how good a school is are the assessment scores and possibly the entering students' stats, and I agree that both of these are questionable. In my first post, I was just saying that, even within their defined outcome measures, it's a poorly reported study.

This is especially true for the assessment scores, which differ by very little and should have a standard deviation or some other measure of variance attached. It looks like they are just raw averages from a large data set (even with only around half of those polled responding). Because they are calculated from many "trials" in the form of separate surveys, an SD is appropriate and can be calculated. For other measures like NIH funding and the student-to-faculty ratio, it's unclear that any such SD could be arrived at.
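
A minimal sketch of the kind of check this would allow, assuming hypothetical rater-level data (scipy's Welch t-test; the ratings are invented):

```python
# Sketch of the check a reader could run if US News published rater-level data.
# The ratings below are invented; Welch's t-test asks whether two schools'
# assessment scores actually differ.
from scipy import stats

school_a = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]  # hypothetical 1-5 ratings, mean 3.9
school_b = [3, 4, 4, 3, 5, 3, 4, 4, 3, 3]  # hypothetical 1-5 ratings, mean 3.6

t_stat, p_value = stats.ttest_ind(school_a, school_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # p well above 0.05 here, so the
# 0.3-point gap (and any rank gap built on it) is not significant
```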
 