Difference between 'top' medical schools and 'lower' tiered schools

You lost me a while ago on this one, if you think PD assessments are an arbitrary way to answer this question, and that schools like Duke, Columbia, and Yale are similar to Iowa, Colorado, and Rochester by any reasonable approach.

I think that your selection of these assessments as the "standard" for school ranking is arbitrary. I could use average Step 1 score as the way to rank schools. You would need to define the question before you can talk about answers. If your question is "What do residency directors think of schools?", then measuring their assessments of schools is a rational, non-arbitrary way to answer it. But if your question is "Which schools are the best?", then using residency director assessments is an arbitrary way of measuring it, because "best" doesn't only mean "best in the eyes of the residency directors."

I do not believe that Duke, Columbia, and Yale are on the same level as those schools. However, I do believe that US News overvalues WashU, Yale, and NYU, among others.
 
And to illustrate my above argument, say that you want to choose the most competent chemist out of a group of 20 students. That's quite a rational thing to do, much like ranking schools. Just like you want to rank a school based on how well its graduates are trained, say I want to rank my students based on how well they understand the material. Rational. Now, I could go about this in many ways. I could administer some sort of standardized test. I could make my own exam and administer it. I could set up a practical exercise where the first to devise a practical synthesis of compound X is the most competent. I could put them all in a room and set them loose on each other. Which choice I make is arbitrary, assuming that these choices (save the last one) can measure competency roughly equally well. I could choose any one of them. The fact that I choose to make my own exams and administer them is an arbitrary choice I make. I could just as easily have made the lab practical the final exam.
The bolded portion is the mistake you're making; the whole point of trying to identify the best standard is choosing the ranking that we think most logically represents an accurate evaluation of the most important criteria. You can't assume that all the choices measure what they intend to measure equally well; that's the whole point.
Again, your goal could be rational but the way you go about doing it could be arbitrary. Goal: rank schools based on the competency of graduates. Arbitrary choice: measure competency by residency director rating.
Just because there are multiple options to choose from doesn't mean that one option isn't better than the others. I could thoroughly explain why I think res director rankings are the best option; thus, it's not an arbitrary choice to choose res director rankings, because there's a logical framework supporting that decision.
 
you can be arbitrary without being random
Yes, but it would have to be personal whim + a lack of "any reason or system." Since there's reasoning behind the decision to choose res director rankings as the standard, it's still not arbitrary, randomness aside.
 
I think that your selection of these assessments as the "standard" for school ranking is arbitrary. I could use average Step 1 score as the way to rank schools. You would need to define the question before you can talk about answers. If your question is "What do residency directors think of schools?", then measuring their assessments of schools is a rational, non-arbitrary way to answer it. But if your question is "Which schools are the best?", then using residency director assessments is an arbitrary way of measuring it, because "best" doesn't only mean "best in the eyes of the residency directors."

I do not believe that Duke, Columbia, and Yale are on the same level as those schools. However, I do believe that US News overvalues WashU, Yale, and NYU, among others.
As long as we agree that StartClass is way, way off on a lot of places like I originally said, I think we're on the same page there. My impression of WashU was very similar to Penn at interviews, so maybe it was just the people we each happened to meet. By Step score WashU comes out in the top few (source was interview day), by MCAT and GPA metrics it's also up at the top, and by peer ratings and PD ratings it's up there... what metric do you think would show its correct position? What insights do you have that matriculating students, residency directors, and other med schools lack?
 
The bolded portion is the mistake you're making; the whole point of trying to identify the best standard is choosing the ranking that we think most logically represents an accurate evaluation of the most important criteria. You can't assume that all the choices measure what they intend to measure equally well; that's the whole point.

I believe that there are choices that measure competency of graduates equally as well as or better than residency director opinions, which are influenced by other things. For example, I think that you can measure whether a school trains competent doctors by directly assessing peer opinions of its individual graduates. Do the doctors who work with your graduates think that they are competent doctors? This is the principle that peer review in science is based on. As long as there is even one other measurement you can do that measures the intended variable at least as well as the one you are currently proposing, then your choice of the latter measurement is arbitrary.

Just because there are multiple options to choose from doesn't mean that one option isn't better than the others. I could thoroughly explain why I think res director rankings are the best option; thus, it's not an arbitrary choice to choose res director rankings, because there's a logical framework supporting that decision.

So, do you want to explain why you think residency director ranking of schools would be a better measure of competency than peer evaluations of your graduates?
 
I believe that there are choices that measure competency of graduates equally as well as or better than residency director opinions, which are influenced by other things. For example, I think that you can measure whether a school trains competent doctors by directly assessing peer opinions of its individual graduates. Do the doctors who work with your graduates think that they are competent doctors? This is the principle that peer review in science is based on. As long as there is even one other measurement you can do that measures the intended variable at least as well as the one you are currently proposing, then your choice of the latter measurement is arbitrary.



So, do you want to explain why you think residency director ranking of schools would be a better measure of competency than peer evaluations of your graduates?

it's not very difficult to make the argument that a residency director's opinion of how "competent" graduates of different medical schools tend to be ought to hold more weight than the opinions of other random physicians who may or may not be responsible for recruiting, training, and educating several generations of physicians.
 
Yes, but it would have to be personal whim + a lack of "any reason or system." Since there's reasoning behind the decision to choose res director rankings as the standard, it's still not arbitrary, randomness aside.
There's reasoning for choosing any of the other ones as the standard as well. The decision to choose ONE is whimsical without reason or system validating that supremacy
 
As long as we agree that StartClass is way, way off on a lot of places like I originally said, I think we're on the same page. My impression of WashU was very similar to Penn at interviews, so maybe it was just the people we each happened to meet. By Step score WashU comes out in the top few (source was interview day), by MCAT and GPA metrics it's also up at the top, and by peer ratings and PD ratings it's up there... what metric do you think would show its correct position? What insights do you have that matriculating students, residency directors, and other med schools lack?

I believe that a better way to assess the quality of graduates a medical school is producing is a kind of peer review system for the graduates of the school. I believe that peer review for doctors is a good thing in general. It's how we do things in science and it's how any science-related field should be run. When I submit a grant proposal, it's reviewed by people who are in my chosen field, who are in the best position to judge the quality of my work. We don't have only chairs of departments review these proposals. What do the doctors who work with the graduate think about said graduate? One could establish a numerical system of assessment. Multiple co-workers would be assessing each person. And there will be many people. Put it all together, and you would have a number that characterizes, on average, what other doctors who work with your graduates think about your graduates. Is that not a better measure of how well a med school trains physicians?

For med schools, you would be soliciting assessments from the attendings who work with your students (now as residents). You could rank residencies by assessments from the attendings and partners who end up working with the doctors you graduate.
 
it's not very difficult to make the argument that a residency director's opinion of how "competent" graduates of different medical schools tend to be ought to hold more weight than the opinions of other random physicians who may or may not be responsible for recruiting, training, and educating several generations of physicians.

In this case, when you're trying to rank a medical school, the relevant opinions are those of the attendings who directly work with your graduates. With PD rankings, you're putting all the power in one person who may not even work with your graduate on a day-to-day basis. This person has the power to process all the data and then spit out one number at you. He's basically the middleman who takes what all the attendings who work with the graduates say and processes it. And you're assuming that he will process it correctly. As humans, we tend to remember bad experiences really well. One bad graduate could tarnish a school's reputation just because he was extraordinarily bad and the residency director remembered that. So why not remove the middleman? Take the assessment-level data and aggregate that.
 
I believe that a better way to assess the quality of graduates a medical school is producing is a kind of peer review system for the graduates of the school. I believe that peer review for doctors is a good thing in general. It's how we do things in science and it's how any science-related field should be run. When I submit a grant proposal, it's reviewed by people who are in my chosen field, who are in the best position to judge the quality of my work. We don't have only chairs of departments review these proposals. What do the doctors who work with the graduate think about said graduate? One could establish a numerical system of assessment. Multiple co-workers would be assessing each person. And there will be many people. Put it all together, and you would have a number that characterizes, on average, what other doctors who work with your graduates think about your graduates. Is that not a better measure of how well a med school trains physicians?

For med schools, you would be soliciting assessments from the attendings who work with your students (now as residents). You could rank residencies by assessments from the attendings and partners who end up working with the doctors you graduate.
So you personally feel WashU is overrated because you know some attendings that are not impressed with the school's grads, and you think theirs is a better read than PDs?
 
In this case, when you're trying to rank a medical school, the relevant opinions are those of the attendings who directly work with your graduates. With PD rankings, you're putting all the power in one person who may not even work with your graduate on a day-to-day basis. This person has the power to process all the data and then spit out one number at you. He's basically the middleman who takes what all the attendings who work with the graduates say and processes it. And you're assuming that he will process it correctly. As humans, we tend to remember bad experiences really well. One bad graduate could tarnish a school's reputation just because he was extraordinarily bad and the residency director remembered that. So why not remove the middleman? Take the assessment-level data and aggregate that.

I think you have this precisely backwards. PDs have experience with loads and loads of graduates, in a position that is specifically concerned with recruiting the best trainees and retaining the most productive residents in order to pump out the best physicians. It is the random attendings who might have extremely limited experience working with graduates of certain schools -- or not. You don't know. It's possible a peer might only know 3 ppl from UTSW. 2 happened to be tools. Is UTSW a terrible medical school? With PDs you at least have the knowledge that they are in a senior role where they are likely to have a broad and deep reserve of experience to draw from, in addition to having access to information that random peers might not have: the board score averages of graduates from X, Y, Z schools; that particular program's history with a certain medical school's graduates and patient complaints; memory of how grads from that school have done in the interview process or in the job search toward the end of residency; or other salient measurements a residency director might keep track of.
 
So you personally feel WashU is overrated because you know some attendings that are not impressed with the school's grads, and you think theirs is a better read than PDs?

No, that's my general impression of why I think how rankings are done now is bull****. I believe we're mainly on the same page about that. I just don't think that solely using residency directors' rankings as the end-all-be-all is the right way of fixing it.

WashU is another matter. I think that WashU's student body isn't as strong as the student bodies at schools of similar US News ranking. Again, that could be due to my having a non-representative cross-section (you said you had a different experience) but that was my impression. I think that the quality of students translates into quality of graduates.
 
No, that's my general impression of why I think how rankings are done now is bull****. I believe we're mainly on the same page about that. I just don't think that solely using residency directors' rankings as the end-all-be-all is the right way of fixing it.

WashU is another matter. I think that WashU's student body isn't as strong as the student bodies at schools of similar US News ranking. Again, that could be due to my having a non-representative cross-section (you said you had a different experience) but that was my impression. I think that the quality of students translates into quality of graduates.
So to my question of why you personally think WashU is over-rated compared to PD and peer assessment, it comes down to you were less impressed with the people you met on interview day?
 
No, that's my general impression of why I think how rankings are done now is bull****. I believe we're mainly on the same page about that. I just don't think that solely using residency directors' rankings as the end-all-be-all is the right way of fixing it.

WashU is another matter. I think that WashU's student body isn't as strong as the student bodies at schools of similar US News ranking. Again, that could be due to my having a non-representative cross-section (you said you had a different experience) but that was my impression. I think that the quality of students translates into quality of graduates.

"PDs opinions are not a perfect metric" is miles away from "PDs opinions are just as good as the opinions of random peers"
 
I think you have this precisely backwards. PDs have experience with loads and loads of graduates, in a position that is specifically concerned with recruiting the best trainees and retaining the most productive residents in order to pump out the best physicians. It is the random attendings who might have extremely limited experience working with graduates of certain schools -- or not. You don't know. It's possible a peer might only know 3 ppl from UTSW. 2 happened to be tools. Is UTSW a terrible medical school? With PDs you at least have the knowledge that they are in a senior role where they are likely to have a broad and deep reserve of experience to draw from, in addition to having access to information that random peers might not have: the board score averages of graduates from X, Y, Z schools; that particular program's history with a certain medical school's graduates and patient complaints; or other salient measurements a residency director might keep track of.

I believe that you have an exalted view of residency directors that may not be entirely deserved. Say three attendings work with a graduate of med school A on a regular basis. You solicit their assessments. Let's say there's a system of 1-10, with 10 being the best doctor they've ever worked with and 1 being "don't let this person touch a patient ever." Let's say that attending A has had a bad experience with graduates from med school A in the past. So he ranks this person a 6, although he deserves an 8. Attending B ranks the person an 8. Attending C also hates med school A so gives that person a 4. On average, this person will have a score of 6. In this situation that I've defined, is that an unfair rating for this person? Yes. It's based on two attendings' biases. But don't forget, you have this sort of assessment for hundreds upon hundreds of graduates of med school A. What are the odds that individual biases will affect the sample/population mean? Low. Perhaps the residency director could be given a vote in this and that vote could be weighed slightly more than the coworker evaluations. But I strongly believe that residency directors alone should not have this power to rank schools as they wish.
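A minimal simulation of that averaging argument (the 1-10 scale is from the example above; the specific numbers, like a true quality of 8, three raters per graduate, and a one-in-three chance that a rater is off by 2 points, are hypothetical, and it assumes biases are idiosyncratic, equally likely to be harsh or generous, rather than shared by everyone rating school A):

```python
import random

random.seed(0)

TRUE_QUALITY = 8        # what each graduate of med school A "deserves" on the 1-10 scale
RATERS_PER_GRAD = 3     # attendings scoring each graduate
P_BIASED = 1 / 3        # chance a given rating comes from a biased attending
BIAS_SIZE = 2           # a biased attending is off by 2 points, harsh or generous

def school_mean(n_graduates):
    """Average rating for the school after pooling every attending's score."""
    ratings = []
    for _ in range(n_graduates):
        for _ in range(RATERS_PER_GRAD):
            score = TRUE_QUALITY
            if random.random() < P_BIASED:
                score += random.choice((-BIAS_SIZE, BIAS_SIZE))
            ratings.append(score)
    return sum(ratings) / len(ratings)

for n in (1, 10, 100, 1000):
    print(f"{n:4d} graduates -> school mean {school_mean(n):.2f}")
```

A single graduate can easily land a point or more away from 8, but the pooled mean over hundreds of graduates sits very close to it; the caveat is that this only works if the biases don't all point the same way.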
 
So to my question of why you personally think WashU is over-rated compared to PD and peer assessment, it comes down to you were less impressed with the people you met on interview day?

It comes down to my personal interactions with students at WashU, on interview day and not. I think I have a few years on you, so I've met people in the course of my own undergraduate studies as well as professionally.
 
"PDs opinions are not a perfect metric" is miles away from "PDs opinions are just as good as the opinions of random peers"

I would say lightyears, rather. Because I'm of the opinion that the people who you work with are the ones who are in the best position to evaluate the quality of your work, not some random all-powerful person who you might see once a month. Also, as long as there is some other metric that can measure the intended variable at least as well as the proposed one, then the choice of the proposed one as the metric for said variable is arbitrary.
 
It comes down to my personal interactions with students at WashU, on interview day and not. I think I have a few years on you, so I've met people in the course of my own undergraduate studies as well as professionally.
All due respect to your years on me, but I think attending undergraduate here and working in a clinic here during gap has me at least on even footing for trading anecdotal impressions 😉
 
All due respect to your years on me, I think attending undergraduate here and working in a clinic here during gap has me at least on even footing for trading anecdotal impressions 😉

Of course, but I'm interested in anecdotal impressions of the student body. Not the strength of the research program, the clinical facilities, or what residency directors think. What do you think makes WashU students strong? This is not a facetious question. Because I am open to the idea that my impression of WashU could be wrong. And you could change that.
 
I believe that there are choices that measure competency of graduates equally as well as or better than residency director opinions, which are influenced by other things. For example, I think that you can measure whether a school trains competent doctors by assessing directly peer opinions of your individual graduates. Do the doctors who work with your graduates think that they are competent doctors? This is the principle that peer review in science is based on. As long as there is even one other measurement you can do that measures the intended variable at least as well as the one you are currently proposing, then your choice of the latter measurement is arbitrary.
Again with the begging the question fallacy; the bolded is obvious, and if I accepted that premise to be true then there would be no disagreement. The entire point is that I don't think any two measurements measure the intended variable equally well.
So, do you want to explain why you think residency director ranking of schools would be a better measure of competency than peer evaluations of your graduates?
Not really, because you've essentially already argued my point by explaining so thoroughly why you think peer rankings are better than PD rankings. Should you conclude that rankings based on peer review are the best standard, your choice of those rankings would not be arbitrary, because you've laid out your logical reasoning for making that choice. It wouldn't be based on randomness or personal whim.
There's reasoning for choosing any of the other ones as the standard as well. The decision to choose ONE is whimsical without reason or system validating that supremacy
The whole point is to choose the one with the best reasoning. Obviously they all have some reasoning, but choosing the most logical option is in no way whimsical.
 
I believe that you have an exalted view of residency directors that may not be entirely deserved. Say three attendings work with a graduate of med school A on a regular basis. You solicit their assessments. Let's say there's a system of 1-10, with 10 being the best doctor they've ever worked with and 1 being "don't let this person touch a patient ever." Let's say that attending A has had a bad experience with graduates from med school A in the past. So he ranks this person a 6, although he deserves an 8. Attending B ranks the person an 8. Attending C also hates med school A so gives that person a 4. On average, this person will have a score of 6. In this situation that I've defined, is that an unfair rating for this person? Yes. It's based on two attendings' biases. But don't forget, you have this sort of assessment for hundreds upon hundreds of graduates of med school A. What are the odds that individual biases will affect the sample/population mean? Low. Perhaps the residency director could be given a vote in this and that vote could be weighed slightly more than the coworker evaluations. But I strongly believe that residency directors alone should not have this power to rank schools as they wish.

.....or the results you get are totally meaningless and disorderly because the peer assessments are dominated by the limitations of each individual peer. That is an empirical hypothesis; I don't think we can just accept it as true.

However, I don't think it's at all unreasonable to simply ascribe to residency directors the experience and information access associated with their job description. A priori a PD's opinion >> peer's opinion if you are trying to answer the question: "How favorably do you view graduates from X medical school?" in a way that might reveal something about the quality of undergraduate medical education. It's imperfect but at least it's better than USNWR aka "Who has the most grant money 20XX?"
 
Not really, because you've essentially already argued my point by explaining so thoroughly why you think peer rankings are better than PD rankings. Should you conclude that rankings based on peer review are the best standard, your choice of those rankings would not be arbitrary, because you've laid out your logical reasoning for making that choice. It wouldn't be based on randomness or personal whim.

The whole point is to choose the one with the best reasoning. Obviously they all have some reasoning, but choosing the most logical option is in no way whimsical.

The idea is that I don't know if my solution would measure the intended variable better than residency director rankings. But if we agree that it measures the intended variable at least as well as residency director rankings, which is a much lower bar, then the choice of either would be arbitrary. There is no universal "best reasoning" or "most logical option."
 
The idea is that I don't know if my solution would measure the intended variable better than residency director rankings. But if we agree that it measures the intended variable at least as well as residency director rankings, which is a much lower bar, then the choice of either would be arbitrary. There is no universal "best reasoning" or "most logical option."
We don't agree, and for the last time, that's the entire point of this debate! I think it's incredibly unlikely that any two measurements measure the intended variable indistinguishably well.

And even if I agreed that it measured the intended variable at least as well as PD rankings, then peer review rankings would be the better standard since the only options would be that those rankings are equally as good or better, which is of course a better option than equally good or worse.
 
.....or the results you get are totally meaningless and disorderly because the peer assessments are dominated by the limitations of each individual peer. That is an empirical hypothesis; I don't think we can just accept it as true.

Yes, this is a hypothesis, but it is a hypothesis based on the fundamental laws of statistics. The whole idea of having a body of people make any decision is that the limitations of the individual will be rendered inconsequential by the sum of the whole. The odds of two or three people having unreasonably low opinions of med school A are high. But the odds of many thousands of people having unreasonably low opinions of med school A? Astronomically low. Unless, of course, there actually is a problem with med school A. That's why we have juries. The odds of one person convicting because of an unreasonably-held opinion are much higher than the odds of many people convicting because they all share that unreasonably-held opinion.
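As a back-of-the-envelope version of the jury point: if each evaluator independently held an unreasonably low opinion of med school A with some probability (0.2 here is purely an illustrative guess), the chance that every member of a larger and larger group shares that grudge collapses quickly:

```python
# Probability that ALL n independent evaluators happen to hold an unreasonably
# low opinion of med school A, assuming each does so with probability p = 0.2.
p = 0.2
for n in (1, 3, 12, 100):
    print(f"n = {n:3d}: P(everyone shares the grudge) = {p ** n:.3g}")
```

With one or three evaluators the grudge scenario is plausible; with a hundred it is effectively impossible unless the low opinion reflects something real about the school.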

However, I don't think it's at all unreasonable to simply ascribe to residency directors the experience and information access associated with their job description. A priori a PD's opinion >> peer's opinion if you are trying to answer the question: "How favorably do you view graduates from X medical school?" in a way that might reveal something about the quality of undergraduate medical education. It's imperfect but at least it's better than USNWR aka "Who has the most grant money 20XX?"

It's not unreasonable in the sense that it is not without reason. But that's not what is questioned here. What is questioned is whether that is the only measure or if there exists another measure that can measure "quality of undergraduate medical education" at least as well. If there does exist such a measure, then the choice of using one or the other is arbitrary.
 
Yes, this is a hypothesis, but it is a hypothesis based on the fundamental laws of statistics. The whole idea of having a body of people make any decision is that the limitations of the individual will be rendered inconsequential by the sum of the whole. The odds of two or three people having unreasonably low opinions of med school A are high. But the odds of many thousands of people having unreasonably low opinions of med school A? Astronomically low. Unless, of course, there actually is a problem with med school A. That's why we have juries. The odds of one person convicting because of an unreasonably-held opinion are much higher than the odds of many people convicting because they all share that unreasonably-held opinion.



It's not unreasonable in the sense that it is not without reason. But that's not what is questioned here. What is questioned is whether that is the only measure or if there exists another measure that can measure "quality of undergraduate medical education" at least as well. If there does exist such a measure, then the choice of using one or the other is arbitrary.
Are you under the impression that only one residency director is surveyed in the residency director rankings? I'm pretty sure it's hundreds, from various schools and specialties
 
We don't agree, and for the last time, that's the entire point of this debate! I think it's incredibly unlikely that any two measurements measure the intended variable indistinguishably well.

That is the point, my friend 🙂 It is unlikely that any two measurements measure an intended variable equally well. But it is equally unlikely that you wouldn't be able to say that one measurement is at least as good as the other. The world doesn't exist in black or white. Does the MCAT accurately measure academic preparedness for med school? To some extent. Does GPA accurately measure academic preparedness for med school? To some extent. Is one more accurate than the other? No way of knowing. That's why med schools use both measures. They don't arbitrarily choose one or the other.

And even if I agreed that it measured the intended variable at least as well as PD rankings, then peer review rankings would be the better standard since the only options would be that those rankings are equally as good or better, which is of course a better option than equally good or worse.

The statement "at least as well," statistically speaking, means indistinguishably well. Say C is the variable of interest. Its true value is 15. But there's no way of knowing that outside of this constructed scenario. A is used to measure C. Application of A says that C is between 13 and 16 with 95% certainty. Mean is, say, 14. B is also used to measure C. Application of B says that C is between 13 and 16 with 95% uncertainty. Mean is 15. A and B are statistically indistinguishable when measuring C. Therefore, choosing either of them to measure C is arbitrary.
 
Are you under the impression that only one residency director is surveyed in the residency director rankings? I'm pretty sure it's hundreds, from various schools and specialties

I was talking about the surveying of one institution with the implicit understanding that this is generalized. We reduced the problem to make discussion easier. Surveying 150 residency directors (arbitrary number I chose) doesn't get you as close to the "true" value as surveying 1500 attendings who work closely with the person of interest. That's just statistics.
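For the "that's just statistics" part, a minimal sketch of how the precision of an average scales with the number of people surveyed (the head counts come from the post above; the assumed 1.5-point spread in opinions, and the premise that both groups rate the same quantity with equal accuracy, are illustrative assumptions, and the latter is exactly what's in dispute):

```python
import math

# Standard error of a mean scales as sigma / sqrt(n): for the same spread of
# opinions (assumed sigma = 1.5 points on a 1-10 scale), a larger survey pins
# down the average opinion more tightly.
sigma = 1.5
for label, n in (("150 residency directors", 150), ("1500 attendings", 1500)):
    print(f"{label:>24}: standard error of the mean ~ {sigma / math.sqrt(n):.3f}")
```

Tenfold more respondents shrinks the standard error by a factor of sqrt(10), about 3.2; whether those extra respondents are actually measuring the same thing is the separate question being argued here.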
 
The whole premise of surveying PDs is severely flawed for many reasons. The first is the very notion that PDs are ideally positioned to evaluate resident performance and capability. The heterogeneity of the resident evaluation process in a single program in a single institution alone is enough to see that PDs are often not well suited to know how truly "good" a resident is.

The second question is what a good resident even is. Clinical acumen? Sure. But how is this judged? Patient satisfaction? Surgical outcomes? How can this be tracked in a manner that feeds back to the PD? More often than not you'll find that attendings judge residents more or less on how good their secretarial skills are. Then there's research output. Is an academic program going to look favorably on a clinical superstar who publishes no papers? How do you balance the two?

Next up is the simple problem that even if the first two questions were non-factors, most programs in most fields simply do not see the volume and diversity of graduates to form accurate impressions of any non-home institution. In my field, like most surgical specialties, residency classes are typically ~6 (4-12). Even under the most optimistic scenario a program might take a graduate from a given school once every 1 or 2 years. Is that a sufficient sample size even after 5-10 years of experience? And taken as a whole, a PD will only have direct experience with 40-50 schools at a maximum, and more likely 15-30. How is this person to draw accurate comparisons? The only exception to this would be the mega fields like medicine or peds, where a program can reliably have experience with multiple graduates of a given institution on a yearly basis.

Lastly, the most glaringly obvious problem with this particular method is that the response rate is atrocious.

If one critically analyzes the USNews methodology, every single one of the components is, frankly, varying degrees of garbage. To say one is better than the other ignores the obvious (perhaps not obvious) truth that they are all completely flawed, unscientific, and more or less invalid.
 
I can't wait to read through all of this before bed. Just gotta finish this darn pre-lab for analytical chemistry 😢
 
but i thought you didn't like analytical chem either? and you're strictly an organic chemistry fan 🤔

Well, I love when I get students in my lab who have taken analytical chem. Much easier to work with because they know all the techniques and how all the machines work. I could see how the precise measurements you do in analytical chem are pretty much useless in research but the knowledge about techniques is very useful.
 
The whole point is to choose the one with the best reasoning. Obviously they all have some reasoning, but choosing the most logical option is in no way whimsical.
The whole point is that "best reasoning" is itself an arbitrary designation made plainly evident by your own statement
your choice of those rankings would not be arbitrary, because you've laid out your logical reasoning for making that choice. It wouldn't be based on randomness or personal whim
when one's choice is based on one's personal reasoning, it is by definition personal whim
 
You will in general work with and learn from smarter fellow students and faculty, have more resources for most endeavors, and get a healthy boost come residency time.
 
I was talking about the surveying of one institution with the implicit understanding that this is generalized. We reduced the problem to make discussion easier. Surveying 150 residency directors (arbitrary number I chose) doesn't get you as close to the "true" value as surveying 1500 attendings who work closely with the person of interest. That's just statistics.
I don't really care whether peer rankings or PD rankings are better. I'm just saying that for any given applicant, there are factors that are more important than others in choosing a school and certain ranking systems will best parallel those factors, making that the objectively best ranking for that particular applicant. And that makes it non-arbitrary since it's the best logical system for that person's goals. I didn't mean to imply that there's a universally best ranking system, just that choosing between the different options isn't arbitrary since it's not random or whimsical but is logical. But I'm getting bored with this semantics debate, you can have the last word if ya want.

I think our disagreement might stem from the fact that there are several distinct categories of applicants with very different long-term goals, and as such there are certain ranking systems that are objectively best for different groups. For example, the best system for ranking schools will be different for those who are pursuing primary care, those who are considering academic careers in hypercompetitive specialties, those who are simply seeking financial security by going into medicine, those interested in global health, etc. For each of those groups, different rankings will be ideal. For example, someone simply seeking financial security might want a ranking system that heavily weights financial aid and cost of attendance. That would be a logical and non-arbitrary way to choose a particular ranking system for that person. For me, PD rankings are especially important since PDs are often the gatekeepers to entering competitive programs in competitive specialties, which is important for me as someone interested in academic medicine and surgical fields. As such, one of the most important things for me in a med school is its ability to make me a competitive applicant for those competitive fields, and PD opinions about my school will play a significant role in that competitiveness, regardless of how inaccurate any PD's opinion is.
The whole premise of surveying PDs is severely flawed for many reasons. The first is the very notion that PDs are ideally positioned to evaluate resident performance and capability. The heterogeneity of the resident evaluation process in a single program in a single institution alone is enough to see that PDs are often not well suited to know how truly "good" a resident is.

The second question is what a good resident even is. Clinical acumen? Sure. But how is this judged? Patient satisfaction? Surgical outcomes? How can this be tracked in a manner that feeds back to the PD? More often than not you'll find that attendings judge residents more or less on how good their secretarial skills are. Then there's research output. Is an academic program going to look favorably on a clinical superstar who publishes no papers? How do you balance the two?

Next up is the simple problem that even if the first two questions were non-factors, most programs in most fields simply do not see the volume and diversity of graduates to form accurate impressions of any non-home institution. In my field, like most surgical specialties, residency classes are typically ~6 (4-12). Even under the most optimistic scenario a program might take a graduate from a given school once every 1 or 2 years. Is that a sufficient sample size even after 5-10 years of experience? And taken as a whole, a PD will only have direct experience with 40-50 schools at a maximum, and more likely 15-30. How is this person to draw accurate comparisons? The only exception to this would be the mega fields like medicine or peds, where a program can reliably have experience with multiple graduates of a given institution on a yearly basis.

Lastly, the most glaringly obvious problem with this particular method is that the response rate is atrocious.

If one critically analyzes the USNews methodology, every single one of the components is, frankly, varying degrees of garbage. To say one is better than the other ignores the obvious (perhaps not obvious) truth that they are all completely flawed, unscientific, and more or less invalid.
Again, in attempting to get into a competitive program/specialty, it doesn't really matter if a PD's opinion about your school is accurate, it only matters how favorable the PD's opinion is. From my point of view, it's the role of the medical school to help its students obtain the residency positions that they desire and favorable opinions from PDs about a school will help that school's students access that residency; at least, that's largely what I look for in med schools.
The whole point is that "best reasoning" is itself an arbitrary designation made plainly evident by your own statement

when one's choice is based on one's personal reasoning, it is by definition personal whim
good lord I feel like an English teacher today, here's another definition for ya:

whim: "a sudden desire or change of mind, especially one that is unusual or unexplained"

Someone's personal reasoning about why certain factors are important in a med school for his/her goals has nothing to do with whim.

This is getting so boring, you don't know what arbitrary or whim mean and I'm tired of bashing my head against a wall trying to explain this :bang:
 
I just quickly want to say personal whim is just a passing desire. Not based on logic but feeling/mood at that moment. If there is any reasoning involved at a conscious level (which there undoubtedly will be for residency app evals), it is not a whim.

I agree that the best system for evaluating med schools cannot exist until we can establish a metric for measuring that quality. Many of the metrics used by PDs are based on their personal opinions or experiences and so the resulting ranking will obviously be debatable. However, the fact that many PDs are asked to rank the schools and a composite is created accounts for much of the variability in personal biases the PDs might have. I haven't really looked at any statistical analysis of the PDs' responses but if the ranking for each school has a small enough variability across all PDs' responses, then it's safe to assume that the school deserves that spot.
 
Again, in attempting to get into a competitive program/specialty, it doesn't really matter if a PD's opinion about your school is accurate, it only matters how favorable the PD's opinion is. From my point of view, it's the role of the medical school to help its students obtain the residency positions that they desire and favorable opinions from PDs about a school will help that school's students access that residency; at least, that's largely what I look for in med schools.
For an English teacher you sure don't read real well. My last point regarding response rate covers this already. Even were the response rate not dismal, it's still worthless. What the fck difference does an aggregate score from PDs in every field make to a specialty-specific residency applicant?

good lord I feel like an English teacher today, here's another definition for ya:
whim: "a sudden desire or change of mind, especially one that is unusual or unexplained"
Someone's personal reasoning about why certain factors are important in a med school for his/her goals has nothing to do with whim.
This is getting so boring, you don't know what arbitrary or whim mean and I'm tired of bashing my head against a wall trying to explain this :bang:
Time for the teacher to sit down for a lesson. Instead of picking up on the single top search engine result, how about cracking open the actual OED available through your library?

Arbitrary -
1. To be decided by one's liking; dependent upon will or pleasure; at the discretion or option of any one.
3. Derived from mere opinion or preference; not based on the nature of things; hence, capricious, uncertain, varying.

Whim -
3a. capricious notion or fancy

If you're going to play teacher it helps to know what you're talking about. This applies equally to English, rankings, statistics, and researching things on the internet.
 
For an English teacher you sure don't read real well. My last point regarding response rate covers this already.
Oh I'm definitely not saying the survey of PDs was done in an ideal way, just that I care about the PD opinions more than the opinions of peer reviewers since PDs play larger roles in resident selection.

Even were the response rate not dismal, it's still worthless. What the fck difference does an aggregate score from PDs in every field make to a specialty-specific residency applicant?
Most pre-meds don't know what specialty they want to go into by the time they have to decide which med school to attend (which is generally the same time they're looking at med school rankings), so specialty-specific PD rankings would be pretty difficult to use for most of us.

Time for the teacher to sit down for a lesson. Instead of picking up on the single top search engine result, how about cracking open the actual OED available through your library?
Eh that's too much effort for a boring debate haha

Arbitrary -
1. To be decided by one's liking; dependent upon will or pleasure; at the discretion or option of any one.
3. Derived from mere opinion or preference; not based on the nature of things; hence, capricious, uncertain, varying.

Whim -
3a. capricious notion or fancy

If you're going to play teacher it helps to know what you're talking about. This applies equally to English, rankings, statistics, and researching things on the internet.
There are logical reasons why certain ranking systems would be objectively more accurate for well-defined groups of pre-meds with specific long-term goals. That's all I'm gonna say in this thread but feel free to rebut

Edit: first version was too condescending
 
What the fck difference does an aggregate score from PDs in every field make to a specialty-specific residency applicant?
Can you clarify what you mean for this? Like are you saying a certain school (let's say Yale) might prepare people extremely well for an IM residency but not for surg? I figured it would be pretty constant, that the traits that make someone a good resident would make them a good resident in whatever area.
 
Can you clarify what you mean for this? Like are you saying a certain school (let's say Yale) might prepare people extremely well for an IM residency but not for surg? I figured it would be pretty constant, that the traits that make someone a good resident would make them a good resident in whatever area.

If you're a citizen living in the U.S., you don't really care about the global poverty level. Because it doesn't affect you. The measure that directly affects you is the poverty level in America. Yes, perhaps you don't know what field you will be going into when you enter med school, but that doesn't mean that aggregated residency director ranking automatically becomes the best measure. The better solution would be to ignore residency director rankings when you're entering med school, because you don't know which field you're going into, so the number is meaningless.
 
If you're a citizen living in the U.S., you don't really care about the global poverty level. Because it doesn't affect you. The measure that directly affects you is the poverty level in America. Yes, perhaps you don't know what field you will be going into when you enter med school, but that doesn't mean that aggregated residency director ranking automatically becomes the best measure. The better solution would be to ignore residency director rankings when you're entering med school, because you don't know which field you're going into, so the number is meaningless.
I think I'm missing the analogy. Are you saying the schools that produce good residents in one specialty do not produce good residents for a different specialty? Is that why global PD ratings would be a poor measure, you need specialty-specific PD ratings like you need nation-specific wealth distribution data?

This would be a very new idea to me, I thought the traits desired in a fresh med school graduate would be standard across specialties.
 
I think I'm missing the analogy. Are you saying the schools that produce good residents in one specialty do not produce good residents for a different specialty? Is that why global PD ratings would be a poor measure, you need specialty-specific PD ratings like you need nation-specific wealth distribution data?

This would be a very new idea to me, I thought the traits desired in a fresh med school graduate would be standard across specialties.

I don't think it's so strange an idea that schools can train medical students at varying levels depending on department strength. Yale is the #4 school in the nation, but I guarantee you that it's not #4 in all departments. Yale's molecular biology department is better than its chemistry department, so it stands to reason that the biologists it trains are better than the chemists it trains. Yes, these rankings are graduate-level, but it trickles down because the faculty and graduate students reflect that strength. Similarly, hospitals have specific strengths. The Cleveland Clinic and heart care, for instance.
 