> Oxford dictionary definition of arbitrary: "Based on random choice or personal whim, rather than any reason or system"

you can be arbitrary without being random
You lost me a while ago on this one, if you think PD assessments are an arbitrary way to answer this question, and that schools like Duke, Columbia, and Yale are similar to Iowa, Colorado, and Rochester by any reasonable approach.
> And to illustrate my above argument, say that you want to choose the most competent chemist out of a group of 20 students. That's quite a rational thing to do, much like ranking schools. Just like you want to rank a school based on how well its graduates are trained, say I want to rank my students based on how well they understand the material. Rational. Now, I could go about this in many ways. I could administer some sort of standardized test. I could make my own exam and administer it. I could set up a practical exercise where the first to devise a practical synthesis of compound X is the most competent. I could put them all in a room and set them loose on each other. Which choice I make is arbitrary, assuming that these choices (save the last one) can measure competency roughly equally well. I could choose any one of them. The fact that I choose to make my own exams and administer them is an arbitrary choice I make. I could just as easily have made the lab practical the final exam.

The bolded portion is the mistake you're making; the whole point of trying to identify the best standard is choosing the ranking that we think most logically represents an accurate evaluation of the most important criteria. You can't assume that all the choices measure what they intend to measure equally well; that's the whole point.
> Again, your goal could be rational but the way you go about doing it could be arbitrary. Goal: rank schools based on the competency of graduates. Arbitrary choice: measure competency by residency director rating.

Just because there are multiple options to choose from doesn't mean that one option isn't better than the others. I could thoroughly explain why I think res director rankings are the best option; thus, it's not an arbitrary choice to choose res director rankings because there's a logical framework supporting that decision.
> you can be arbitrary without being random

Yes, but it would have to be personal whim + a lack of "any reason or system." Since there's reasoning behind the decision to choose res director rankings as the standard, it's still not arbitrary, randomness aside.
> I think that your selection of these assessments as the "standard" for school ranking is arbitrary. I could use average Step 1 score as the way to rank schools. You would need to define a question before you can talk about answers. If your question is "what do residency directors think of schools?", then measuring their assessments of schools is a rational, non-arbitrary way to answer the question. But if your question is "which schools are the best?", then using residency director assessments is an arbitrary way of measuring it, because "best" doesn't only mean "best in the eyes of the residency directors."

As long as we agree that StartClass is way, way off in a lot of places like I originally said, I think we're on the same page there. My impression of WashU was very similar to Penn at interviews, so maybe it was the people we happened to be with at each. By Step score WashU comes out in the top few (source was interview day), by MCAT and GPA metrics it's also up at the top, by peer ratings and PD ratings it's up there... what metric do you think would show its correct position, in your opinion? What insights do you have that matriculating students, residency directors, and other med schools lack?
I do not believe that Duke, Columbia, and Yale are on the same level as those schools. However, I do believe that US News overvalues WashU, Yale, and NYU, among others.
> The bolded portion is the mistake you're making; the whole point of trying to identify the best standard is choosing the ranking that we think most logically represents an accurate evaluation of the most important criteria. You can't assume that all the choices measure what they intend to measure equally well; that's the whole point.
> Just because there are multiple options to choose from doesn't mean that one option isn't better than the others. I could thoroughly explain why I think res director rankings are the best option; thus, it's not an arbitrary choice to choose res director rankings because there's a logical framework supporting that decision.
I believe that there are choices that measure the competency of graduates equally as well as or better than residency director opinions, which are influenced by other things. For example, I think that you can measure whether a school trains competent doctors by directly assessing peer opinions of your individual graduates. Do the doctors who work with your graduates think that they are competent doctors? This is the principle that peer review in science is based on. As long as there is even one other measurement you can do that measures the intended variable at least as well as the one you are currently proposing, then your choice of the latter measurement is arbitrary.
So, do you want to explain why you think residency director ranking of schools would be a better measure of competency than peer evaluations of your graduates?
> Yes, but it would have to be personal whim + a lack of "any reason or system." Since there's reasoning behind the decision to choose res director rankings as the standard, it's still not arbitrary, randomness aside.

There's reasoning for choosing any of the other ones as the standard as well. The decision to choose ONE is whimsical, without reason or system validating that supremacy.
> As long as we agree that StartClass is way, way off in a lot of places like I originally said, I think we're on the same page. My impression of WashU was very similar to Penn at interviews, so maybe it was the people we happened to be with at each. By Step score WashU comes out in the top few (source was interview day), by MCAT and GPA metrics it's also up at the top, by peer ratings and PD ratings it's up there... what metric do you think would show its correct position, in your opinion? What insights do you have that matriculating students, residency directors, and other med schools lack?
it's not very difficult to make the argument that a residency director's opinion of how "competent" graduates of different medical schools tend to be ought to hold more weight than the opinions of other random physicians who may or may not be responsible for recruiting, training, and educating several generations of physicians.
> I believe that a better way to assess the quality of graduates a medical school is producing is a kind of peer review system for the graduates of the school. I believe that peer review for doctors is a good thing in general. It's how we do things in science and it's how any science-related field should be run. When I submit a grant proposal, it's reviewed by people who are in my chosen field, who are in the best position to judge the quality of my work. We don't have only chairs of departments review these proposals. What do the doctors who work with the graduate think about said graduate? One could establish a numerical system of assessment. Multiple co-workers would be assessing each person. And there will be many people. Put it all together, and you would have a number that characterizes, on average, what other doctors who work with your graduates think about your graduates. Is that not a better measure of how well a med school trains physicians?

So you personally feel WashU is overrated because you know some attendings that are not impressed with the school's grads, and you think theirs is a better read than PDs?
For med schools, you would be soliciting assessments from the attendings who work with your students (now as residents). You could rank residencies by assessments from the attendings and partners who end up working with the doctors you graduate.
In this case, when you're trying to rank a medical school, it's going to be what the attendings who are directly working with your graduates think about your graduates. You're putting all the power in one person who may not even work with your graduate on a day-to-day basis. This person has the power to process all the data and then spit out one number at you. He's basically the middleman who takes what all the attendings who work with the graduates say and processes that. And you're assuming that he will process that correctly. As humans, we tend to remember bad experiences really well. One bad graduate could tarnish a school's reputation because he was just extraordinarily bad and the residency director remembered that. So why not remove the middleman? Take the assessment-level data and aggregate that.
> So you personally feel WashU is overrated because you know some attendings that are not impressed with the school's grads, and you think theirs is a better read than PDs?
> No, that's my general impression of why I think how rankings are done now is bull****. I believe we're mainly on the same page about that. I just don't think that solely using residency directors' rankings as the end-all-be-all is the right way of fixing it.

So to my question of why you personally think WashU is over-rated compared to PD and peer assessment, it comes down to you being less impressed with the people you met on interview day?
WashU is another matter. I think that WashU's student body isn't as strong as the student bodies at schools of similar US News ranking. Again, that could be due to my having a non-representative cross-section (you said you had a different experience) but that was my impression. I think that the quality of students translates into quality of graduates.
> No, that's my general impression of why I think how rankings are done now is bull****. I believe we're mainly on the same page about that. I just don't think that solely using residency directors' rankings as the end-all-be-all is the right way of fixing it.
> WashU is another matter. I think that WashU's student body isn't as strong as the student bodies at schools of similar US News ranking. Again, that could be due to my having a non-representative cross-section (you said you had a different experience) but that was my impression. I think that the quality of students translates into quality of graduates.
I think you have this precisely backwards. PDs have experience with loads and loads of graduates, in a position that is specifically concerned with recruiting the best trainees and retaining the most productive residents in order to pump out the best physicians. It is the random attendings who might have extremely limited experience working with other graduates of certain schools -- or not. You don't know. It's possible a peer might only know 3 ppl from UTSW. 2 happened to be tools. Is UTSW a terrible medical school? With PDs you at least have the knowledge that they are in a senior role where they are likely to have a broad and deep reserve of experience to draw from, in addition to having access to information that random peers might not have, like the board score averages of graduates from X, Y, Z schools, that particular program's history with a certain medical school's graduates, and patient complaints or other salient measurements a residency director might keep track of.
> So to my question of why you personally think WashU is over-rated compared to PD and peer assessment, it comes down to you being less impressed with the people you met on interview day?
"PDs opinions are not a perfect metric" is miles away from "PDs opinions are just as good as the opinions of random peers"
> It comes down to my personal interactions with students at WashU, on interview day and not. I think I have a few years on you, so I've met people in the course of my own undergraduate studies as well as professionally.

All due respect to your years on me, but I think attending undergraduate here and working in a clinic here during my gap year has me at least on even footing for trading anecdotal impressions 😉
> All due respect to your years on me, but I think attending undergraduate here and working in a clinic here during my gap year has me at least on even footing for trading anecdotal impressions 😉
> I believe that there are choices that measure the competency of graduates equally as well as or better than residency director opinions, which are influenced by other things. For example, I think that you can measure whether a school trains competent doctors by directly assessing peer opinions of your individual graduates. Do the doctors who work with your graduates think that they are competent doctors? This is the principle that peer review in science is based on. As long as there is even one other measurement you can do that measures the intended variable at least as well as the one you are currently proposing, then your choice of the latter measurement is arbitrary.

Again with the begging-the-question fallacy; the bolded is obvious, and if I accepted that premise to be true then there would be no disagreement. The entire point is that I don't think any two measurements measure the intended variable equally well.
> So, do you want to explain why you think residency director ranking of schools would be a better measure of competency than peer evaluations of your graduates?

Not really, because you've essentially already argued my point by explaining so thoroughly why you think peer rankings are better than PD rankings. Should you conclude that rankings based on peer review are the best standards, your choice to choose those rankings would not be arbitrary, because you've laid out your logical reasoning for making that choice. It wouldn't be based on randomness or personal whim.
> There's reasoning for choosing any of the other ones as the standard as well. The decision to choose ONE is whimsical, without reason or system validating that supremacy.

The whole point is to choose the one with the best reasoning. Obviously they all have some reasoning, but choosing the most logical option is in no way whimsical.
I believe that you have an exalted view of residency directors that may not be entirely deserved. Say three attendings work with a graduate of med school A on a regular basis. You solicit their assessments. Let's say there's a scale of 1-10, with 10 being the best doctor they've ever worked with and 1 being "don't let this person touch a patient ever." Let's say that attending A has had a bad experience with graduates from med school A in the past, so he rates this person a 6, although he deserves an 8. Attending B rates the person an 8. Attending C also hates med school A, so gives that person a 4. On average, this person will have a score of 6. In the situation that I've defined, is that an unfair rating for this person? Yes. It's based on two attendings' biases. But don't forget, you have this sort of assessment for hundreds upon hundreds of graduates of med school A. What are the odds that individual biases will affect the sample/population mean? Low. Perhaps the residency director could be given a vote in this, and that vote could be weighed slightly more than the coworker evaluations. But I strongly believe that residency directors alone should not have this power to rank schools as they wish.
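A minimal simulation sketch of the averaging claim above, with invented numbers: the "deserved" score of 8, the bias spread, and the head counts are assumptions chosen for illustration, not figures from the thread.

```python
import numpy as np

# Every graduate of med school A "deserves" an 8, but each attending rates
# with a personal bias, as in the 6/8/4 example above. A single graduate's
# average can be unfair; the school-wide average over hundreds is not.
rng = np.random.default_rng(0)

true_quality = 8.0    # assumed "deserved" score on the 1-10 scale
n_graduates = 500     # "hundreds upon hundreds" of graduates
raters = 3            # three attendings score each graduate

bias = rng.normal(0.0, 1.5, size=(n_graduates, raters))  # zero-mean rater bias
scores = np.clip(true_quality + bias, 1, 10)

print("one graduate's mean:", round(float(scores[0].mean()), 2))  # can sit far from 8
print("school-wide mean:   ", round(float(scores.mean()), 2))     # lands close to 8
```

Individual averages still swing by a point or more; only the aggregate is stable, which is the sample-mean point being made above.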
> Not really, because you've essentially already argued my point by explaining so thoroughly why you think peer rankings are better than PD rankings. Should you conclude that rankings based on peer review are the best standards, your choice to choose those rankings would not be arbitrary, because you've laid out your logical reasoning for making that choice. It wouldn't be based on randomness or personal whim.
> The whole point is to choose the one with the best reasoning. Obviously they all have some reasoning, but choosing the most logical option is in no way whimsical.
> The idea is that I don't know if my solution would measure the intended variable better than residency director rankings. But if we agree that it measures the intended variable at least as well as residency director rankings, which is a much lower bar, then the choice of either would be arbitrary. There is no universal "best reasoning" or "most logical option."

We don't agree, and for the last time, that's the entire point of this debate! I think it's incredibly unlikely that any two measurements measure the intended variable indistinguishably well.
...or the results you get are totally meaningless and disorderly because the peer assessments are dominated by the limitations of each individual peer. That is an empirical hypothesis; I don't think we can just accept it as true.
However, I don't think it's at all unreasonable to simply ascribe to residency directors the experience and information access associated with their job description. A priori, a PD's opinion >> a peer's opinion if you are trying to answer the question "How favorably do you view graduates from X medical school?" in a way that might reveal something about the quality of undergraduate medical education. It's imperfect, but at least it's better than USNWR, aka "Who has the most grant money 20XX?"
> Yes, this is a hypothesis, but it is a hypothesis based on the fundamental laws of statistics. The whole idea of having a body of people make any decision is that the limitations of the individual will be rendered inconsequential by the synergistic sum of the whole. The odds of two or three people having unreasonably low opinions of med school A are high. But the odds of many thousands of people having unreasonably low opinions of med school A? Astronomically low. Unless, of course, there actually is a problem with med school A. That's why we have juries. The odds of one person convicting because of an unreasonably-held opinion are much higher than the odds of many people convicting because they all share that unreasonably-held opinion.

Are you under the impression that only one residency director is surveyed in the residency director rankings? I'm pretty sure it's hundreds, from various schools and specialties.
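A back-of-the-envelope check on the jury analogy quoted above, under an assumed number: if each rater independently holds the unreasonably low opinion with probability 0.2 (a figure invented here purely for illustration), the chance that a majority of a panel shares it collapses as the panel grows.

```python
from math import comb

p = 0.2  # assumed chance that any one rater holds the unreasonable opinion
for n in (1, 3, 15, 101):
    majority = n // 2 + 1
    prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(majority, n + 1))
    print(f"panel of {n:3d}: P(majority unreasonable) = {prob:.3g}")
```

A lone juror goes wrong one time in five; a 101-person panel essentially never does, which is the "astronomically low" claim above.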
It's not unreasonable in the sense that it is not without reason. But that's not what is questioned here. What is questioned is whether that is the only measure or if there exists another measure that can measure "quality of undergraduate medical education" at least as well. If there does exist such a measure, then the choice of using one or the other is arbitrary.
> We don't agree, and for the last time, that's the entire point of this debate! I think it's incredibly unlikely that any two measurements measure the intended variable indistinguishably well.
And even if I agreed that it measured the intended variable at least as well as PD rankings, peer review rankings would be the better standard, since the only options would be that those rankings are equally good or better, which is of course a better option than equally good or worse.
> Are you under the impression that only one residency director is surveyed in the residency director rankings? I'm pretty sure it's hundreds, from various schools and specialties.
I can't wait to read through all of this before bed. Just gotta finish this darn pre-lab for analytical chemistry 😢
you have my sympathies. analytical chemistry was an evil course.
Careful there, buddy 😛
but i thought you didn't like analytical chem either? and you're strictly an organic chemistry fan 🤔
> you have my sympathies. analytical chemistry was an evil course.

It's literally my least favorite UG class and we're only 3 weeks into it.
> What does PD stand for?

program director
> The whole point is to choose the one with the best reasoning. Obviously they all have some reasoning, but choosing the most logical option is in no way whimsical.

The whole point is that "best reasoning" is itself an arbitrary designation, made plainly evident by your own statement.
> your choice to choose those rankings would not be arbitrary, because you've laid out your logical reasoning for making that choice. It wouldn't be based on randomness or personal whim

when one's choice is based on one's personal reasoning it is by definition based on personal whim
> I was talking about the surveying of one institution, with the implicit understanding that this is generalized. We reduced the problem to make discussion easier. Surveying 150 residency directors (arbitrary number I chose) doesn't get you as close to the "true" value as surveying 1500 attendings who work closely with the person of interest. That's just statistics.

I don't really care whether peer rankings or PD rankings are better. I'm just saying that for any given applicant, there are factors that are more important than others in choosing a school, and certain ranking systems will best parallel those factors, making that the objectively best ranking for that particular applicant. And that makes it non-arbitrary, since it's the best logical system for that person's goals. I didn't mean to imply that there's a universally best ranking system, just that choosing between the different options isn't arbitrary since it's not random or whimsical but is logical. But I'm getting bored with this semantics debate, you can have the last word if ya want.
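A sketch of the sample-size claim in the quote above, under the textbook assumption of independent respondents with the same per-person spread of opinion (the spread of 2 points is invented; the 150 and 1500 are the poster's own numbers). The standard error of a survey mean shrinks as 1/sqrt(n).

```python
import math

sd = 2.0  # assumed per-respondent spread of opinion, in rating points
for label, n in (("150 residency directors", 150), ("1500 attendings", 1500)):
    print(f"{label:24s} standard error of mean = {sd / math.sqrt(n):.3f}")
```

The larger survey only wins if each respondent's opinion is equally informative and independent, which is exactly the premise the other side of the thread disputes.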
The whole premise of surveying PDs is severely flawed for many reasons. The first is the very notion that PDs are ideally positioned to evaluate resident performance and capability. The heterogeneity of the resident evaluation process in a single program at a single institution alone is enough to see that PDs are often not well suited to know how truly "good" a resident is.

The second question is also what a good resident is. Clinical acumen? Sure. But how is this judged? Patient satisfaction? Surgical outcomes? How can this be tracked in a manner that feeds back to the PD? More often than not you'll find that attendings judge residents more or less on how good their secretarial skills are. Then there's research output. Is an academic program going to look favorably on a clinical superstar who publishes no papers? How do you balance the two?

Next up is the simple problem that even if the first two questions were non-factors, most programs in most fields simply do not see the volume and diversity of graduates needed to form accurate impressions of any non-home institution. In my field, like most surgical specialties, residency classes are typically ~6 (4-12). Even under the most optimistic scenario a program might take a graduate from a given school once every 1 or 2 years. Is that a sufficient sample size even after 5-10 years of experience? And taken as a whole, a PD will only have direct experience with 40-50 schools at a maximum, and more likely 15-30. How is this person to draw accurate comparisons? The only exception to this would be the mega fields like medicine or peds, where a program can reliably have experience with multiple graduates of a given institution on a yearly basis.

Lastly, the most glaringly obvious problem with this particular method is that the response rate is atrocious.

If one critically analyzes the US News methodology, every single one of the components is frankly some varying degree of garbage. To say one is better than another ignores the obvious (perhaps not obvious) truth that they are all completely flawed, unscientific, and more or less invalid.

> The whole premise of surveying PDs is severely flawed for many reasons. [...]

Again, in attempting to get into a competitive program/specialty, it doesn't really matter if a PD's opinion about your school is accurate, it only matters how favorable the PD's opinion is. From my point of view, it's the role of the medical school to help its students obtain the residency positions that they desire, and favorable opinions from PDs about a school will help that school's students access that residency; at least, that's largely what I look for in med schools.
> The whole point is that "best reasoning" is itself an arbitrary designation, made plainly evident by your own statement.

> when one's choice is based on one's personal reasoning it is by definition based on personal whim

good lord I feel like an English teacher today, here's another definition for ya:
> Again, in attempting to get into a competitive program/specialty, it doesn't really matter if a PD's opinion about your school is accurate, it only matters how favorable the PD's opinion is. From my point of view, it's the role of the medical school to help its students obtain the residency positions that they desire, and favorable opinions from PDs about a school will help that school's students access that residency; at least, that's largely what I look for in med schools.

For an English teacher you sure don't read real well. My last point regarding response rate covers this already. Even were the response rate not dismal, it's still worthless. What the fck difference does an aggregate score from PDs in every field make to a specialty-specific residency applicant?
> good lord I feel like an English teacher today, here's another definition for ya:

Time for the teacher to sit down for a lesson. Instead of picking up on the single top search engine result, how about cracking open the actual OED available through your library?
whim: "a sudden desire or change of mind, especially one that is unusual or unexplained"
Someone's personal reasoning about why certain factors are important in a med school for his/her goals has nothing to do with whim.
This is getting so boring; you don't know what arbitrary or whim mean, and I'm tired of bashing my head against a wall trying to explain this!
> For an English teacher you sure don't read real well. My last point regarding response rate covers this already.

Oh, I'm definitely not saying the survey of PDs was done in an ideal way, just that I care about PD opinions more than the opinions of peer reviewers since PDs play larger roles in resident selection.
> Even were the response rate not dismal, it's still worthless. What the fck difference does an aggregate score from PDs in every field make to a specialty-specific residency applicant?

Most pre-meds don't know what specialty they want to go into by the time they have to decide which med school to attend (which is generally the same time they're looking at med school rankings), so specialty-specific PD rankings would be pretty difficult for most of us to use.
> Time for the teacher to sit down for a lesson. Instead of picking up on the single top search engine result, how about cracking open the actual OED available through your library?

Eh, that's too much effort for a boring debate haha
Arbitrary -
1. To be decided by one's liking; dependent upon will or pleasure; at the discretion or option of any one.
3. Derived from mere opinion or preference; not based on the nature of things; hence, capricious, uncertain, varying.

Whim -
3a. capricious notion or fancy

If you're going to play teacher it helps to know what you're talking about. This applies equally to English, rankings, statistics, and researching things on the internet.

There are logical reasons why certain ranking systems would be objectively more accurate for well-defined groups of pre-meds with specific long-term goals. That's all I'm gonna say in this thread, but feel free to rebut.
> What the fck difference does an aggregate score from PDs in every field make to a specialty-specific residency applicant?

Can you clarify what you mean by this? Like, are you saying a certain school (let's say Yale) might prepare people extremely well for an IM residency but not for surg? I figured it would be pretty constant, that the traits that make someone a good resident would make them a good resident in whatever area.
> Can you clarify what you mean by this? Like, are you saying a certain school (let's say Yale) might prepare people extremely well for an IM residency but not for surg? I figured it would be pretty constant, that the traits that make someone a good resident would make them a good resident in whatever area.
> If you're a citizen living in the U.S., you don't really care about the global poverty level. Because it doesn't affect you. The measure that directly affects you is the poverty level in America. Yes, perhaps you don't know what field you will be going into when you enter med school, but that doesn't mean that the aggregated residency director ranking automatically becomes the best measure. The better solution would be to ignore residency director rankings when you're entering med school, because you don't know which field you're going into, so the number is meaningless.

I think I'm missing the analogy. Are you saying the schools that produce good residents in one specialty do not produce good residents in a different specialty? Is that why global PD ratings would be a poor measure, that you need specialty-specific PD ratings like you need nation-specific wealth distribution data?
This would be a very new idea to me; I thought the traits desired in a fresh med school graduate would be standard across specialties.