Lol, just looked at the data and realized that the Stony Brook match list abbreviated U Penn as UPMC, so I gave it a value of 1.0... doh....
Well, UPMC isn't bad either. U Pitt has a decent hospital system (not really comparable to U Penn of course😛).

Hey pyrois, are you considering prelim/primary matches at all, and if so, how are you considering them? What about people who don't match into an internal medicine program and only get a prelim match?
Unfortunately it would seem that my number is totally worthless right now since the way I'm ranking the schools is probably not at all like how pyrois is ranking them.
Well, the maximum differential between a school's flat tier average (the base score) and its financial/location/self-match modified score is 0.50, so you can't be off by much. For some schools, the difference between the modified score and the base score is as little as 0.05, and sometimes it just averages out to 0.
If we tiered the schools drastically differently, though, that might make a larger difference😛 Well, come to think of it, 0.50 is pretty big on a 4-point scale. Eh, I'll run it and see what the difference is😛
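Just to make the arithmetic concrete, here's a toy sketch of how I'm reading the base-score-vs-modified-score comparison (the tier values and modifier sizes are made up for illustration, not pyrois's actual numbers):

[CODE]
<?php
// Hypothetical tier values for a handful of IM matches from one match list.
// Tier 1 = 1.0 ... Tier 4 = 4.0, purely for illustration.
$matches = [1.0, 2.0, 2.0, 3.0, 3.0, 4.0];

// Base score: the flat average of the tier values.
$base = array_sum($matches) / count($matches);

// Made-up financial/location/self-match modifiers; the combined adjustment
// is assumed here to be capped at +/- 0.50.
$modifiers = ['financial' => 0.10, 'location' => -0.05, 'self_match' => 0.03];
$adjust    = max(-0.50, min(0.50, array_sum($modifiers)));
$modified  = $base + $adjust;

printf("base = %.2f, modified = %.2f, difference = %.2f on a 4-point scale\n",
       $base, $modified, $modified - $base);
[/CODE]

Even with every modifier pulling in the same direction, the modified score can only drift 0.50 away from the flat tier average, which is the point above.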
USNews also comes out with a separate ranking of Internal Medicine departments every year. It looks different from the "Best Hospitals" or "Top Medical Schools-Research" lists. It's probably more accurate too:
How can anyone possibly care about the results of this "experiment" without asking about the premises?
Well, that depends. Who provided the data on the ranking, and what percentage response rate to the survey did they have? (Not all US News data is particularly compelling -- for the residency director portion of the research rankings, for example, they have a tiny percentage of a smattering of specialties responding, hardly scientific data; is the response on this better?) Only with such info can you come to the conclusion that it is "more accurate".
USNEWS said: In the fall of 2006, residency program directors were asked to rate programs on two separate survey instruments. One survey dealt with research and was sent to a sample of residency program directors in fields outside primary care, including surgery, psychiatry, and radiology. The other survey involved primary care and was sent to residency directors in the fields of family practice, pediatrics, and internal medicine. Survey recipients were asked to rate programs on a scale from "marginal" (1) to "outstanding" (5). Those individuals who did not know enough about a program to evaluate it fairly were asked to mark "don't know." A school's score is the average of all the respondents who rated it. Responses of "don't know" counted neither for nor against a school. About 25 percent of those surveyed for research medical schools responded. Eighteen percent responded for primary-care. The source for the names for both of the residency directors' surveys was the Graduate Medical Education Directory 2006-2007 edition, published by the American Medical Association.
Regarding the IM dept rankings that are posted above - this issue seems to be debated many times a year on IM Residency Forums. If you look at the US News methodology, it's a little uncertain what it is they are actually ranking, but one thing is certain: they are not ranking IM departments. For example, Harvard is on the list as #2, but Harvard does not even have an IM department. Harvard does not have its own hospital - it has affiliated hospitals (MGH, BWH, BIDMC, Mt. Auburn, Cambridge City, VA Roxbury, etc.) which all have their own IM departments.
Based on the US News website, it appears they ask medical school deans to nominate 10 medical schools offering the best programs in each medical specialty area (including IM). I'm not even sure what this means. It's not a residency program evaluation, it's not an evaluation of patient care, it's not an evaluation of research - what is it? Is it an evaluation of internal medicine training by a given school for its medical students? So is it an internal medicine clerkship evaluation? The answer is, it's none of these - it's just another stupid list generated by US News to give rank-obsessed people like us something to bicker over. Forgive me and my little rant on US News and their self-generated business on ranking things.
Regarding the whole match list analysis - all I have to say, Pyrois, is whoa. I'm hoping you end at Stanford so you can work with some of the gene chip analysis guys there and put your programming and analysis skills to use in a slightly more productive context. Of course, then you'll probably quit med school after two years to go start a company and become fabulously wealthy but that's another story.
I really don't understand why people propagate the notion that the usnews residency director survey methodology is flawed. Oh wait, I do understand why, but it still bugs me.
USnews reports its methodology here: http://www.usnews.com/usnews/edu/grad/rankings/about/08med_meth_brief.php
For the relevant excerpt see below.
To sum up:
Research residency director surveys were returned by 25% of program directors in surgery, psychiatry and radiology. Why only these programs? I don't know but it is probably because a) they aren't primary care, and b) there are a lot of them.
How many residency directors did they ask to take the survey?
From what it looks like, they asked 616 residency directors. (see http://www.ama-assn.org/vapp/freida/srch/1,1239,,00.html.)
How many residency directors returned the survey?
Around 154 of them returned the survey. If you think there is a problem with the methodology because of the number of responses, or the proportion of responses, you are either misinformed, or have decided to fundamentally disagree with most statisticians.
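For what it's worth, here's a back-of-the-envelope sketch of the sampling error you'd get from 154 responses out of 616, if you assume the respondents were a random draw (which, admittedly, is exactly the assumption people dispute):

[CODE]
<?php
// Back-of-the-envelope sampling error for the research survey, assuming
// (generously) that the 154 respondents were a random draw from the 616
// directors surveyed. The 616 and 154 figures are the ones quoted above.
$N = 616;           // residency directors asked
$n = 154;           // residency directors who responded
$p = 0.5;           // worst-case proportion for error purposes

// Standard error of a proportion with a finite population correction.
$se  = sqrt($p * (1 - $p) / $n) * sqrt(($N - $n) / ($N - 1));
$moe = 1.96 * $se;  // roughly a 95% margin of error

printf("response rate = %.0f%%, margin of error ~ +/- %.1f%%\n",
       100 * $n / $N, 100 * $moe);
// Prints roughly: response rate = 25%, margin of error ~ +/- 6.8%
// Note this says nothing about WHO responded; nonresponse bias is a
// separate issue from sample size.
[/CODE]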
Here's the excerpt, for your viewing pleasure.
are you saying that the methodology is sound because 154 responses is a large sample size? if so, you're embarrassing yourself. it's not like everybody did the survey and the mail receiver at usnews randomly discarded 75% of them. there can be (and likely are) serious bias issues related to *which 25%* decided to do the survey. the data aren't missing at random, and so bias (not just error from a smaller sample size) is likely in the results.
But we use surveys all the time to help us make decisions.
And this statement is supposed to convince me about something?
We use surveys all the time to produce trivial results that usually lead to bad, or random decisions.
When was the last time you read a scientific paper that used a "survey" of what people think happens in the human body to prove their hypothesis?
Of course there is a response bias, it's a survey. But we use surveys all the time to help us make decisions. I'm sick of people throwing out usnews residency score data as 'useless' because of a 'low response percentage'. That's just flat wrong.
Have you read JAMA lately? That rag has just as many methodology problems in every issue as the usnews residency director survey. Yet medical practice is based on incomplete data because clinicians prefer some information to perfect information. This is the way it works.
I think you proved my point for me. 25% of program directors in three specialties simply doesn't mean much. It might be a statistically significant sampling based on % of folks surveyed, but of what? What is this really telling you? Most people go into primary care, even from the top research ranked schools, yet these programs were excluded from the research ranking analysis. Of those who go into other specialties, why were these three highlighted? Does this encompass what most people go into, or just the specialties that happened to respond? Is the opinion of a radiologist or psychiatrist as to what med school is best relevant to a budding orthopod? That is why this component of the survey is troubling to me. US News is treating all residency program directors it surveys as interchangeable, and so if it gets 25% responses, it considers that good. But if that 25% is not a fair sampling of the various specialties, which their footnote asserts it is not, I think it renders the results questionable. Most statisticians would agree that how you define your respondent pool makes a difference. 25% response of a group that was not a fair sampling in the first place does not yield useful data. That's what I'm saying.
i would trust it more than people magazine's hottest people of the year crap, but not by much.
No, what you said is:
Hey, if it makes you happy, rely on it as gospel. You seem to be missing the points being raised by a couple of us. Or you just badly want to believe. Which is fine by me.
I am sorry, but if you think randomized controlled trials are all that you find in the medical literature, you, my young friend, have a lot to learn.
again, the issue isn't 'low response percentage', but which schools chose to respond versus not respond. yes, these issues are typical with surveys, but good survey analyses will use more sophisticated techniques to correct for bias in the responses, by discerning whether those who responded had substantial characteristic differences from those who did not respond.
i would also add that studying disease in human bodies is a good bit different than studying the attitudes of a population of individuals. i can study polio in one individual and learn something useful about how the disease likely works in others, but it would be insane to ask the residency director of joe state university what he thinks the best programs are and use that as the definitive list.
now what if the 25% that responded to usnews were *all* from med schools in the lowest quartile of mcat/gpa or acceptance rates? the highest? you'd get dramatically different results.
i won't say the rankings are useless, but i don't think they're very reliable. i would trust it more than people magazine's hottest people of the year crap, but not by much. usnews grad school ranking methods (all self-report) are even crappier.
i think the biggest crime is that having these "rankings" leads to an unhealthy obsession among pre-meds, med students, and schools about being at or being the "best" school. ask folks on sdn on both sides of the matching process and most will tell you that school name doesn't take you too far beyond what you do as an individual. i'm convinced now that it's healthier to obsess over which school will make me the happiest, and not which one some for-profit news rag with flawed methods tells me is the "best" (whatever the hell that's supposed to mean when thinking about schools).
1) The idea of RANKINGS of medical schools as having value is a little silly, of course. The problem is that the resolution of their measurement tool is too coarse to distinguish between consecutively ranked schools.
I actually agree with you on this one. That's why I'm using a broad tier system.
Disputes about rankings usually occur only at the "edges" (if schools are ranked 1-100, then there are 99 "edges" to cause conflict). If you group them in massive categories, some argument will ensue at the edges, but they will be relatively minimized.
If you disagree with a tiering system, then I think that's just silly. If not for some notion of where schools "stand" we'd have no idea how to select schools to apply to. It would be a waste of money for a kid with no EC's, a 20 on his MCAT and a 2.0 GPA to apply to Harvard Med, just as it would be silly for a guy who founded the Red Cross, got a 45 on his MCAT, and a 4.0 from MIT to apply to a Caribbean school.
In some ways, the reason why I am attempting this semi-"natural" ranking algorithm is to show people that there are more ways to look at schools than simply what US News spits out each year.
To know that some "lower ranked schools" like Pritzker and Vanderbilt actually outperform many other "higher ranked schools" in the match is meaningful, and releases some of the tension people feel when turning down a "higher ranked" school.
Wow that was a lot of "quotes."
Sorry for hijacking your thread. I actually think your algorithm is interesting, but I got sick of Law2Doc's lawyering attitude to the facts.
Hey, could you put RWJ into your algorithm? I'm a huge fan of your comics, I think they are great. Keep them coming. Here's a link to the 2007 RWJ match list.
http://rwjms.umdnj.edu/admissions/our_students_match_list_2007.htm
thanks
You know, I've received a bunch of PMs asking me to run more schools. I'll probably do another dozen or two, but what would you guys think about me just uploading the program (it's all in php/perl already anyway) and letting you guys input values?
My only gripe is that somebody might put in a bunch of bogus information for a school and mess things up, but it's all in good fun anyway. I can always go back and check things every once in a while.
Sound like a worthy plan? (I don't want to waste my time doing it otherwise😛)
weren't you going to release the program? I'd like to see Wayne State if you could...
http://www.med.wayne.edu/news_media/scribe/PDF/06 Summer Scribe.v5.pdf
It starts on page 8...
Ouch man, at least make a list for him or something! It must be a pain to locate all the IM matches since they're jumbled up, and to type them all individually into his program since you can't copy/paste anything. At least I'm assuming he types 'em in =P.
alright... fine 😉
what kind of format should I do that in ?
Haha, no idea... but I assume an Excel list with the hospital names would be fine.
A few notes:
1) Home match will end up biasing this ranking. Consider two possibilities - either home match is significantly easier, or a greater proportion of matches to highly competitive IM programs outside the home institution indicates the competitiveness of a school's overall match.
2) Medical schools which have a particularly strong or weak internal medicine department compared to the rest of the medical school will have a result out of proportion with the rest of the match (e.g. UAB).
3) I don't agree at all that Mayo is a Tier 1 program. If you consider the competitiveness required to match at a given program rather than the "quality" of the program (as you should for this metric), then Mayo should not be amongst the top.
4) It appears that matching at JHU, UCSF, MGH, & B&W is considerably more difficult than matching at a Tier 2 program i.e. the distance in competitiveness between Tier 1 and Tier 2 vs. Tier 2 and Tier 3, etc. is exponential.
5) I won't get into issues regarding Ivy League / West Coast / California / anti-South/Midwest biases, which are endemic to the system.
Finally, you have way too much time on your hands 😀
The way I currently process home matches: I take the ratio of matches "above" home to matches "below" home, normalize it to 1 using the average ratio across all schools currently in the system as a "base value," and then use that as a weighting factor for the home match.
Take, for example, Yale. They are one of the few schools that match fewer people into their home program than into other programs (2 match back into Yale in IM, while 6 go to Harvard, 5 go to UCSF, 1 to UPenn, and 1 to Mayo). This actually elevates their matching ability even more, since they don't need to use their home match as a "crutch" or a "backup."
WashU on the other hand, matches nearly half back into their own program, and fewer to "better" hospitals. Although their match is good, it does seem that they are falling back to WashU's home programs at a relatively high rate.
The modifiers generated by this effect are relatively small though. Usually in the 0.02-0.05 range. But it all adds up.
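If it helps to see the idea spelled out, here's a rough sketch (the function, numbers, and exact normalization here are simplified stand-ins, not the actual code):

[CODE]
<?php
// Rough sketch of the home-match weighting idea (simplified; not the real code).
// For one school: count IM matches to programs tiered "above" vs. "below" its
// own home program, then compare that ratio to the average ratio across all
// schools currently in the system.

function home_match_weight(int $above, int $below, float $avg_ratio): float {
    // Ratio of matches above home to matches below home (guard against /0).
    $ratio = $above / max($below, 1);
    // Normalize against the all-school average so a typical school sits near 1.0.
    return $ratio / $avg_ratio;
}

// Hypothetical numbers: 12 matches above home, 4 below, in a pool whose
// average above/below ratio is 2.0.
$weight = home_match_weight(12, 4, 2.0);

// The weight then scales how the home matches count, so a school that clearly
// matches "upward" when it leaves home isn't penalized for its home matches.
printf("home-match weighting factor = %.2f\n", $weight);
[/CODE]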
I think there is a slight misunderstanding in the way I "tier" the programs. The tiers are purely "merit" based, not based on "competitiveness." The location of Mayo actually inherently lowers its score (the modifier for location bias is calculated from within the data pool). If I were to lower Mayo out of the top tier solely due to location, it would STILL get docked points for location during processing. It's only the final ratings, with modifiers, that I consider to be correlated with competitiveness.
If you still think Mayo doesn't deserve to be a tier 1 school even based purely on merit, I can easily change it, but to be honest it will only visibly change the ranking of Mayo itself since most other schools match either 1 or 0 students into Mayo's residency programs.
I have a nifty trick for attempting to approximate these self-selection biases. I first check whether there is an in-state bias for a school. If there is, I add a positive modifier to all schools within the state based on how heavy the in-state bias is.
For California schools, this boosts the ratings of most in-state schools by almost 0.5.
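Sketched out in rough form (the threshold, cap, and bonus size below are invented stand-ins, and I'm reading the bonus as applying to in-state programs when a school's list is scored -- not the actual code):

[CODE]
<?php
// Sketch of the in-state bias adjustment (invented threshold/cap/sizes, and the
// way the bonus is applied is just shorthand for the idea, not the real code).

function in_state_modifier(array $matches, string $home_state, float $expected_share): float {
    // Share of this school's IM matches that stayed in its home state.
    $in_state = count(array_filter($matches, fn($m) => $m['state'] === $home_state));
    $share    = $in_state / max(count($matches), 1);

    // Only reward the part of the in-state share that exceeds what you'd
    // expect anyway, and cap the bonus at 0.5.
    return min(0.5, max(0.0, $share - $expected_share));
}

// Hypothetical example: 18 of 30 IM matches stayed in California, against an
// expected in-state share of 0.20.
$matches = array_merge(
    array_fill(0, 18, ['state' => 'CA']),
    array_fill(0, 12, ['state' => 'other'])
);
printf("in-state modifier = %.2f\n", in_state_modifier($matches, 'CA', 0.20));
// Prints: in-state modifier = 0.40 -- in the ballpark of the ~0.5 boost above.
[/CODE]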
Excellent. I ought to remind myself that the flip side, of course, is that your home program is really that good.
i'll just go back to blindly looking at how many rads matches are listed 😉 👍
Nothing wrong with that, that's how I used to do it😛

although I would like to play around with this thing... it is, sadly, something I would consider "fun"!
Let's all please take a second to take UMP's sentence out of context.

How can anyone possibly care about the results of this "experiment" without asking about the premises?
You totally made that up. At least 40% of statistics are made up on the spot, as often as 60% of the time. They've done studies, you know. 60% of the time, it works every time.