IM match list analysis

Lol, just looked at the data and realized I had read UPMC on the Stony Brook match list as an abbreviation for U Penn, so I gave it a value of 1.0... doh...

Well, UPMC isn't bad either. U Pitt has a decent hospital system (not really comparable to U Penn of course:p).

 
Hey pyrois, are you considering prelim/primary matches at all, and if so, how are you handling them? What about people who don't match into a categorical internal medicine spot and only get a prelim match?

Sorry, not considering prelims/primaries. The system is only set up right now to work with straight-up IM. Fortunately, there are usually enough IM matches to spit out something meaningful.
 
You know, I've received a bunch of PMs asking me to run more schools. I'll probably do another dozen or two, but what would you guys think about me just uploading the program (it's all in PHP/Perl already anyway) and letting you input values?

My only gripe is that somebody might put in a bunch of bogus information for a school and mess things up, but it's all in good fun anyway. I can always go back and check things every once in a while.

Sound like a worthy plan? (I don't want to waste my time doing it otherwise:p)
 
Unfortunately it would seem that my number is totally worthless right now since the way I'm ranking the schools is probably not at all like how pyrois is ranking them.

Well, the maximum differential between the flat average of tiers (the base score) for a school and the financial/location/self-match modified score is 0.50, so you can't be off by much. For some schools the difference between the modified score and the base score is as little as 0.05, and sometimes it just averages out to 0.

If we tiered the schools drastically differently, though, that might make a larger difference :p Well, come to think of it, 0.50 is pretty big on a 4-point scale. Eh, I'll run it and see what the difference is :p
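
For the curious, the skeleton of the calculation looks something like this -- toy tier values and modifiers only, since the real weights come out of the data pool:

<?php
// Toy skeleton of the scoring (illustrative values only -- the real tier
// values and modifier weights are derived from the data pool).
$tierValue = [1 => 4.0, 2 => 3.0, 3 => 2.0, 4 => 1.0];   // assuming higher = better here

// One entry per IM match on a school's list, by tier.
$matches = [1, 2, 2, 3, 4];
$base = array_sum(array_map(fn($t) => $tierValue[$t], $matches)) / count($matches);

// Financial / location / self-match adjustments, capped so the modified score
// never drifts more than 0.50 from the base score.
$modifiers = ['financial' => 0.10, 'location' => -0.05, 'self_match' => 0.03];
$delta = max(-0.50, min(0.50, array_sum($modifiers)));

printf("base %.2f, modified %.2f\n", $base, $base + $delta);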
 
USNews also comes out with a separate ranking of Internal Medicine departments every year. It looks different than the "Best Hospitals" or "Top Medical Schools-Research" list. It's probably more accurate too:

USNews Internal Medicine Ranked in 2007:
1 Johns Hopkins
2 Harvard
3 UCSF
4 Penn
5 Duke
6 UW
7 WashU
8 Michigan
9 Yale
10 Columbia
10 UTSW
12 Stanford
13 UAB
14 UCLA
14 Vanderbilt
16 Mayo
17 UNC
18 Cornell
19 U Chicago
20 Northwestern
20 UCSD
22 Pitt
23 Emory
24 Mt. Sinai
24 Iowa
24 Rochester

And the Internal Medicine Rankings from 2006:
1. Johns Hopkins University (MD)
2. Harvard University (MA)
3. University of California–San Francisco
4. Duke University (NC)
5. University of Pennsylvania
6. Washington University in St. Louis
7. University of Washington
8. University of Michigan–Ann Arbor
9. U. of Texas Southwestern Medical Center–Dallas
10. Stanford University (CA)
11. Columbia U. College of Physicians and Surgeons (NY)
Yale University (CT)
13. University of Chicago (Pritzker)
14. University of California–Los Angeles (Geffen)
15. Cornell University (Weill) (NY)
16. Vanderbilt University (TN)
17. Mayo Medical School (MN)
18. Emory University (GA)
University of Alabama–Birmingham
University of California–San Diego
University of North Carolina–Chapel Hill
22. University of Pittsburgh
23. University of Colorado–Denver and Health Sciences Center
University of Virginia
25. Baylor College of Medicine (TX)
Northwestern University (Feinberg) (IL)
27. Pennsylvania State University College of Medicine

 
USNews also comes out with a separate ranking of Internal Medicine departments every year. It looks different than the "Best Hospitals" or "Top Medical Schools-Research" list. It's probably more accurate too:

Well that depends. Who provided the data on the ranking and what percentage response rate to the survey did they have? (not all US News data is particularly compelling -- for the residency director portion of the research rankings, for example they have a tiny percentage of a smattering of specialties responding, hardly scientific data; is the response on this better?). Only with such info can you come to the conclusion that it is "more accurate".
 
How can anyone possibly care about the results of this "experiment" without asking about the premises?
 
How can anyone possibly care about the results of this "experiment" without asking about the premises?

not my fault you didn't ask

although some people asked relevant questions about it, and I explained some stuff earlier
 
Well that depends. Who provided the data on the ranking and what percentage response rate to the survey did they have? (not all US News data is particularly compelling -- for the residency director portion of the research rankings, for example they have a tiny percentage of a smattering of specialties responding, hardly scientific data; is the response on this better?). Only with such info can you come to the conclusion that it is "more accurate".

I really don't understand why people propagate the notion that the usnews residency director survey methodology is flawed. Oh wait, I do understand why, but it still bugs me.

USnews reports its methodology here: http://www.usnews.com/usnews/edu/grad/rankings/about/08med_meth_brief.php

For the relevant excerpt see below.

To sum up:
Research residency director surveys were returned by 25% of program directors in surgery, psychiatry and radiology. Why only these programs? I don't know but it is probably because a) they aren't primary care, and b) there are a lot of them.

How many residency directors did they ask to take the survey?
From what it looks like, they asked 616 residency directors. (see http://www.ama-assn.org/vapp/freida/srch/1,1239,,00.html.)

How many residency directors returned the survey?
Around 154 of them returned the survey. If you think there is a problem with the methodology because of the number of responses, or the proportion of responses, you are either misinformed, or have decided to fundamentally disagree with most statisticians.

Here's the excerpt, for your viewing pleasure.

USNEWS/ said:
In the fall of 2006, residency program directors were asked to rate programs on two separate survey instruments. One survey dealt with research and was sent to a sample of residency program directors in fields outside primary care, including surgery, psychiatry, and radiology. The other survey involved primary care and was sent to residency directors in the fields of family practice, pediatrics, and internal medicine. Survey recipients were asked to rate programs on a scale from "marginal" (1) to "outstanding" (5). Those individuals who did not know enough about a program to evaluate it fairly were asked to mark "don't know." A school's score is the average of all the respondents who rated it. Responses of "don't know" counted neither for nor against a school. About 25 percent of those surveyed for research medical schools responded. Eighteen percent responded for primary care. The source for the names for both of the residency directors' surveys was the Graduate Medical Education Directory 2006-2007 edition, published by the American Medical Association.
 
Regarding the IM dept rankings that are posted above - this issue seems to be debated many times a year on IM Residency Forums. If you look at the US News methodology, it's a little uncertain what it is they are actually ranking but one thing is certain, they are not ranking IM departments. For example, Harvard is on the list as #2 but Harvard does not even have an IM department. Harvard does not have its own hospital - it has affiliated hospitals (MGH, BWH, BIDMC, Mt. Auburn, Cambridge City, VA Roxbury, etc) which all have their own IM departments.

Based on the US News website, it appears they ask medical school deans to nominate 10 medical schools offering the best programs in each medical specialty area (including IM). I'm not even sure what this means. It's not a residency program evaluation, it's not an evaluation of patient care, it's not an evaluation of research - what is it? Is it an evaluation of internal medicine training by a given school for its medical students? So is it an internal medicine clerkship evaluation? The answer is, it's none of these - it's just another stupid list generated by US News to give rank-obsessed people like us something to bicker over. Forgive me and my little rant on US News and their self-generated business on ranking things.

Regarding the whole match list analysis - all I have to say, Pyrois, is whoa. I'm hoping you end at Stanford so you can work with some of the gene chip analysis guys there and put your programming and analysis skills to use in a slightly more productive context. Of course, then you'll probably quit med school after two years to go start a company and become fabulously wealthy but that's another story.
 
Regarding the IM dept rankings that are posted above - this issue seems to be debated many times a year on IM Residency Forums. If you look at the US News methodology, it's a little uncertain what it is they are actually ranking but one thing is certain, they are not ranking IM departments. For example, Harvard is on the list as #2 but Harvard does not even have an IM department. Harvard does not have its own hospital - it has affiliated hospitals (MGH, BWH, BIDMC, Mt. Auburn, Cambridge City, VA Roxbury, etc) which all have their own IM departments.

Based on the US News website, it appears they ask medical school deans to nominate 10 medical schools offering the best programs in each medical specialty area (including IM). I'm not even sure what this means. It's not a residency program evaluation, it's not an evaluation of patient care, it's not an evaluation of research - what is it? Is it an evaluation of internal medicine training by a given school for its medical students? So is it an internal medicine clerkship evaluation? The answer is, it's none of these - it's just another stupid list generated by US News to give rank-obsessed people like us something to bicker over. Forgive me and my little rant on US News and their self-generated business on ranking things.

Regarding the whole match list analysis - all I have to say, Pyrois, is whoa. I'm hoping you end at Stanford so you can work with some of the gene chip analysis guys there and put your programming and analysis skills to use in a slightly more productive context. Of course, then you'll probably quit med school after two years to go start a company and become fabulously wealthy but that's another story.

You beat me to the punch.

I had trouble using the US News "IM" rankings because if you actually look at match lists and then at the IM rankings, you'll realize that ranking hospital departments by their affiliated universities just isn't accurate. Harvard is at the top even though there are real differences between MGH/B&W and Beth Israel Deaconess. Likewise, there are places like California Pacific that some would argue collaborate with UCSF, but it isn't really UCSF's teaching hospital.

As for the gene chip thing, that would be fun, and thanks for the compliment:p In the meantime, this should be entertaining enough:p
 
How many residency directors returned the survey?
Around 154 of them returned the survey. If you think there is a problem with the methodology because of the number of responses, or the proportion of responses, you are either misinformed, or have decided to fundamentally disagree with most statisticians.

are you saying that the methodology is sound because 154 responses is a large sample size? if so, you're embarrassing yourself. it's not like everybody did the survey and the mail receiver at usnews randomly discarded 75% of them. there can be (and likely are) serious bias issues related to *which 25%* decided to do the survey. the data aren't missing at random, and so bias (not just error from a smaller sample size) is likely in the results.
 
are you saying that the methodology is sound because 154 responses is a large sample size? if so, you're embarrassing yourself. it's not like everybody did the survey and the mail receiver at usnews randomly discarded 75% of them. there can be (and likely are) serious bias issues related to *which 25%* decided to do the survey. the data aren't missing at random, and so bias (not just error from a smaller sample size) is likely in the results.

Of course there is a response bias, it's a survey. But we use surveys all the time to help us make decisions. I'm sick of people throwing out usnews residency score data as 'useless' because of a 'low response percentage'. That's just flat wrong.

As far as the severity of the response bias:
154 residency directors out of 616 does involve a response bias, but you are only interested in what those 616 people think. When 154 of them tell you what they think, that does a pretty good job. The issue isn't 'large sample size,' as you put it, but a small population of interest.
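
If you want a rough number on that (my own back-of-the-envelope math, not anything US News publishes), the finite-population correction for 154 respondents out of a population of 616 looks like this:

<?php
// Back-of-the-envelope sampling error for 154 respondents out of 616 directors.
// This only addresses sampling error, not the non-response bias being debated here.
$N = 616;   // residency directors surveyed
$n = 154;   // directors who responded
$p = 0.5;   // worst-case proportion for a yes/no-style question

$se  = sqrt($p * (1 - $p) / $n);        // simple-random-sample standard error
$fpc = sqrt(($N - $n) / ($N - 1));      // finite-population correction
printf("95%% margin of error ~ +/- %.1f points\n", 1.96 * $se * $fpc * 100);  // roughly +/- 6.8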

Have you read JAMA lately? That rag has just as many methodology problems in every issue as the usnews residency director survey. Yet medical practice is based on incomplete data because clinicians prefer some information to perfect information. This is the way it works.

Let's see here:
Risk factors for cardiovascular disease--USELESS, Framingham is based on men.
Silent gallstones--get the knife back and cut them all out, the sample was a bunch of white university faculty members.
Normal temperature--no idea! 37°C was a bunch of male medical students a long time ago.
 
To sum up:
Research residency director surveys were returned by 25% of program directors in surgery, psychiatry and radiology. Why only these programs? I don't know but it is probably because a) they aren't primary care, and b) there are a lot of them.

I think you proved my point for me. 25% of program directors in three specialties simply doesn't mean much. It might be a statistically significant sampling based on % of folks surveyed, but of what? What is this really telling you? Most people go into primary care, even from the top research ranked schools, yet these programs were excluded from the research ranking analysis. Of those who go into other specialties, why were these three highlighted? Does this encompass what most people go into, or just the specialties that happened to respond? Is the opinion of a radiologist or psychiatrist as to what med school is best relevant to a budding orthopod? That is why this component of the survey is troubling to me. US News is treating all residency program directors it surveys as interchangeable, and so if it gets 25% responses, it considers that good. But if that 25% is not a fair sampling of the various specialties, which their footnote asserts it is not, I think it renders the results questionable. Most statisticians would agree that how you define your respondent pool makes a difference. 25% response of a group that was not a fair sampling in the first place does not yield useful data. That's what I'm saying.
 
But we use surveys all the time to help us make decisions.

And this statement is supposed to convince me about something?

We use surveys all the time to produce trivial results that usually lead to bad, or random decisions.

When was the last time you read a scientific paper that used a "survey" of what people think happens in the human body to prove their hypothesis?
 
And this statement is supposed to convince me about something?

We use surveys all the time to produce trivial results that usually lead to bad, or random decisions.

When was the last time you read a scientific paper that used a "survey" of what people think happens in the human body to prove their hypothesis?

Yes, it is. Ever read Hegel?

If you would like more convincing, how about this:

When you are trying to answer a question using research, you first have to ask yourself about the nature of the question. If you are interested in the mechanism of cardiovascular disease in residency directors, then a survey will probably give you some information, but there are much better ways to do that. But if you are interested in what they think, the best way to find out would be to ask them. I am sorry, but if you think randomized controlled trials are all that you find in the medical literature, you, my young friend, have a lot to learn. Surveys are, in fact, all over the place in many specialties, particularly when you are asking questions about the effect of treatment on quality of life.

I am sorry, but "usnews is flawed because it uses surveys" just doesn't cut it.
 
Of course there is a response bias, it's a survey. But we use surveys all the time to help us make decisions. I'm sick of people throwing out usnews residency score data as 'useless' because of a 'low response percentage'. That's just flat wrong.

Have you read JAMA lately? That rag has just as many methodology problems in every issue as the usnews residency director survey. Yet medical practice is based on incomplete data because clinicians prefer some information to perfect information. This is the way it works.

again, the issue isn't 'low response percentage', but which schools chose to respond versus not respond. yes, these issues are typical with surveys, but good survey analyses will use more sophisticated techniques to correct for bias in the responses, by discerning whether those who responded had substantial characteristic differences from those who did not respond.
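
concretely, the kind of correction i mean is post-stratification weighting -- something like this (the specialty split and response counts below are made up for illustration):

<?php
// Illustrative post-stratification weighting: each respondent is re-weighted so that
// every specialty counts in proportion to its share of everyone surveyed, not just
// its share of those who happened to answer. All numbers below are invented.
$surveyed  = ['surgery' => 250, 'psychiatry' => 200, 'radiology' => 166]; // assumed split of the 616
$responded = ['surgery' => 80,  'psychiatry' => 30,  'radiology' => 44];  // assumed split of the 154

$weights = [];
foreach ($responded as $specialty => $n) {
    $weights[$specialty] = ($surveyed[$specialty] / array_sum($surveyed))
                         / ($n / array_sum($responded));
}
print_r($weights); // under-responding specialties get weights above 1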

i would also add that studying disease in human bodies is a good bit different than studying the attitudes of a population of individuals. i can study polio in one individual and learn something useful about how the disease likely works in others, but it would be insane to ask the residency director of joe state university what he thinks the best programs are and use that as the definitive list.

now what if the 25% that responded to usnews were *all* from med schools in the lowest quartile of mcat/gpa or acceptance rates? the highest? you'd get dramatically different results.

i won't say the rankings are useless, but i don't think they're very reliable. i would trust it more than people magazine's hottest people of the year crap, but not by much. usnews grad school ranking methods (all self-report) are even crappier.

i think the biggest crime is that having these "rankings" leads to an unhealthy obsession among pre-meds, med students, and schools about being at or being the "best" school. ask folks on sdn on both sides of the matching process and most will tell you that school name doesn't take you too far beyond what you do as an individual. i'm convinced now that it's healthier to obsess over which school will make me the happiest, and not which one some for-profit news rag with flawed methods tells me is the "best" (whatever the hell that's supposed to mean when thinking about schools).
 
I think you proved my point for me. 25% of program directors in three specialties simply doesn't mean much. It might be a statistically significant sampling based on % of folks surveyed, but of what? What is this really telling you? Most people go into primary care, even from the top research ranked schools, yet these programs were excluded from the research ranking analysis. Of those who go into other specialties, why were these three highlighted? Does this encompass what most people go into, or just the specialties that happened to respond? Is the opinion of a radiologist or psychiatrist as to what med school is best relevant to a budding orthopod? That is why this component of the survey is troubling to me. US News is treating all residency program directors it surveys as interchangeable, and so if it gets 25% responses, it considers that good. But if that 25% is not a fair sampling of the various specialties, which their footnote asserts it is not, I think it renders the results questionable. Most statisticians would agree that how you define your respondent pool makes a difference. 25% response of a group that was not a fair sampling in the first place does not yield useful data. That's what I'm saying.

No, what you said is: "for the residency director portion of the research rankings, for example they have a tiny percentage of a smattering of specialties responding"

a) it's not a tiny percentage
b) i would argue that it's not a smattering of specialties. Why did they pick surgery instead of orthopaedic surgery? As I alluded to, perhaps because there are more surgery residency directors (~250 vs. ~100). Would the putative orthopod have been better served if they had asked orthopods? Yes.
c) don't tell me the research RD survey is flawed because it doesn't tell you anything about primary care. That's just silly. If you want to go into primary care, look at the primary care RD survey. I simply didn't go into detail about it because you were ragging on the research RD survey, and because I'm not interested in primary care.


If you want to argue methodology based on a bias of who they asked, go for it, but I still think the data are useful. You are less likely to find out what pathology RDs think if you ask a bunch of radiology RDs, but are you going to get a reasonable approximation? Yes. A useful approximation? Yes.

Is rads, surgery, and psych a fair sampling? I think it might be; it cuts a fairly wide swath through non-primary care specialties. Is this a point that could be argued in a journal discussion group? Yep. Is it better than the Framingham study in terms of 'patient characteristics'? Probably.
 
i would trust it more than people magazine's hottest people of the year crap, but not by much.

Actually, this is more reliable, because you are equally qualified to judge whether such people are, in fact, "hot," and so you aren't forced to rely on a magazine's survey data being sound -- you can do an independent check based on the pictures.
 
Hey, if it makes you happy, rely on it as gospel. You seem to be missing the points being raised by a couple of us. Or you just badly want to believe. Which is fine by me.

agreed.
 
I am sorry, but if you think randomized controlled trials are all that you find in the medical literature, you, my young friend, have a lot to learn.

Oh, you're talking about CLINICAL journals.

Yeah, a great deal of BS.

Ever wonder why eggs are good for you one day, then bad for you the next, then cause cancer, but only in California?

Lousy surveys:p
 
again, the issue isn't 'low response percentage', but which schools chose to respond versus not respond. yes, these issues are typical with surveys, but good survey analyses will use more sophisticated techniques to correct for bias in the responses, by discerning whether those who responded had substantial characteristic differences from those who did not respond.

i would also add that studying disease in human bodies is a good bit different than studying the attitudes of a population of individuals. i can study polio in one individual and learn something useful about how the disease likely works in others, but it would be insane to ask the residency director of joe state university what he thinks the best programs are and use that as the definitive list.

now what if the 25% that responded to usnews were *all* from med schools in the lowest quartile of mcat/gpa or acceptance rates? the highest? you'd get dramatically different results.

i won't say the rankings are useless, but i don't think they're very reliable. i would trust it more than people magazine's hottest people of the year crap, but not by much. usnews grad school ranking methods (all self-report) are even crappier.

i think the biggest crime is that having these "rankings" leads to an unhealthy obsession among pre-meds, med students, and schools about being at or being the "best" school. ask folks on sdn on both sides of the matching process and most will tell you that school name doesn't take you too far beyond what you do as an individual. i'm convinced now that it's healthier to obsess over which school will make me the happiest, and not which one some for-profit news rag with flawed methods tells me is the "best" (whatever the hell that's supposed to mean when thinking about schools).


I think you have misunderstood some things or have taken some things out of context. Let me see if I can clear some things up, as well as respond to a few things.

REGARDING MY OPINIONS:
1) The idea of RANKINGS of medical schools as having value is a little silly, of course. The problem is that the resolution of their measurement tool is too coarse to distinguish between consecutively ranked schools.

2) I suspect that the residency director survey results used as PART of the usnews rankings actually provide some good information when used alone. Especially because this is what we tell ourselves we're interested in when we look at the rankings: will this school help me get a competitive residency? Sometimes we might actually be interested in the effect of the school on our ego, in which case the rankings overall are probably more useful.

REGARDING THE STRUCTURE OF GRADUATE AND UNDERGRADUATE MEDICAL EDUCATION:
3) These are RESIDENCY DIRECTORS, not medical schools, that are being surveyed. Residency programs are sometimes affiliated with a medical school, sometimes not. Often the hospital affiliates of a particular medical school have several residency programs for a single specialty.

REGARDING STATISTICS, SURVEYS AND RESPONSE BIAS:
4) Right, right, I understood you were talking about response bias. This is why I addressed it in my first reply to you. I agree it would be more rigorous if usnews would publish the characteristics of responding RDs vs. nonresponding RDs, but when you have numbers like 154/616, it is unlikely that, for example, all of the responders will be from the east coast or... from non academic medical institutions. Unlikely, but possible.

REGARDING STATISTICS AND DISEASES VS. ATTITUDES:
5)Yes, they are very different. Yes, you use different methodologies to ask questions about diseases vs. attitudes. And YES, standard statistical methods work with both of them.
 
1) The idea of RANKINGS of medical schools as having value is a little silly, of course. The problem is that the resolution of their measurement tool is too coarse to distinguish between consecutively ranked schools.

I actually agree with you on this one. That's why I'm using a broad tier system.

Disputes about rankings usually occur only at the "edges" (if schools are ranked 1-100, then there are 99 "edges" to cause conflict). If you group them in massive categories, some argument will ensue at the edges, but they will be relatively minimized.

If you disagree with a tiering system, then I think that's just silly. If not for some notion of where schools "stand" we'd have no idea how to select schools to apply to. It would be a waste of money for a kid with no EC's, a 20 on his MCAT and a 2.0 GPA to apply to Harvard Med, just as it would be silly for a guy who founded the Red Cross, got a 45 on his MCAT, and a 4.0 from MIT to apply to a Caribbean school.

In some ways, the reason why I am attempting this semi-"natural" ranking algorithm is to show people that there are more ways to look at schools than simply what US News spits out each year.

To know that some "lower ranked schools" like Pritzker and Vanderbilt actually outperform many other "higher ranked schools" in the match is meaningful, and releases some of the tension people feel when turning down a "higher ranked" school.

Wow that was a lot of "quotes."
 
Hey, if it makes you happy, rely on it as gospel. You seem to be missing the points being raised by a couple of us. Or you just badly want to believe. Which is fine by me.

Sweetheart, you are misconstruing yet again.

Did I say I relied on it as gospel? No.

Did I say I thought it was a helpful piece of information that shouldn't be tossed out as useless without ACTUALLY examining the facts? Yes.

Do either of us need to rely on these data? No. We are both in medical school. Yet we both feel the need to pontificate about their reliability. It could be because of the respective medical schools we are at, yes. It could also be because we have some other kind of ulterior motive.

Maybe you feel like discouraging the competitive status seeking attitude developing in our youth. Maybe I hate it when facts are subservient to ideology...
 
I actually agree with you on this one. That's why I'm using a broad tier system.


Sorry for hijacking your thread. I actually think your algorithm is interesting, but I got sick of Law2Doc's lawyering attitude to the facts.
 
Sorry for hijacking your thread. I actually think your algorithm is interesting, but I got sick of Law2Doc's lawyering attitude to the facts.

We all do what we do. Law2Doc's lawyering attitude may be frustrating sometimes, but it's also very helpful in that he has a knack for forcing people to dig up more and more supporting information for their arguments while also tactfully touching upon important counterarguments. SDN wouldn't be quite the same without him:p

As for me, what I do is throw in a snide comment here and there, but also sit back, read, and take into account the points being brought up.

Don't think your words are being lost on me. There's still a bit of work to be done before I think the algorithm is "decent." By the end a lot of your ideas will be mixed into the calculations.

And don't worry, once I'm "done" with my own tweaking, I'll put up the source code somewhere along with explanatory comments and I will certainly listen to anybody bored/interested enough to peruse them and critique them.
 
You know, I've received a bunch of PM's to run more schools. I'll probably do another dozen or two, but what would you guys think about me just uploading the program (it's all in php/perl already anyway) and letting you guys input values.

My only gripe is that somebody might put in a bunch of bogus information for a school and mess things up, but it's all in good fun anyway. I can always go back and check things every once in a while.

Sound like a worthy plan? (I don't want to waste my time doing it otherwise:p)

I'd be interested in this
 
Ouch man, at least make a list for him or something! It must be a pain to locate all the IM matches since they're jumbled up, and to type them all individually into his program since you can't copy/paste anything. At least I'm assuming he types 'em in =P.

alright... fine ;)

what kind of format should I do that in ?
 
alright... fine ;)

what kind of format should I do that in ?

Haha no idea.. but I assume an excel list with the hospital names would be fine.
 
A few notes:

1) Home match will end up biasing this ranking. Consider two possibilities: either the home match is significantly easier, or a greater proportion of matches to highly competitive IM programs outside the home institution, despite a strong home program, indicates the competitiveness of the overall school match.

2) Medical schools which have a particularly strong or weak internal medicine department compared to the rest of the medical school will have a result out of proportion with the rest of the match (e.g. UAB).

3) I don't agree at all that Mayo is a Tier 1 program. If considering competitiveness required to match at X program instead of "quality" of program (as you should in this metric), then Mayo should not be amongst the top.

4) It appears that matching at JHU, UCSF, MGH, & B&W is considerably more difficult than matching at a Tier 2 program i.e. the distance in competitiveness between Tier 1 and Tier 2 vs. Tier 2 and Tier 3, etc. is exponential.

5) I won't get into issues regarding Ivy League / West Coast / California / anti-South/Midwest biases (location biases), which are endemic to the system. E.g. Tufts and BU get a disproportionate set of Harvard matches vis-a-vis any other Tier 1 IM program, and also compared to their ranking. I'll agree that the object of your metric is not to sort out these issues.

Finally, you have way too much time on your hands :D
 
Haha no idea.. but I assume an excel list with the hospital names would be fine.

there are too many people in Wayne's class... and I got some midnight deadlines. might have to come back to this later
 
Hmm... seems like my list is a little more forgiving since I actually got 2.56 for Downstate... and after correcting UPMC to U Pitt, Stony Brook is actually 2.54 in my system, lol. Although it's pretty much pointless anyway; more or less I think it just means they come out very even.

I think I might have to tweak my tier 1.

OK, retweaked my Tier 1, fixed my mistakes in the Stony Brook calculations, and added back in the data for people who just matched for prelims (since Downstate didn't seem to separate this out, I figured it made more sense to include it, although I don't think it changed things much anyway). Now I get 2.58 for Stony Brook and 2.54 for Downstate... lol, I'm going to give up trying to get the two schools to score anything significantly different... curious if it'll come out so similar for pyrois when he runs it.

I think my problem is that I'm being way too forgiving for my tiers....going to have to knock down these schools a bit when I have time lol.
 
A few notes:

1) Home match will end up biasing this ranking. Consider two possibilities: either the home match is significantly easier, or a greater proportion of matches to highly competitive IM programs outside the home institution indicates the competitiveness of the overall school match.

The way I currently process home matches: I take the ratio of matches "above" home to matches "below" home, normalize it to 1 using the average ratio across all schools currently in the system as a "base value," and then use that as a weighting factor for the home match.

Take, for example, Yale. They are one of the few schools that match fewer people into their home program than into other programs (2 match back into Yale in IM, while 6 go to Harvard, 5 go to UCSF, 1 to UPenn, and 1 to Mayo). This actually elevates their matching ability even more, since they don't need to use their home match as a "crutch" or a "backup."

WashU, on the other hand, matches nearly half back into their own program, and fewer to "better" hospitals. Although their match is good, it does seem that they are falling back on WashU's home programs at a relatively high rate.

The modifiers generated by this effect are relatively small though. Usually in the 0.02-0.05 range. But it all adds up.
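
In stripped-down PHP, the home-match weighting works roughly like this -- toy numbers and tier values, since the real script pulls the base ratio from the whole data pool:

<?php
// Rough sketch of the home-match weighting (illustrative only).
// Tier values here assume higher = better; flip the comparisons if not.
function homeRatio(array $matchTiers, float $homeTier): float {
    $above = count(array_filter($matchTiers, fn($t) => $t > $homeTier));
    $below = count(array_filter($matchTiers, fn($t) => $t < $homeTier));
    return $above / max($below, 1);          // guard against divide-by-zero
}

$poolAverageRatio = 1.2;                     // average above/below ratio across all schools (assumed)

// Hypothetical school whose home program sits at tier 3.
$ratio  = homeRatio([4, 4, 3, 3, 2, 2], 3.0);
$weight = $ratio / $poolAverageRatio;        // normalized so the pool average maps to 1.0

// Home matches are then counted with this weight, which only shifts the final
// averaged score by a few hundredths (the 0.02-0.05 mentioned above).
printf("above/below ratio %.2f, home-match weight %.2f\n", $ratio, $weight);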

2) Medical schools which have a particularly strong or weak internal medicine department compared to the rest of the medical school will have a result out of proportion with the rest of the match (e.g. UAB).

You make a good point. This would imply that the 0-4 scale would naturally form a non-linear rating scale. On the other hand, I actually don't think this will affect the actual relative ratings as much as one might think. I'll have to play around with that.

3) I don't agree at all that Mayo is a Tier 1 program. If considering competitiveness required to match at X program instead of "quality" of program (as you should in this metric), then Mayo should not be amongst the top.

I think there is a slight misunderstanding in the way I "tier" the programs. The tiers are purely "merit" based, not based on "competitiveness." The location of Mayo actually inherently lowers its score (the modifier for location bias is calculated from within the data pool). If I were to lower Mayo out of the top tier solely due to location, it would STILL get docked points for location during processing. It's only the final ratings, with modifiers, that I consider correlative to competitiveness.

If you still think Mayo doesn't deserve to be a tier 1 school even based purely on merit, I can easily change it, but to be honest it will only visibly change the ranking of Mayo itself since most other schools match either 1 or 0 students into Mayo's residency programs.

4) It appears that matching at JHU, UCSF, MGH, & B&W is considerably more difficult than matching at a Tier 2 program i.e. the distance in competitiveness between Tier 1 and Tier 2 vs. Tier 2 and Tier 3, etc. is exponential.

Although the weightings for tiers 1, 2, 3, and 4 are not quite exponential, they aren't linear either. The relative weightings are also calculated based on information from the data pool and change slightly as I add more schools (which is why I keep updating the numbers for other schools).
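
One way to get data-driven, non-linear tier weights would be something like the below -- this is only meant to show the flavor of it, not the actual formula:

<?php
// Illustration of data-pool-derived tier weights (made-up counts, not the real formula).
// Idea: the rarer a tier is across all of the pooled match lists, the more a match there counts.
$pooledMatches = ['tier1' => 40, 'tier2' => 110, 'tier3' => 260, 'tier4' => 390];
$total = array_sum($pooledMatches);

$weights = [];
foreach ($pooledMatches as $tier => $count) {
    // Inverse-frequency weighting: neither linear nor strictly exponential,
    // and it shifts a little every time another school's list joins the pool.
    $weights[$tier] = log($total / $count);
}
print_r($weights);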

5) I won't get into issues regarding Ivy League / West Coast / California / anti-South/Midwest biases, which are endemic to the system.

I have a nifty trick for attempting to approximate these self-selection biases. I first calculate if there is an in-state bias for a school. If there is, I add a plus modifier to all schools within the state based on how heavy the in-state bias is.

For California schools, this boosts the ratings of most in-state schools by almost 0.5.
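
In sketch form (the 25% baseline and the state list are placeholders, not what the script actually uses):

<?php
// Sketch of the in-state bias modifier (the threshold below is made up).
function inStateModifier(array $matchStates, string $homeState): float {
    $inState  = count(array_filter($matchStates, fn($s) => $s === $homeState));
    $fraction = $inState / count($matchStates);

    $expectedShare = 0.25;                         // assumed "no bias" baseline
    return max(0.0, $fraction - $expectedShare);   // added to every program in that state
}

// Hypothetical heavily in-state list, e.g. a California school.
$states = ['CA','CA','CA','CA','CA','CA','CA','NY','MA','TX'];
printf("in-state modifier: +%.2f\n", inStateModifier($states, 'CA')); // in the ballpark of the ~0.5 boost above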


Finally, you have way too much time on your hands :D

You bet I do:p
 
Ouch man, at least make a list for him or something! It must be a pain to locate all the IM matches since they're jumbled up, and to type them all individually into his program since you can't copy/paste anything. At least I'm assuming he types 'em in =P.

Yeah, I have to enter each location at least once, but by now, most schools are stored in my autocomplete:p
 
weren't you going to release the program?

I will, but I need to create a mini-user interface for it first.

Right now it's all in a php script that runs on my server. I currently input all the data directly into the database.

Since I don't want to give people access to my database (which has a lot of my other projects in it:p), you guys will have to wait until I find a chunk of time to put something together so you can input match lists and have the results be automatically saved/processed.
 
The way I currently process home matches: I take the ratio of matches "above" home to matches "below" home, normalize it to 1 using the average ratio across all schools currently in the system as a "base value," and then use that as a weighting factor for the home match.

Take, for example, Yale. They are one of the few schools that match fewer people into their home program than into other programs (2 match back into Yale in IM, while 6 go to Harvard, 5 go to UCSF, 1 to UPenn, and 1 to Mayo). This actually elevates their matching ability even more, since they don't need to use their home match as a "crutch" or a "backup."

WashU, on the other hand, matches nearly half back into their own program, and fewer to "better" hospitals. Although their match is good, it does seem that they are falling back on WashU's home programs at a relatively high rate.

The modifiers generated by this effect are relatively small though. Usually in the 0.02-0.05 range. But it all adds up.

Excellent. I ought to remind myself that the flip side, of course, is that your home program really is that good.

I think there is a slight misunderstanding in the way I "tier" the programs. The tiers are purely "merit" based, not based on "competitiveness." The location of Mayo actually inherently lowers its score (the modifier for location bias is calculated from within the data pool). If I were to lower Mayo out of the top tier solely due to location, it would STILL get docked points for location during processing. It's only the final ratings, with modifiers, that I consider correlative to competitiveness.

If you still think Mayo doesn't deserve to be a tier 1 school even based purely on merit, I can easily change it, but to be honest it will only visibly change the ranking of Mayo itself since most other schools match either 1 or 0 students into Mayo's residency programs.

This entire analysis, however, is based on some subjective notion of "quality of match." I see it more as an analysis of "competitiveness" vs. "quality." I think people would be more impressed by 50 Yale matches than by 50 UAB matches, even though arguably the programs are not that different in actual residency quality. I submit that competitiveness factors matter more to premed/allopath impressions than actual merit does.

Would correcting my statement above with "Mayo shouldn't be in the 'elite tier' " make any difference?

I have a nifty trick for attempting to approximate these self-selection biases. I first calculate if there is an in-state bias for a school. If there is, I add a plus modifier to all schools within the state based on how heavy the in-state bias is.

For California schools, this boosts the ratings of most in-state schools by almost 0.5.

Careful, now we're getting into a complex model that rivals climate modeling. :D
 
i'll just go back to blindly looking at how many rads matches are listed ;) :thumbup:
 
Excellent. I ought to remind myself that the flip side, of course, is that your home program really is that good.

Technically, if the home program is "really that good," it wouldn't be "worse" than the third-party programs people are matching into, and that would flip the modifier as well.

If there is an in-state preference, it would be accounted for by the in-state bias calculations.
 
i'll just go back to blindly looking at how many rads matches are listed ;) :thumbup:

Nothing wrong with that, that's how I used to do it:p
 
Nothing wrong with that, that's how I used to do it:p

although I would like to play around with this thing... it is, sadly, something I would consider "fun" :hardy:
 
although I would like to play around with this thing... it is, sadly, something I would consider "fun" :hardy:

Let's all please take a second to take UMP's sentence out of context.
 
Let's all please take a second to take UMP's sentence out of context.

ha, stroking it to the match lists....

8=======:barf:

UTI
 
How can anyone possibly care about the results of this "experiment" without asking about the premises?

They've done studies, you know. 60% of the time, it works every time
 