2012-2013 Rank Order List Power Score and Compilation Thread

But this is supposed to be based on actual data, not gut feelings.

I'm not saying I disagree with you, heck, my gut tells me that 1/3rd of the list is pretty much *ss backwards, but this is supposed to be based on numbers only.

Yes, I'm saying that looking at the numbers, my gut says the formula could use improvement. Call it the smell test if you don't like "gut". If a program has twice as many top rankings as another program has total rankings, it doesn't make sense that the former would be ranked lower simply because one or two respondents didn't rank it highly.


It seems that the only non-controversial way to improve the accuracy is MOAR RANK LISTS!!! :) come on all you lurkers out there--PM me your lists! Thanks to the many who have!
 

The data will necessarily be skewed, because of the different number of programs ranked by each person, as well as the individual decision to not rank certain places at which they interviewed.

I wonder what the chart would look like if everyone ranked 10 and only 10 programs.

Do you spend more time thinking about your choice #1 versus your choice #2 or your choice #11 versus your choice #12? Since both decisions mean one place in the averages, the decision has equal impact on the results. Is that fair or reasonable?
 

Hey guys--

After discussing it with several people, I decided that the best course of action would be to take any rankings that are above 10 and roll them into a "10+" category.

This causes several things to happen to the mean:

1. The best possible mean is now "1" and the worst possible is now "10"
2. Any rankings of 10 or worse will pull the program's average toward "10" and no further
3. Outliers will no longer penalize a program unduly.
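The capping rule described above can be sketched in a few lines of Python (the function name and sample ranks below are hypothetical illustrations, not the actual spreadsheet formula):

```python
# Minimal sketch of the "10+" capping idea: any rank past the cap
# counts as the cap itself before averaging.

def capped_mean(ranks, cap=10):
    """Average rank after rolling every rank past `cap` into one bucket."""
    return sum(min(r, cap) for r in ranks) / len(ranks)

ranks = [1, 2, 2, 3, 18]          # one outlier at 18
print(sum(ranks) / len(ranks))    # raw mean: 5.2
print(capped_mean(ranks))         # capped mean: 3.6 (the outlier counts as 10)
```

With the cap, a single person ranking the program 18th drags the average far less than under a raw mean.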
 
Hey Ian, if you are really feeling ambitious... I don't know how you've been treating ROLs with alphabetical lists etc. after the ranked programs... but if you were leaving them out (I think you were), you could go back and add a "10+" in their column if posters had listed 9 programs before the alphabetical list... unsure how many this would apply to, but just another way to improve the data without getting moar submitted lists...
 

Good idea! So good that I had already done it :)
 
Skewed data or not, this spreadsheet is worth its weight in gold * 10. Good work, Ian! :thumbup:
 
Excuse the noob question, but can a program that has say 5 EM residents in each class interview the same number of students as a program that has 15? Are there rules about interview invites?
 

There aren't any rules about this; each program decides how many interviews it needs or wants to fill its PGY1 class. In theory, a more competitive program might interview fewer; however, they may choose to interview more to have more of a pick of the litter, so to speak. That's my thought anyway. Most programs end up interviewing approximately 12 students per spot in their class, i.e., a program with 10 spots will interview approximately 120.
 

Thanks man, appreciate the answers and hard work with the spreadsheet. I think it's really interesting to look at.
 
It's fun to check, helps relieve some boredom haha. Hopefully people don't use it as a "best program list" but rather a "most popular based on SDN members in the 2013 class" or something
 
Been interesting watching the evolution of this list.

Nice work.
 
The compilation sheet just had its first >20% response rate with Utah!

Pretty cool, I think, to have one of every five ranks in the country for that program.
 
He's in there updating now. I just saw it! It's like watching history.

Could you put the link in your sig? That way we don't have to search for it on the forum page. Or will the site remain the same so we can just bookmark it in our browsers?
 

It's in my sig now.

Great, this is the only thing I'm ever going to be known for :D

But seriously, over 1,000 people have opened the spreadsheet just today. Changing lives in the most irrelevant possible way, one page view at a time. :cool:
 

Utah isn't getting one in five ranks in the country it's getting one in five on SDN. This is the problem with your list. People are going to extrapolate things that aren't true.
 

Not quite... with the number of people who have answered the survey with a rank for Utah, one out of every five people in the country who interviewed at Utah HAS submitted how they ranked it. That's the truth based on last year's interview data. Whether or not it is a representative sample is of course debatable, but that's quite a sample size for one program. Actually, with the last updates I put in, 25 of the 96 interviewed have responded, over 25%.

I think you interpreted my quote as saying that one in five of every rank lists on SDN includes a Utah rank, which is not true.

Also, quite a few people have been getting their friends to submit their ROLs, which makes it a little more unbiased.
 
maybe you should just have a new thread that is stickied up top called the 2012-2013 ROL Survey or something like that.

I don't know how to do that, pretty non-forum savvy actually!

however, for what it's worth, the link from the very first post will always work.
 
So I think the way the SDN bias is showing itself is that there are far more responders for popular programs (i.e., SDN'ers are more likely to get interviews to the more competitive programs). Where they rank the programs that they report is probably not necessarily biased in a big way.

Conversely, random "less popular" programs are not getting anyone on here who even interviewed at them, much less ranked them highly, e.g., Genesys, with ONE person out of 140 who interviewed there.
 
MAJOR UPDATE:

I think we have enough ROLs to change the way the programs are ordered. I think a more accurate picture overall than the mean to gauge a program's popularity is to order it by the percent of people that rank it in their top 3. There is a new category to this effect. By "smell test" I feel the rankings are more accurate this way. Tell me what you think!
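The new ordering metric can be sketched like so (a hypothetical illustration; `pct_top_n` and the sample ranks are made up, not the actual spreadsheet formula):

```python
# Sketch of the "percent ranked in top 3" ordering metric.
def pct_top_n(ranks, n=3):
    """Fraction of submitted ranks placing the program at position n or better."""
    return sum(1 for r in ranks if r <= n) / len(ranks)

ranks = [1, 3, 5, 2, 9]    # made-up submissions for one program
print(pct_top_n(ranks))    # 3 of 5 ranks are in the top 3 -> 0.6
```

Unlike a mean, this metric is unaffected by how far down the rest of the ranks fall, so a couple of low outliers can't sink a program that many people loved.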
 

I like it! Hopefully that's not just the insomnia speaking.
 

So the majority of feedback I'm getting via PM (you guys can post suggestions on this page too!) is that people like the average rank better.

The people have spoken, and I have obeyed. It's your rank lists after all! :)
 
Hey all! We hit "n" of 150 today, thanks for all the submissions!

If you are on here and haven't submitted your list, please do--I will keep it completely anonymous and delete your PM after I update the list. The following programs have a response rate of less than 5%, so if you ranked any of these, please contribute. Thanks to everyone!

New York Hosp Med Ctr of Queens/Cornell Univ Med Coll
Earl K Long Med Ctr/Louisiana State Univ (Baton Rouge)
Hackensack University Med Ctr (New Jersey)
West Virginia Univ
Geisinger Health System
Univ of Mississippi Med Ctr
Texas A&M College of Med-Scott and White
Temple Univ Hosp
Southern Illinois Univ Sch of Med
Christus Spohn Memorial Hosp
Albert Einstein College of Med (Jacobi/Montefiore)
New York Med College (Metropolitan)
Akron General Med Ctr/NEOMED
New York Univ Sch of Med
Lincoln Med and Mental Health Ctr Program
Boston Med Ctr
Univ at Buffalo
Baylor College of Med
Louisiana State Univ (New Orleans)
Mercy St Vincent Med Ctr/Mercy Health Partners
Univ of Connecticut
Univ of Chicago
Johns Hopkins Univ
Louisiana State Univ (Shreveport)
SUNY Upstate Med Univ (Syracuse)
Newark Beth Israel Med Ctr
Univ of Louisville
Albert Einstein Healthcare Network (Philadelphia)
Brooklyn Hosp Ctr
Univ of South Florida Morsani
Central Michigan Univ (Saginaw)
SUNY at Stony Brook
Univ of Kansas Sch of Med
Kern Med Ctr
U of Oklahoma College of Med-Tulsa
Univ of Toledo
Atlantic Health (Morristown)
Lehigh Valley Health Network/Univ of S Florida College of Med
UMDNJ-Robert Wood Johnson (New Brunswick, New Jersey)
Univ of Arkansas
Florida Hosp Med Ctr
Genesys Regional Med Ctr
 
Just adding my dissenting opinion that I like something that gives weight to higher ranks over plain average rank.

Maybe it should be a little like olympic judging: throw out the low ranks. I think high ranks are very meaningful while bottom of the list ranks have a lot more to do with just bad fit, bad location, or a random bad experience.
 
Food for thought...
It'd be really interesting if the NRMP made this data public in a similar manner... with actual ROLs that were submitted, obviously.
 

I know, I wish they would! But programs would never allow it, as they would all like to maintain a veneer of "we're an awesome program and everyone loves us!" which would dissipate quickly if data showed that nobody ranked them in their top 3!
 
I think you might be on to something here.

I don't think "throwing out all the low rankings" is the right answer, but I do think that setting the "rank+" line at 5 might be a good idea (i.e., the categories go from 1 to 5+ instead of 10+).

Several reasons for this--

1. Nearly all applicants rank at least 4-5 programs, but after that the spread gets really big, with some ranking 5 to 6 and some ranking 18-20. A 6 ranking for the former person is kind of the same as a 20 ranking for the latter (i.e., both last) but each would affect an average very differently.

2. Quite a few people who submit their lists add a "no particular order" addendum to their list, usually after 6-10 but never before 5. Changing the final category to 5+ would "equalize" these discrepancies in reporting ROLs.

3. The lower ranks would be appropriately diminished in their contribution to an average, while maintaining the accuracy of keeping the top numbers actual.

4. As you mentioned above, the top of people's rank lists are probably scrutinized by the applicants a lot more--there's not too much information to be gathered from someone ranking a program 7th on a list of 8 vs 16th on a list of 20.
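Reason 1 can be illustrated with a quick sketch (all numbers below are made up): a last-place rank of 6 on a 6-item list and a last-place rank of 20 on a 20-item list mean the same thing to the applicant, but they hit a raw average very differently, while a 5+ cap treats them identically.

```python
def mean(xs):
    return sum(xs) / len(xs)

def cap5(rank):
    """Roll any rank past 5 into the '5+' bucket."""
    return min(rank, 5)

a = [2, 3, 6]    # three ranks for a program; the third was last on a 6-item list
b = [2, 3, 20]   # same program, but that respondent ranked 20 programs
print(mean(a), mean(b))                                    # raw means diverge
print(mean(list(map(cap5, a))), mean(list(map(cap5, b))))  # capped means agree
```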

This would be a big change that would obliterate some data, so I want to get some feedback before I make this an actual change. Here is what the data would look like if this change was made:

https://docs.google.com/spreadsheet/ccc?key=0AnzZUifXW_SgdGtmQmFWMXZjb21kd253UklCNFJkQnc#gid=0

I admit I don't think the data loses much and it might be a big improvement. Thoughts?

Ian
I like it. That's a much better stated version of the idea I was kicking around.
 

I actually like the average ranking to be on a scale of 1-10.
I know your n for most programs is too small to be predictive/valuable, but I still prefer that to lumping every program onto a 1-5 scale.
Plus, 10 interviews is where the bar graph peaks, per NRMP Charting Outcomes 2011.
 
I like 1-10+, it gives more of a spread and I like to see how people ranked programs.
 

Agreed, to me there wasn't a big gap between programs 2 and 5, but a big difference between 5 and 10 (would be thrilled at #5, at #10 "hooray I'm in EM")
 
Agree with 1-10 still. Difference between 4th and 5th program is huge compared to 4th and 10th program which should be reflected. Still doing some shielding with the 10+ concept.
 
I think 90% of applicants will end up in their top 5. It's a little ridiculous to let 5-10 influence the "average rank" of a program. I agree that it's interesting to see 5-10 as well, but I think the average rank should represent the most relevant information.

That being said, an average on a 1-5 scale is a legitimate system that actually has significance. Otherwise, the programs will all end up with similar "average rank" numbers as more ROLs are included.
 
I think the overall match rate for EM was only 89% last year... so no way 90% of applicants get top 5.
When you take into account IMGs, DOs, and below-par students, you may have a point. But seriously, most of the SDNers who even care to look at the spreadsheet are typically those with pretty decent stats.
 

I think it's interesting to know how people actually ranked programs, since most of the rank is personal preference, fit, family, etc and not the "strength" of the programs. What may be a top program pre-interview may not even get ranked when the time comes.

Rank lists are personal preference.
 

From charting outcomes 2011, 5 ranks put applicants at 80% chance of matching, 7 at 90%, and 9 at 95%. While it's not necessarily a valid statement that 80% will match in their top 5, etc., 7 or 9 and then n+1 onward seems like it would be a better equilibrium point to appropriately value high vs low-ranked programs.

Edit: And this would also focus on the list of programs that most applicants paid attention to when putting together their lists, at least if the general idea was "rank enough programs that I'll match and be happy where I end up," which I assume is the case.
 
Good discussion. I think the essential question is at which point in one's rank list does someone transition from "would love to match here" to "well, at least I'm going to be a doctor." I know for me I would be happy at 1-6 and 6-10 were just ok.

Because "ranking" the programs really is a matter of the subjective "would love to be here" vs "not really", i.e., a program where more people would love to be there after interviewing is by definition the more popular one, which is kind of what I think the chart can get across if the average is done the right way.

I agree with above posters that it is still fun to see all the rankings, but I do think that the average should be capped at 6+ (assuming that my transition point generalizes well, which it seems to based on looking back at people's rank lists). So I might keep all the numbers in there for the sake of the data but just do a weighted average where all the ones at 6+ are rolled into one, like the second spreadsheet.

Thoughts?

Edit: Or an easy compromise would be just to have 1-7+ (which would assume that 1-6 are good and 7+ are "rather not but will take it").
 

I really don't think it matters. You're not going to make everyone happy, and people will get mad when they see program x, y, z on top of the chart.
 

truth!

I think the point of all this is March 15th can't come soon enough!! :rolleyes:
 