2012-2013 Rank Order List Power Score and Compilation Thread

Also, strangely, NRMP only lists 4 PGY-1 spots for Lehigh Valley? Is this thing updated? I mean, I assumed the NRMP numbers should be valid at this point, right?

 
Out of curiosity, I went into NRMP to see how many EM categorical programs are participating in the match; apparently there are 164, but our list only has 154. Also, I found this weird EM category:

EM/SACM -> Emory has one, Central Michigan, Baylor, etc., with only 1-2 spots per program... anyone have any idea what this SACM is? Also, there are some international EM residency spots, again 1-2 per program. Hmmmm

SACM stands for the Saudi Arabian Cultural Mission to the US. These are spots for Saudis that are offered outside the match and, I believe, paid for by the Saudi government. I talked to a couple of people interviewing for the spot at Emory; the deal is they do residency in the US and then go back to Saudi Arabia to develop EM there.
 
The SACM is some kind of residency program with Saudi Arabia. I noticed it when ranking Maryland.
 
"n" is now up to 80, or nearly 5% of all applicants this cycle! :thumbup: Again, feel free to PM me ROLs and I will put your data on the spreadsheet.
 
still won't fix your sample bias ;)

Ah but my answer to that is--the SDN sample is the top 5% of applicants, and therefore the SDN members can see how they are doing compared to the people most likely to take their spot! :p
 
BREAKING NEWS:

Denver takes the lead with an incredible 111 POWER SCORE!
 
If the power score comes up amongst applicants and programs on the interview trail this year, my head will explode.
 
Medical College of Georgia and Georgia Health Sciences are the same program.
 
Probably doesn't make a big difference but Kentucky has 10 spots now.
 
At the current number of rank columns, you could probably just switch to a tally of the number of rankers at each spot 1-9, plus a 10+ bucket (or go to sixteen, if you must), to save space. That would probably simplify the entry for the calculation to a SUM versus a COUNT function, unless you've been hand-counting.
 
Not a bad idea, but it would make the "average rank" function a lot harder.
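
A minimal sketch of what that tally-based average might look like (the column layout here is hypothetical, not the actual sheet). The weighted mean itself is straightforward; the catch is that the 10+ bucket collapses exact ranks, which is where the "harder" comes in:

```python
# Hypothetical tally row: tallies[k] = number of rankers who put the
# program at rank k+1, with the last bucket collapsing "10 or worse".

def average_rank(tallies):
    """Weighted mean of ranks from a tally row (spots 1-9 plus a 10+ bucket)."""
    total_rankers = sum(tallies)
    if total_rankers == 0:
        return None  # no submissions yet for this program
    # Each bucket contributes rank * count; the 10+ bucket is treated
    # as exactly 10, which understates true outliers (e.g., a 17).
    weighted_sum = sum((spot + 1) * count for spot, count in enumerate(tallies))
    return weighted_sum / total_rankers

# Three people ranked the program 1st, one ranked it 3rd:
print(average_rank([3, 0, 1, 0, 0, 0, 0, 0, 0, 0]))  # -> 1.5
```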
 
MAJOR UPDATE:

There have been several suggestions to organize the spreadsheet in a more "statistically relevant" and less subjective way to make it more useful for this class and for people in the future. As I aim to please, this has been done--the admittedly brilliant "POWER SCORE" is no more. It will be missed.

It was decided that the only really statistically meaningful categories are the mean ROL ranking and the percentage of the interview class that has answered the survey (as a measure of confidence in the spreadsheet; i.e., the more people from any program's interview class that have contributed, the more confident we can be in the ROL average). The spreadsheet is now sorted by ROL program average, with a column next to the mean stating the percentage of an individual program's interview class that has participated in the spreadsheet (I pulled the actual interview numbers from last year from FREIDA). For programs that did not list their interview numbers on FREIDA, I estimated based on the average interviews per position of the programs that did post their info (which averaged 12 interviews per spot). The more people that post their ROLs, the higher the confidence in the mean will be.
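
For anyone who wants the mechanics, here is roughly what those two columns compute. This is a sketch only: the function names and example numbers are made up, and the fallback logic just applies the 12-interviews-per-spot average described above.

```python
# Sketch of the two columns described above (illustrative numbers only).

INTERVIEWS_PER_SPOT = 12  # average from programs that did post FREIDA numbers

def mean_rol_rank(ranks):
    """Plain average of the submitted rank positions for one program."""
    return sum(ranks) / len(ranks)

def participation(responses, interviews=None, spots=None):
    """Fraction of a program's interview class represented in the sheet."""
    if interviews is None:
        # Program didn't list interview numbers on FREIDA: estimate them.
        interviews = spots * INTERVIEWS_PER_SPOT
    return responses / interviews

# Hypothetical program: 8 spots, no FREIDA listing, 14 submitted ranks.
ranks = [1, 2, 2, 3, 1, 5, 4, 2, 1, 6, 3, 2, 7, 1]
print(round(mean_rol_rank(ranks), 2))                # -> 2.86
print(round(participation(len(ranks), spots=8), 3))  # -> 0.146 (~15%)
```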

Certain programs already have almost 15% of their entire interview class represented, which I think is pretty cool!

To keep it interesting, I added a column that projects the number of people who theoretically have ranked a particular program first based on the current info in the spreadsheet. This is of course the single number that would be the coolest to know. The higher the percentage of a program's interviewees that participates, the closer that projection will be to reality.

If you have ranked a program that doesn't have a lot of rankings yet, please PM me your list or post it on the ROL thread!

Here is the link:

https://docs.google.com/spreadsheet...YkNVZGtVUmZwY2JDZVpuUzJwcVE&usp=sharing#gid=0
 
I think this whole thing is pretty one-sided. You should get programs to anonymously submit their rank lists as well.
 
I think you should have an "unranked" list for those programs that have yet to cross the 3% response threshold.
 
Ian, I am not going to say that you created a list of which programs are strong/competitive, but what you created is a list of which programs are likable and popular.

I can safely bet you $100 that this list will be used next year when applicants are gauging where to apply as they shell out their money for ERAS.

The only popularity trend I am really seeing, though, is the Southwest block (Colorado, New Mexico, Utah, and Cali) along with some of the southern big names.
 
So Denver is the most "likable" program?
 
Great suggestion, I've implemented it.

Just updated the list with 10 more rank lists.

Now that the number of rank lists submitted is getting large, it might be better to change the way you list ranks so that the width of the sheet doesn't get out of hand. I was working on a change, but it doesn't take into account whatever rank lists you most recently added. I'll PM you a link to the modified list so you can check out what I mean.
 
Great, thanks!
 
OK, I did this. The new link is:

https://docs.google.com/spreadsheet...YkNVZGtVUmZwY2JDZVpuUzJwcVE&usp=sharing#gid=0
 
Yeah, this list seems helpful.
 
I was asked to explain how I calculated the (very wide confidence interval :)) projected #1 ranks. I took the number of #1 ranks so far, took into account the percentage of a particular program's total interviewee pool that has responded, and calculated how many #1s there would be if the numbers currently reported were extrapolated to the entire interviewee pool.

For example:

-- Program In N Out Burger interviewed 200 people this year
-- 10% of the people that interviewed have submitted their answer to the spreadsheet (20 responses)
-- Of those 20 responses, 5 ranked In N Out first (25%)
-- Projecting for a 100% response rate, the total number of students who would rank In N Out first is 50 (25% of 200).

Make sense?

Of course, if a program has no number-one rankings yet, then this number projects to zero (which is unlikely in reality). However, the more ROLs that are submitted, the better the prediction and the smaller the confidence interval become.
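
In code form, the projection is just the observed #1 proportion scaled up to the interview pool. A sketch using the made-up In N Out numbers from above:

```python
def projected_number_ones(first_ranks, responses, interviewees):
    """Scale the observed #1 rate up to the full interview pool."""
    if responses == 0:
        return None  # no data yet, so no projection
    return (first_ranks / responses) * interviewees

# The In N Out example: 200 interviewees, 20 responses, 5 of them #1s.
print(projected_number_ones(5, 20, 200))  # -> 50.0
```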
 
"n" is now 102. Thanks for all the ROLs guys! keep them coming!

Some programs now have had almost 20% of their interview pool respond, which is amazing!
 
This is amazing. Really wish my top 3 didn't have such high #1 projections though... :(
Come on, people, keep them coming (and hopefully lower the projected number of #1s for some of these programs...)
 
Can you add something that shows the current n somewhere, maybe as a count next to the total number of #1 ranks?
 
Added at the top of the sheet.

Special thanks to EMquest3 for being so helpful with the spreadsheet, the new design is all him.

Thanks to everyone as well for all the kind PMs.

P.S. It's so funny: I get all excited when a program "graduates" into the top group because it has enough respondents! On the flip side, I feel sorry for the programs with only one ranking, and it's like a 15 :(
 
How about double-weighting a 3, triple-weighting a 2, and quintuple-weighting a 1?

Or perhaps treating anything higher than a 10 as a 10 (like picking up after 8 strokes)?

The current formula does not fairly take into account programs where someone interviewed but did not rank.

Sorry, I do not mean to be overly critical. Thanks for all your work putting it together. It is interesting information.
 
No worries, not critical at all!

I did that with my original power rankings (weighting numbers differently), but at the end of the day, I decided that if the "n" is high enough, then the average will still be a good representation of the program. I can't decide about the "picking up after 10" thing -- for the person who ranked a program 17th, when given the choice between X and Y, they still ranked X 17th, so that 17 carries real information. But it's true that big outliers can really skew a program's average -- I briefly considered using the median instead of the mean to account for outliers, but the rankings didn't pass the smell test when I tried it.

It's true that the formula doesn't take non-rankers into account, but I have to think that at least most people rank most programs.

Thanks for the suggestions, I'll think more about it!
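
For anyone who wants to play with the variants being discussed, here is a quick sketch. The weights and the cap are the ones suggested above; none of this is actually in the sheet:

```python
from statistics import mean, median

def capped_mean(ranks, cap=10):
    """'Pick up after 10': treat anything worse than the cap as the cap."""
    return mean(min(r, cap) for r in ranks)

def weighted_mean(ranks):
    """Suggested weighting: 1s count 5x, 2s count 3x, 3s count 2x."""
    weights = {1: 5, 2: 3, 3: 2}
    total_weight = sum(weights.get(r, 1) for r in ranks)
    weighted_sum = sum(r * weights.get(r, 1) for r in ranks)
    return weighted_sum / total_weight

# A cluster of strong ranks plus one 17th-place outlier:
ranks = [1, 1, 2, 3, 17]
print(mean(ranks))                     # 4.8  -- the outlier drags the plain mean
print(median(ranks))                   # 2    -- ignores the outlier entirely
print(capped_mean(ranks))              # 3.4  -- the outlier counts, but only as a 10
print(round(weighted_mean(ranks), 2))  # 2.44 -- top ranks dominate
```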
 
Just looking at the list right now, I'm starting to think that the mean rank is not doing an adequate job. When I look at the rankings for Denver or Indiana, my gut says they should be above the programs currently ranked ahead of them. Having one bad ranking takes too big a toll, IMO.
 
But this is supposed to be based on actual data, not gut feelings.

I'm not saying I disagree with you; heck, my gut tells me that a third of the list is pretty much *ss backwards, but this is supposed to be based on numbers only.
 