Competitive Specialties Matched By Medical School (Plots and Data)

Sep 18, 2019
113
32
Status (Visible)
  1. Pre-Medical
This will always be affected by personal choice and, to be honest, is kind of an exercise in futility. Plenty of people have no interest in the "competitive" specialties regardless of how strong their application is. Sure, it's common for someone to want ortho until boards come back and then magically like family med, but there's also a large chunk of people who want to get through med school, go into primary care back in their home town, and call it a day.

It's pretty common knowledge which schools have connections that allow a somewhat easier road into these specialties if you choose, but no matter how you crunch the data, there is no way to account for personal preference vs. dual-specialty applying vs. SOAPing and all that, because schools just don't give that information out, for better or for worse.
What is "soaping"?
 

REL

Senior Member
15+ Year Member
May 23, 2005
1,731
1,892
This doesn't even include some of the most competitive specialties like IR, urology, and ophthalmology. Leaving them out is just laziness and makes it more useless.
I agree that expanding this to the 9 most competitive residencies would make it more accurate. But the DOPEN creator did indicate they were not including some of them because they account for a smaller total percentage of matches. What is here in DOPEN does seem to speak to the fact that certain UGME programs expose students to these residency options and support their students in applying. Selecting a specialty is often the product of what students are able to see during their clinical years in med school.

I can tell you that some of the newer med programs have risen to be consistently in the upper quartile nationally based on 3-year NBME data in the categories of research, Step 1, Step 2, and altruistic activities. That means some programs that were traditionally in the upper quartile are no longer there. Could this also be true for the residency specialty rankings? It is just a question, pure speculation on my part, and not something in my wheelhouse to speak deeply about. But things certainly are changing.

I suspect that this DOPEN initiative is a good starting point to open a door to some knowledge for applicants to med school programs, helping them understand the possible outcomes of those programs. All that applicants have now is class GPA and MCAT entry data, which have little to do with clinical specialty outcomes four years later. Applicants to MD schools are preyed upon by the for-profit annual rags (publications) that sell their med school rankings; the data in these rags are not based on good data points or a good data-acquisition process. Applicants can't have the licensure exam scores since they are not individually releasable. I guess they could gather all of the annual Match lists and try to make a determination, or they could use DOPEN or some similar derivative.
 

littlecow

5+ Year Member
Jan 22, 2016
29
94
You could combine your score with a match rate into T4, T20, or T25 IM programs. I wouldn't use Peds since I don't think it's that competitive even at the top levels.
 

Lucca

Will Walk Rope for Sandwich
Staff member
Administrator
Volunteer Staff
7+ Year Member
Oct 22, 2013
8,506
19,469
City of the Future
Status (Visible)
  1. MD/PhD Student
Very interesting! As someone who has spent a lot of time making graphs for this website: we all know how difficult it is to interpret this data, but doing exercises like this can sometimes reveal surprising things, or give some substance to what people mean when they say a school "matches well."

I agree with @efle that it's tough to use any number of specialties as a surrogate measure and that it is preferable to look at individual specialties on a case-by-case basis. If someone did the footwork of doing it for every specialty, we could have something like a comprehensive assessment, but that would probably be more work than it's actually worth. Something easier is to look at large fields with a lot of people matching into them and then at the academic prestige of those matches specifically, as @efle suggested. This does not account for personal preferences, variability, who was able to get what interview, and so on, but it is a much "flatter" comparison than looking at the proportion going into X specialties, IMO, which is more sensitive to personal preferences and year-to-year variability. For example, Rad Onc was among the most competitive specialties from 2005-2011 and is now among the least competitive for USMDs. Anyone looking at data spanning 2005-2020 all at once would have a tough time seeing the forest for the trees.

To give an example of looking at one specialty at a time, here are 11 years of IM matching data from Stanford in two graphs: one by raw counts matched at each institution and another by proportion of all matches over the 11 years.
[Attachment: Stanford IM matches, raw counts by institution]

[Attachment: Stanford IM matches, proportion of all matches by institution]
 

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
It's interesting to see the home match rate quantified compared to other big hospital systems, too. Someone earlier mentioned needing to control for that. I don't think I would, though; I'd view it as a central part of the "match advantage" we're discussing.
 

Lucca

Will Walk Rope for Sandwich
Staff member
Administrator
Volunteer Staff
7+ Year Member
Oct 22, 2013
8,506
19,469
City of the Future
Status (Visible)
  1. MD/PhD Student
The IM spreadsheet at least always has a section on the average ROL placement of institutions for applicants who ranked them on the spreadsheet that year, and then an ad hoc ranking. That might actually be more useful than Doximity because it reflects the preferences of actual students instead of peer institutions.
 

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
The IM spreadsheet at least always has a section on the average ROL placement of institutions for applicants who ranked them on the spreadsheet that year, and then an ad hoc ranking. That might actually be more useful than Doximity because it reflects the preferences of actual students instead of peer institutions.
I wouldn't trust a single bit of data from the spreadsheets, my dude
 

darkamgine

5+ Year Member
Nov 25, 2014
23
108
Status (Visible)
  1. Pre-Health (Field Undecided)
If people think IR, urology, and ophtho should be added, I may add them after some time. I took a quick look at the numbers, and it would change the competitive-specialty share from 9-10% to about 13-14%, a 40-50% increase. I imagine the relative rankings of most schools wouldn't change much, but if people think that makes this information more useful/practical, I can try it next week. I think it would be more appropriate to create 3-4 tiers of med schools based on their DOPEN match rate (top quartile, above average, below average, and bottom quartile). And just because a school has a lower match rate into DOPEN does not mean it is worse or lacks resources; it could be that it recruits students into primary care or other specialties.

To those who say this is not useful at all: you are welcome to think that. These are competitive specialties where 20% or more of US MD applicants go unmatched each year. If you are not interested in these specialties (as most med students are not), this list is not targeted at you. And if you think match lists do not mean anything because we do not know what percent of each school's students wanted to go into these specialties, then I guess I cannot convince you that match lists mean anything at all. All I claim is that match lists are a history of a med school's outcomes, and where appropriate, it is good to know the history even if we do not know the intentions of the students.

I see some comments asking whether there is a way to rank match lists in other specialties such as IM. I am probably not going to do it, because I can already see the criticism about preference being even more exaggerated. The other problem I see is inbreeding: because I expect the highest number of matches to be at the home institution, the rank list will look very similar to a ranking of the affiliated hospital residencies. However, if you try to adjust for inbreeding, I do not think you get an accurate picture either, because staying at the home institution is part of the draw of a med school. Lastly, because IM is less competitive, the match is more about choice as opposed to matching or not matching at all. Matching into a competitive specialty is a career-changing event, whereas matching at a slightly higher-prestige IM residency may or may not change someone's career (probably not). If someone wants to try it, I would be interested to see the results, but I will not be doing it.
 
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
Interesting chart! More than 60% of people with a 250+ select a less competitive specialty, however, so to really read a match list, you'd have to look at where people were matching in big fields like IM

I've always been too lazy to do it, but someone could in theory put together a match-list assessment that looks at how many people land in the "top X" programs in their specialty. It will probably be the same group of big research centers that overperform the average, but it would be interesting to quantify the advantage
I have done exactly this on a small scale for a few schools that some accepted students were deciding between (e.g., Case, Einstein, and Emory; little to no difference in students matching top-15 research-based IM programs over a 4-5 year average).

Also, for those interested in a serious career in academic medicine, I usually point to the Doximity collaboration paper published in Academic Medicine that analyzed the long-term career performance of 600,000+ graduates and presented a T25 list based on career outcomes (rather than "student selectivity" scores and other gameable nonsense):


I would be happy to do a similar match list analysis on a larger scale, if there is interest in such a thing. I am not sure if it would help with the toxic "top" culture on the site, but it would at least make it more accurate (based on data, rather than hearsay).
 
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
Adding to the data identifying match rate by generally perceived school rank, this paper just came out in The Laryngoscope:


View attachment 333608

I find it very telling that for applicants applying to ENT, top-25 medical school rank was the only major objective factor that statistically increased the odds of matching (if one took a research year). The analysis from OP indicates there are plenty of non-top-25 schools associated with good DOPEN match rates, but that analysis combined with this one still points toward an institutional bias in traditionally competitive fields.
Interesting, although I think your conclusions are too drastic.

The effect they observed was a notably "minor" one (i.e., a 90% vs. 96% match rate), and the students had to both have a research fellowship and be part of this T25 group as you described. The effect of going to a "top" school was insignificant on its own. I also find it questionable that they do not discuss the trend they observed in non-T25 schools with/without fellowship; while not statistically significant, there was a small trend in the difference in averages that approached what the T25 schools had (83.5% vs. 87%).

Regardless, I think the majority of this analysis is just playing with statistics. The variance in most groups overtook any "effect" they observed, and even the one result they had was barely significant (p = 0.017). (Edit: they observed a larger effect of research on T25 ENT residencies, 30.5% to 58.6%; probably the most interesting thing they found.)

I would not read into these results much.
 

YCAGA

2+ Year Member
Jun 10, 2017
870
1,498
Status (Visible)
  1. Medical Student
Adding to the data identifying match rate by generally perceived school rank, this paper just came out in The Laryngoscope:


View attachment 333608

I find it very telling that for applicants applying to ENT, top-25 medical school rank was the only major objective factor that statistically increased the odds of matching (if one took a research year). The analysis from OP indicates there are plenty of non-top-25 schools associated with good DOPEN match rates, but that analysis combined with this one still points toward an institutional bias in traditionally competitive fields.
You are misinterpreting this analysis. Did you read the figure legend before you took the screenshot? lol

All this is saying is that for people who go to a T25 medical school, research year vs no research year correlates with matching. For any other sub-group, research years don't seem to matter. This is NOT saying that going to a T25 "increased odds of matching (if one took a research year)." Attending a T25 is constant between the two groups, the variable when calculating the odds ratio is research year or no research year. They would need to compare match rates for top 25 vs >25 to make the conclusion that you made, but they didn't do that in this paper. I mean...what they are studying is literally in the title of the paper..."Impact of Medical Student Research Fellowships on Otolaryngology Match Outcomes". This is only talking about research fellowships.
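For readers less familiar with odds ratios, the within-subgroup comparison being described can be sketched in a few lines. This is a minimal illustration using the 90% vs. 96% T25 match rates quoted earlier in the thread; the `odds` helper and the numbers are for demonstration only, not taken from the paper's actual model:

```python
# Sketch of the distinction above (illustrative numbers only).
# The paper compares research year vs. no research year WITHIN each school
# tier; it does not compare T25 vs. non-T25 directly.
def odds(p):
    """Convert a match probability to odds in favor of matching."""
    return p / (1 - p)

# Within the T25 subgroup (rates quoted earlier in the thread):
no_research, with_research = 0.90, 0.96
or_within_t25 = odds(with_research) / odds(no_research)
print(round(or_within_t25, 2))  # 2.67: research year vs. none, T25 held constant
```

Note that T25 status is the same on both sides of this ratio, which is why the result says nothing about T25 vs. non-T25 schools.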
 

nimbus

Member
15+ Year Member
Jan 14, 2006
7,213
9,905
Status (Visible)
Also if you look on a more granular level at DOPEN matches at places like Stanford and Columbia, they tend to be at highly ranked programs within the specialty. If you want Ortho at HSS or derm at NYU, it’s going to be very hard not coming from a T20 school.


 

YCAGA

2+ Year Member
Jun 10, 2017
870
1,498
Status (Visible)
  1. Medical Student
Also if you look on a more granular level at DOPEN matches at places like Stanford and Columbia, they tend to be at highly ranked programs within the specialty.
This has probably already been mentioned in this thread but the concept of DOPEN matches as a reliable indicator is silly to begin with. An ortho match at a community program that barely hits the minimum ACGME case numbers is not nearly as impressive as a general surgery match at MGH or a peds match at CHOP.
 
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
You are misinterpreting this analysis. Did you read the figure legend before you took the screenshot? lol

All this is saying is that for people who go to a T25 medical school, research year vs no research year correlates with matching. For any other sub-group, research years don't seem to matter. This is NOT saying that going to a T25 "increased odds of matching (if one took a research year)." Attending a T25 is constant between the two groups, the variable when calculating the odds ratio is research year or no research year. They would need to compare match rates for top 25 vs >25 to make the conclusion that you made, but they didn't do that in this paper. I mean...what they are studying is literally in the title of the paper..."Impact of Medical Student Research Fellowships on Otolaryngology Match Outcomes". This is only talking about research fellowships.
You are right, but it is even worse. I dug deeper into the paper. It actually boggles my mind that this was published, even given the goals they were aiming for.

Both in the results and the conclusions, they report a significant difference between groups in students actually completing research fellowships (T25 = 39.5%; T50 = 15.8%; rest = 6%). This means there was a massive difference in the number of students doing research among their groups in the first place.

When you take this into account, the minor significant difference they found in the T25 cohort (90% no research vs. 96% with research) is likely due to the much bigger n in this cohort, which reduces the variance of the mean percentages (i.e., there are nearly 3x as many students completing research in the T25 group as in the next group; n = 151 vs. n = 65).
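To make the sample-size point concrete, here is a quick sketch of how the noise in an estimated match rate shrinks with cohort size. This is illustrative only; `se_proportion` is a hypothetical helper, and the rates and counts are the ones quoted in this thread, not recomputed from the paper:

```python
import math

def se_proportion(p, n):
    """Standard error of an estimated proportion: sqrt(p*(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

# T25 cohort: 96% match rate among n = 151 research-fellowship students
se_t25 = se_proportion(0.96, 151)
# Next group: ~87% match rate among only n = 65 such students
se_next = se_proportion(0.87, 65)

# The smaller cohort's estimate is far noisier
print(round(se_t25, 3), round(se_next, 3))
```

With the smaller group's standard error more than double the larger one's, a real effect of similar size could easily fail to reach significance there.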

So their conclusion that research only helps students at T25 schools is completely confounded (and unfounded)! If more students completed research outside the T25 in the first place, the 2.5% difference seen in the lower groups could easily have become the 6% boost they saw in the T25.

This... isn't a very good paper. For individuals so keen on describing the importance of strong research to their field, they do not seem as keen on producing it in their own work.
 

littlecow

5+ Year Member
Jan 22, 2016
29
94
I would be happy to do a similar match list analysis on a larger scale, if there is interest in such a thing. I am not sure if it would help with the toxic "top" culture on the site, but it would at least make it more accurate (based on data, rather than hearsay).

Would be very interesting to see
 

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
I would be happy to do a similar match list analysis on a larger scale, if there is interest in such a thing. I am not sure if it would help with the toxic "top" culture on the site, but it would at least make it more accurate (based on data, rather than hearsay).
It would probably put to bed the "match lists = tea leaves" sentiment I see frequently. It's hard for a premed to read a match list cold, but tell them "this school puts people into top-10 programs in medicine/peds/surgery/etc. 5x as often" and that's information they can actually use for decision-making.

I'll also plug the old chronicidal post in a similar vein

 

darkamgine

5+ Year Member
Nov 25, 2014
23
108
Status (Visible)
  1. Pre-Health (Field Undecided)
I added and updated some data. There is a pretty clear trend: most established private med schools have higher-than-expected DOPEN match rates. I wonder how things would look if I separated them by public/private. I am still not finished with the 2019 data and am definitely missing quite a few schools, but my next update will probably not be soon. I will be revamping the spreadsheet, so that will be updated next time.
 
Oct 1, 2019
5
16
Status (Visible)
  1. Pre-Medical
I got bored, so I spent half of today compiling and organizing data from the 2020 and 2021 Match. Please do not over-interpret or misinterpret the data: I am showing numbers, not an explanation.

It can be hard to tell whether a school's match is good or not, and honestly, it takes too much time to sort through each school's match data. I compiled a spreadsheet looking at the number and proportion of students matched into competitive specialties (Derm, Ortho, Plastics, ENT, Neurosurgery, abbreviated DOPEN) by each medical school. I think most medical students would agree these are competitive specialties that are hard to get into because of expectations of high Step scores, research, and good clinical grades. I know there are other potentially competitive specialties such as IR and Rad Onc, but they were not included because it would be more work for me without a large difference in the end results (feel free to add them to my data if you want). Additionally, I realize not everyone with a competitive application will apply into these specialties (for one, I am not looking into any surgical specialty), but I still believe the data is interesting to investigate. 9.1% of all US MD graduates in 2020 matched into DOPEN specialties (1751/19326), so that is the expected average for all US medical schools.

Below is my first plot, showing the number of DOPEN matches in 2020 (y-axis) by the approximate size of the graduating class of 2020 (x-axis). The trendline is approximately the average for all medical schools. About half of all medical students are represented by the schools shown below. Schools above the line had more students than expected match into DOPEN, whereas schools below the line had fewer than expected. There are many possible reasons a school may sit under the line, such as a primary care focus, an under-served focus, a research (MD-PhD) focus, or more. Note that I do not rank or distinguish the hospitals at which people match.
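The above-the-line / below-the-line logic can be sketched in a few lines. This is a minimal sketch assuming the 9.1% national rate stated above; the school in the example is hypothetical:

```python
# A minimal sketch of the trendline logic described above, using the quoted
# national DOPEN rate of 9.1% (1751/19326 US MD graduates in 2020).
NATIONAL_DOPEN_RATE = 1751 / 19326  # ~0.091

def dopen_residual(class_size, dopen_matched):
    """Actual minus expected DOPEN matches; positive means above the trendline."""
    expected = class_size * NATIONAL_DOPEN_RATE
    return dopen_matched - expected

# Hypothetical school: 120 graduates, 18 DOPEN matches
print(round(dopen_residual(120, 18), 1))  # 7.1 more matches than expected
```

The trendline in the plot is just this expected value drawn across all class sizes.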

View attachment 333719

However, it is interesting that the schools with the highest proportion of students matching into DOPEN are top research universities, including Vanderbilt, UPenn, JHU, and Harvard. You can view the spreadsheet linked here [outdated, making new one, please wait] for specific numbers, but there is a trend for many of the top private medical schools to be very specialized. There are some schools well above the line that I did not expect based on ranking; for example, Miami and the University of Rochester matched a lot of DOPEN (14%). Whether that is variance or not requires more data, but it is something to pay attention to. Note: I inputted data from other years for schools whose 2020 match list I could not find.

I have also compiled preliminary data for the 2021 Match, which looks even more interesting. If you look at the spreadsheet's 2021 data, you see some very eye-popping numbers. For example, Mayo and Vanderbilt had over 20% of their graduating class match into DOPEN specialties!! Yale and Miami are not far behind in percentage terms. Whether this is because of variance, COVID-19, student preferences, or something else, I do not know, but the separation between schools appears larger this year than last year. Additionally, the data is currently incomplete, but I wonder whether more graduates of top medical schools matching into DOPEN is a trend or just an aberration of this year.

View attachment 333720


Hopefully these graphs and data were interesting to you. Keep in mind that there are many factors that go into residency applications, and the medical school you attend is just one of them. I honestly would take this data with a grain of salt because the year-to-year variation is huge for some schools. UCLA went from below expected to among the highest DOPEN matchers in a year. Same with Wake Forest.

If anyone is interested in doing more analysis or compiling more data with this, let me know or just use the spreadsheet. I do not promise all information in the spreadsheet is correct, as I just compiled it today. Special thanks to @Kracin for his work on the match list table of contents. Also thanks to anyone making readable and searchable match lists (you can really tell whether a school puts in the work or not).

TLDR: Data from the Match showing which medical schools' graduates matched more often than the national average into competitive surgical specialties.

Edit (3/24): Updated graphs and added 2019-2021 average data. Fixed a calculation error for some schools.
Edit (3/29): Added and updated more data. The spreadsheet is out of date; I am working on a new one. Excel is ripping me a new one by changing my formulas.

Here is the average DOPEN match rate by medical school from 2019-2021, with standard deviations. I think this mirrors the US News research rankings quite a bit, but there are some unexpected results, such as Miami being rather high while Pitt is lower than expected.
Almost all of the top DOPEN matchers are private schools, while public schools (even good ones) sit a lot lower on the list than most people would expect.


View attachment 333723
The numbers used for some of these schools are blatantly wrong. For instance, I go to one that you gave a count of 15 when the actual number is 20. Just food for thought for those looking at it!
 

Chibucks15

2+ Year Member
Sep 7, 2016
2,479
6,103
Status (Visible)
  1. Resident [Any Field]
I added and updated some data. There is a pretty clear trend: most established private med schools have higher-than-expected DOPEN match rates. I wonder how things would look if I separated them by public/private. I am still not finished with the 2019 data and am definitely missing quite a few schools, but my next update will probably not be soon. I will be revamping the spreadsheet, so that will be updated next time.
Again, why does having a "higher than expected DOPEN" rate matter? You cannot account for personal preference when it comes down to it. Cool exercise, VERY limited utility.
 

Frogger27

2+ Year Member
Aug 2, 2016
2,123
4,344
Status (Visible)
  1. Medical Student
Again, why does having a "higher than expected DOPEN" rate matter? You cannot account for personal preference when it comes down to it. Cool exercise, VERY limited utility.
The majority of AOA in my class went into IM, Radiology, and Peds.
 

Chibucks15

2+ Year Member
Sep 7, 2016
2,479
6,103
Status (Visible)
  1. Resident [Any Field]
The majority of AOA in my class went into IM, Radiology, and Peds.
Exactly. I know several top-of-the-class rockstar types who went into rural Midwest FM. Maybe cuz I'm a lowly DO school grad (soon, at least).
 

darkamgine

5+ Year Member
Nov 25, 2014
23
108
Status (Visible)
  1. Pre-Health (Field Undecided)
The majority of AOA in my class went into IM, Radiology, and Peds.
Again, why does having a "higher than expected DOPEN" rate matter? You cannot account for personal preference when it comes down to it. Cool exercise, VERY limited utility.

To some, it does not matter. Even for those going into these specialties, I think it is more of a history book than a rule book. But I can make an analogy to becoming a med student from college: the majority of the top students at my undergrad did not go into medicine (they went into engineering, business, tech, research, etc.). It was personal preference as to why they didn't go into medicine. Still, I believe medicine is competitive, and knowing the number of premeds who eventually become med students would be nice-to-know information, even if I would not use it alone to decide on an undergraduate school.
 

Chibucks15

2+ Year Member
Sep 7, 2016
2,479
6,103
Status (Visible)
  1. Resident [Any Field]
To some, it does not matter. Even for those going into these specialties, I think it is more of a history book than a rule book. But I can make an analogy to becoming a med student from college: the majority of the top students at my undergrad did not go into medicine (they went into engineering, business, tech, research, etc.). It was personal preference as to why they didn't go into medicine. Still, I believe medicine is competitive, and knowing the number of premeds who eventually become med students would be nice-to-know information, even if I would not use it alone to decide on an undergraduate school.
I don't disagree with you. I'm more putting that out there for people who just scroll through and get the wrong idea about things. It's rampant in the pre-med and preclinical community. Like I said, super cool idea. I wish it were more generalizable and there were a way to account for people getting their top choices and all that, but that's under lock and key, unfortunately. I feel like the number of people getting their top-3 choices, sorted by specialty, would give you a really good idea of how well people match from a school while accounting for their interests as well.
 

Frogger27

2+ Year Member
Aug 2, 2016
2,123
4,344
Status (Visible)
  1. Medical Student
To some, it does not matter. Even for those going into these specialties, I think it is more of a history book than a rule book. But I can make an analogy to becoming a med student from college: the majority of the top students at my undergrad did not go into medicine (they went into engineering, business, tech, research, etc.). It was personal preference as to why they didn't go into medicine. Still, I believe medicine is competitive, and knowing the number of premeds who eventually become med students would be nice-to-know information, even if I would not use it alone to decide on an undergraduate school.
Similar sentiments as @Chibucks15, and this is coming from someone who was 100% set on DOPEN in medical school and just matched. After going through the process, I do think a better metric would be knowing how people do on their rank lists, but even this will vary a lot by year.
 

nimbus

Member
15+ Year Member
Jan 14, 2006
7,213
9,905
Status (Visible)
There is a large degree of self-selection for specialties, and people tend to eventually select specialties that they have a realistic chance of matching into.
 

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
To some, it does not matter. Even for those going into these specialties, I think it is more of a history book than a rule book. But I can make an analogy to becoming a med student from college: the majority of the top students at my undergrad did not go into medicine (they went into engineering, business, tech, research, etc.). It was personal preference as to why they didn't go into medicine. Still, I believe medicine is competitive, and knowing the number of premeds who eventually become med students would be nice-to-know information, even if I would not use it alone to decide on an undergraduate school.
If you were choosing colleges, would you rather know the percent choosing medicine, or the percent getting into a top 10 program in their chosen field (JD, MD, PhD, MBA, etc)?

I'd want to know the latter
 

Chibucks15

2+ Year Member
Sep 7, 2016
2,479
6,103
Status (Visible)
  1. Resident [Any Field]
If you were choosing colleges, would you rather know the percent choosing medicine, or the percent getting into a top 10 program in their chosen field (JD, MD, PhD, MBA, etc)?

I'd want to know the latter
I personally chose based on the football team and college life but that's why I needed a gap year I guess :rofl:
 

Frogger27

2+ Year Member
Aug 2, 2016
2,123
4,344
Status (Visible)
  1. Medical Student
If you were choosing colleges, would you rather know the percent choosing medicine, or the percent getting into a top 10 program in their chosen field (JD, MD, PhD, MBA, etc)?

I'd want to know the latter

I chose because it was a great party school and all my friends were going... oops

@Chibucks15 I had to take two gap years because of my escapades. No Ragrets!
 
Feb 13, 2020
65
32
Status (Visible)
  1. Pre-Health (Field Undecided)
  2. Pre-Medical
Oops, looks like the class size was wrong; it should be 120 instead of 50. Thanks for prompting me to double-check that. The percentage is about 6%, not 14%. I will update it after the weekend.

awesome! tysm for doing that.
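The quoted correction checks out arithmetically. A minimal sketch, assuming the DOPEN count itself was unchanged between the two calculations (which the 14% to ~6% shift is consistent with):

```python
# Sanity check of the class-size correction: with the (wrong) class size of 50,
# a 14% rate implies 0.14 * 50 = 7 DOPEN matches. Keeping that count and using
# the corrected class size of 120 gives the updated ~6% figure.
matches = round(0.14 * 50)          # 7 matches
corrected_rate = matches / 120
print(round(corrected_rate * 100))  # 6 (percent)
```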
 

FindersFee5

2+ Year Member
Jun 22, 2016
620
1,787
Status (Visible)
  1. Medical Student
I have done exactly this on a small scale for a few schools that some accepted students were deciding between (e.g., Case, Einstein, and Emory; little to no difference in students matching top-15 research-based IM programs over a 4-5 year average).

Also, for those interested in a serious career in academic medicine, I usually point to the Doximity collaboration paper published in Academic Medicine that analyzed the long-term career performance of 600,000+ graduates and presented a T25 list based on career outcomes (rather than "student selectivity" scores and other gameable nonsense):


I would be happy to do a similar match list analysis on a larger scale, if there is interest in such a thing. I am not sure if it would help with the toxic "top" culture on the site, but it would at least make it more accurate (based on data, rather than hearsay).

This is interesting, and it does seem like it would help account for school size vs. total funding, which is naturally tilted toward schools with more faculty members. However, the issue is that it's essentially a composite ranking of the school over the last 30+ years, whereas school quality and reputation change more quickly than that.
 
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
This is interesting, and it does seem like it would help account for school size vs. total funding, which is naturally tilted toward schools with more faculty members. However, the issue is that it's essentially a composite ranking of the school over the last 30+ years, whereas school quality and reputation change more quickly than that.
I assume you are talking about the Academic Medicine paper?

Their main table shows the accumulated average over 60 years, but they also show the trend in school outcomes over ten-year increments (though only for a few schools; UCSF grads really suffer a decline, while other schools gradually rise up).

I would argue that reputation for institutions remains rather stable (or increases over time for the top; see explanation below), and taking graduate outcomes for 600,000+ students across time shows a more accurate representation of that reputation.

On the other hand, I agree - school quality absolutely does change on a short time scale. Reputation doesn't, mainly because of the "Matthew effect" - the notion that reputation is a positive feedback loop for those at the top; you need to reach a certain threshold to receive the benefits of the effect, but once you do, you have to do little to snowball the increase in your reputation. It self-propagates.

In either case, the main model people refer to today (US News) completely fails to capture med school quality (or changes to curriculum/student outcomes) at all, and their survey of "reputation" is abysmally inaccurate (over 70-80% of the program directors they send their surveys to throw the mail out, and they don't care to correct for uneven distribution among specialty/region within the small sample of PDs that actually humor them).

It's a statistician's nightmare: a completely subjective parameter, collected in a horrifically deficient manner. The minimum acceptable response rate for such surveys, most agree, is 70%; go down to 50%, and you have statisticians clenching their teeth and adding a Shakespearean number of qualifiers to their result statements. Drop down to 20%, and they either laugh or cry - usually both, with vodka - when the public somehow actually "believes this crap".

Anyways, my $0.02. While imperfect, I think the paper presented by far the best measure of gauging a med school's overall quality in a way that would incentivize schools to produce better academic physicians, rather than attract a 0.05-point difference in GPA and a 2-point difference in MCAT among applicants to shoot up 20 rankings (e.g., NYU). It would also tell applicants which school would likely offer them the best environment in which to succeed.
 
Last edited:

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
I assume you are talking about the Academic Medicine paper?

Their main table shows the accumulated average over 60 years, but they also show the trend in school outcomes over ten-year increments (though only for a few schools; UCSF grads really suffer a decline, while other schools gradually rise up).

I would argue that reputation for institutions remains rather stable (or increases over time for the top; see explanation below), and taking graduate outcomes for 600,000+ students across time shows a more accurate representation of that reputation.

On the other hand, I agree - school quality absolutely does change on a short time scale. Reputation doesn't, mainly because of the "Matthew effect" - the notion that reputation is a positive feedback loop for those at the top; you need to reach a certain threshold to receive the benefits of the effect, but once you do, you have to do little to snowball the increase in your reputation. It self-propagates.

In either case, the main model people refer to today (US News) completely fails to capture med school quality (or changes to curriculum/student outcomes) at all, and their survey of "reputation" is abysmally inaccurate (over 70-80% of the program directors they send their surveys to throw the mail out, and they don't care to correct for uneven distribution among specialty/region within the small sample of PDs that actually humor them).

It's a statistician's nightmare: a completely subjective parameter, collected in a horrifically deficient manner. The minimum acceptable response rate for such surveys, most agree, is 70%; go down to 50%, and you have statisticians clenching their teeth and adding a Shakespearean number of qualifiers to their result statements. Drop down to 20%, and they either laugh or cry - usually both, with vodka - when the public somehow actually "believes this crap".

Anyways, my $0.02. While imperfect, I think the paper presented by far the best measure of gauging a med school's overall quality in a way that would incentivize schools to produce better academic physicians, rather than attract a 0.05-point difference in GPA and a 2-point difference in MCAT among applicants to shoot up 20 rankings (e.g., NYU). It would also tell applicants which school would likely offer them the best environment in which to succeed.

ok, maybe my $0.03
To play devil's advocate, it's an awful lot of coincidence that different metrics all favor the same couple dozen schools.

If I want a med school with the most academically capable students - it's the "top 20" that hold higher MCAT/GPA intervals. Alternatively, if you want the most impressive achievements - Rhodes & co. winners, authors published in Nature-level journals, professional-level athletics, etc. - they're more common at top-ranking schools.

If I want a med school with the most research funding, or research activity/awards, or easy funding for student projects, or with built in research time - it's the top 20 that boast these.

If I want a med school with, on average, the strongest affiliated residencies, it's top 20 university hospital systems.

If I care about my grandma being able to brag to strangers, the schools everyone recognizes are well ranked.

If I want the best financial package- full demonstrated need, or even merit packages- these are more available at the high endowment well ranked schools.

US News and Doximity are flawed, but we can't pretend they're drawing names from a hat when they rank schools. There's a real phenomenon they're trying to capture; they just do a mediocre job of it.
 
  • Like
Reactions: 1 users
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
To play devil's advocate, it's an awful lot of coincidence that different metrics all favor the same couple dozen schools.

If I want a med school with the most academically capable students - it's the "top 20" that hold higher MCAT/GPA intervals. Alternatively if you want the most impressive achievements - Rhodes & co winners, authors published in Nature level journals, professional level athletics, etc they're more common at top ranking schools.

If I want a med school with the most research funding, or research activity/awards, or easy funding for student projects, or with built in research time - it's the top 20 that boast these.

If I want a med school with, on average, the strongest affiliated residencies, it's top 20 university hospital systems.
But herein lies the "chicken and egg" problem. Top students - on average, but certainly not always - are attracted to top schools, which in turn benefit from those students' success, which in turn propagates the schools' reputation. À la the Matthew effect.

The interesting component of this is the "self-fulfilling prophecy" of the US News effect. The fact that they list something as top-whatever draws those students in. If the system consistently ranked a Caribbean school in the top 10, over a few years you would start to see some great match lists out of that school, and eventually more funding for it, because (at least some) top students would be attracted there and make the school look better.

My point is that if we want to rank schools at all, and have them benefit from their rankings, I think we should do so on real measures of school success, collected accurately.

Even in the current system that artificially props schools up, plenty of schools listed as "top 40" by US News produce graduates who are much more successful in academic medicine than those of the top 20, and even the top 10 (e.g., Rochester, Einstein, Case, Brown, and Boston grads do far better than UCLA, Pitt, Emory, and others).

If we ranked them accordingly, I wonder how much more drastic this effect would become, and how these other schools would react to produce better academic physicians to compete. I think it would produce a system where schools actually get better, as a whole, in training physicians.
 
Last edited:
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
If I want a med school with the most research funding, or research activity/awards, or easy funding for student projects, or with built in research time - it's the top 20 that boast these.

If I want a med school with, on average, the strongest affiliated residencies, it's top 20 university hospital systems.

If I care about my grandma being able to brag to strangers, the schools everyone recognizes are well ranked.

If I want the best financial package- full demonstrated need, or even merit packages- these are more available at the high endowment well ranked schools.

US news and doximity are flawed, but

I would also like to add that this principle is deeply misguided and bears little resemblance to reality. The "top 20" contrivance is constantly thrown around, with little to no real effect on students or institutions.

Do you know which schools made the "top 20" cut for US News? UChicago is one of them. Do you know which didn't? Emory, Einstein, Minnesota, UC-Denver, UAB, and Case, among others. Would you then reasonably expect Pritzker to have way more research funding, PIs, and opportunities than this group, which was ranked lower in the research category? I certainly would.

I find it strange, then, that all of these schools (and others) had greater funding than Pritzker, including Emory, which had nearly twice the funding Chicago did and over 100 more PIs (I hold no bias; I'm not a student there).

Why would this be the case? Probably because the average MCAT is 1-2 points higher at UChicago than at these other schools, and the medical school directors and PDs that didn't toss their survey were those in the general region/specialty that favored UChicago.

Pritzker is a wonderful school, by the way, and in no way am I trying to denigrate it. But are Emory, Colorado, etc. also wonderful schools with equal or greater opportunities and research strength? Absolutely! I just use them to show how deeply flawed this system is.

In my opinion, we really should have never given credibility to a magazine company to evaluate medical schools in the first place.
 
Last edited:

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
I would also like to add that this principle is ridiculously misguided, and bears no resemblance to reality. The “top 20” contrivance is constantly thrown around, with little to no effect on students or institution.

Do you know which schools made the cut of “top 20” for US News? Uchicago is one of them. Do you know which didn’t? Emory, Einstein, Minnesota, UC-Denver, UAB, and Case, among others. Would you then reasonably expect that Pritzker had way more research funding, PI’s, opportunities than this group - who were ranked lower in the research category? I certainly would.

I find it strange, then, that all of these schools (and others) had greater funding than Pritzker, including Emory, who had nearly twice the funding that Chicago did, and over 100 more PI’s than it (I hold no bias; not a student there).

Why would this be the case? Probably because the average MCAT is 1-2 points higher at UChicago than at these other schools, and the ~20% of medical school directors and PDs that didn't throw their survey out were those in the general region/specialty that favored UChicago.

Pritzker is a wonderful school, by the way, and in no way am I trying to denigrate it. But are Emory, Colorado, etc. also wonderful schools with equal or greater opportunities and research strength? Absolutely! I just use them to show how deeply flawed this system is.

In my opinion, we really should have never given credibility to a magazine company to evaluate medical schools in the first place.
I think your main gripe is that they call it a research ranking while adding in a lot of non-research factors (reputation, selectivity). Pretty much nobody is actually reading it to see where the funding goes.
 
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
I think your main gripe is that they call it a research ranking while adding in a lot of non-research factors (reputation, selectivity). Pretty much nobody is actually reading it to see where the funding goes.
Not exactly (and I also want to note that this is a larger response to viewpoints I've heard, not yours alone).

I brought up the funding because you mentioned the research funding/opportunities available in the top 20. The truth is that they aren't that different from (and for some schools, are lesser than) those left out of that group.

Regarding reputation - it's subjective, so it's messy. But if it's going to be used, it should at least be accurate. It's far from that.

Student selectivity as a metric, I feel, should be thrown out completely. It's too gameable, and it offers no description of the actual teaching a school provides. What's more, the differences in these scores across what they define as the top 50 schools are so small that schools can get a massive boost from just a small increase in their class's stat averages (on one end you see 3.8 and 516; on the other, 3.9 and 518. That's an A rather than an A- in a few more undergrad classes, and 1 extra point on two MCAT sections).
 

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
Not exactly (and also want to bring up that this is a larger response to viewpoints I’ve heard; not yours alone).

I brought up the funding because you mentioned the research funding/opportunities available in the top 20. The truth is that they aren’t that different (and for some, lesser) than the ones that are left out of that group.

Regarding reputation - it’s subjective, so it’s messy. But if it’s used, it should be at least accurate. It’s far from that.

student selectivity as a metric, I feel, should be thrown out completely. It’s too gameable, and offers no description of the actual teaching provided by the school. What’s more, the difference in the scores across what they define as the top 50 schools are so small that schools easily experience a massive boost from just a small increase in their class’s stat averages (on one end, you see 3.8 and 516; on the other, 3.9 and 518. That’s an A rather than A- in a few more undergrad classes, and 1 extra point on two MCAT sections).
Top 20 is a loose number. I bet if you look at the first 25 compared to ranks 26-50, the average funding (per capita or total) is markedly higher. It's all a gradient, but "top 20" is a useful shorthand.

Reputation may be subjective, but it's been remarkably consistent across the years. Schools budge by like 0.1/5 per year or don't change at all.

Student selectivity mattered to me. I'm sure it does to others too. If it's easy to game, where are the mid-tier programs with 520 medians? High-stats people with their choice of multiple programs tend to gravitate toward the same places, and being in a class full of remarkable minds is an appealing part of choosing one of those places.
 
  • Like
Reactions: 1 users
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
Top 20 is a loose number. I bet if you look at the first 25 compared to 25-50, the average funding (per capita or total) is markedly higher. It's all a gradient, but "top 20" is a useful shorthand.

Reputation may be subjective but it's been remarkably consistent across the years. Schools budge like 0.1/5 per year or don't change at all.

Student selectivity mattered to me. I'm sure it does to others too. If it's easy to game, where are the mid tier programs with 520 medians? High stats people with their choice of multiple programs tend to gravitate towards the same places, and being in a class full of remarkable minds is an appealing part of choosing one of those places.
I have looked. Funding per PI follows the same trend for most programs over the past 15 years (per the Blue Ridge Institute for Medical Research), with some outliers - but the outliers are actually outside the US News "T25" (programs with slightly fewer faculty getting more funding, like Case; several of these programs are also very successful in securing F30s for their MSTP students).

You should check the "stability" of those rankings. NYU was 30th not too long ago, and is now apparently 2nd or 3rd. Cornell was 20th, went to 9th, then back down to 18th, and Duke is doing a similar Macarena dance.

Reputation scores have shown some stability, but this is because of the bias in sample recruitment (the few directors who have previously completed the rankings are very likely to complete them again). I have already found several examples of high-ranking schools producing match lists worse than those of schools ranked much lower by US News standards, likely because many top program directors don't touch the surveys at all and think they are completely bogus (you can read the numerous critiques from some of them in articles and commentaries published in AAMC's "Academic Medicine" to see these opinions more formally stated).

I assume you are asking why there aren't mid-tier programs with 520 MCAT averages? It's a circular question - if they had a 520 (or close) average, they would be far more likely to be ranked higher. And if they are already ranked higher, they are more likely to attract students (like yourself) who keep the average inflated, and thus your perception of them would be "top tier," not "mid tier" or whatever elitist bs people are saying these days.

Anecdotally, I scored a 520+ on my MCAT when I applied. Did this make me smarter than my colleagues who scored 1 point lower in a couple of sections? It's absolutely ridiculous to think so. You would find many gifted minds just as easily at a school with a lower average - just likely fewer people who had an extra $3,000 from their parents lying around to pay for prep courses.
 
Last edited:

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
I have already found several examples of high-ranking schools producing match lists worse than those of schools ranked much lower by US News standards
Oh that sounds juicy, please share the examples

I assume you are asking why there aren’t mid tier programs with 520 MCAT scores? It’s a circular question - if they do have a 520 (or close) score, they are way more likely to be higher in ranking. And if they are already higher in ranking, they are more likely to attract students (like yourself) that keep the average inflated, and thus your perception of them would be “top tier”, not “mid tier“ or whatever elitist bs people are saying these days.

Anecdotally, I had scored a 520+ on my MCAT when I applied. Did this make me smarter than my colleagues that scored 1 point lower in a couple of sections? It’s absolutely ridiculous to think so. You would find many gifted minds just as easily at a school with 516, just likely less people that had the extra $3000 from their parents lying around to pay for prep courses (granted, not what I did, but a very common strategy).
The mid tiers are already doing their best to maximize their metrics. They can't get as high because too many of the high scorers prefer the usual suspects at the top of the rankings.

Idk what to tell you other than that the people I've met at my med school are far and away the most impressive crowd I've ever been a part of. Maybe a rank-100 U of State is the exact same... maybe not.
 
  • Like
Reactions: 1 user
Mar 4, 2020
58
87
Status (Visible)
  1. MD/PhD Student
Oh that sounds juicy, please share the examples


The mid tiers are currently already doing their best to maximize their metrics. They can't get as high because too many of the high scorers prefer to usual suspects at the top of the rankings

Idk what to tell you other than the people I've met at my med school were far and away the most impressive crowd I've ever been a part of. Maybe a rank 100 U of State is the exact same...maybe not.
I feel like it would be self-defeating for my purpose. My point is to highlight that the rankings don't really matter. If I can't convince others here of that, I at least wanted to help people more accurately identify schools that would help students enter a career in academics. But now, I'm seeing that any discussion of it just becomes a petty contest to see "which is better." If I identify schools that match better at "top" residencies, I inherently accept the same principle, just for the next step in the career path, and merely highlight some other institutions to make them look better while making others look bad (which is just the opposite of my goal...)

Anyways, I don't want to derail the thread further. The best school is the one you get into where you feel you'll be happiest (and for 80% of us, that choice is quite simple)
 
Last edited:
  • Like
  • Okay...
Reactions: 1 users

efle

not an elf
7+ Year Member
Apr 6, 2014
13,703
21,585
Status (Visible)
  1. Medical Student
I feel like it would be self-defeating for my purpose (and the way you phrased “juicy” just skeeves me out).
I'm really skeptical of your claim, this feels pretty confirmatory

My point is to highlight that the rankings don’t really matter. If I can’t convince others here to believe that, I at least wanted people to more accurately identify schools that would help students more easily enter a career in academics. But now, I’m seeing that any discussion on it just becomes a petty contest to see “which is better”. if I identify schools that match better at “top” residencies, I inherently accept the same principle, but for the next step in the career path, and just highlight some other institutions to make them look better, while making others look bad (which is just the opposite of my goal...)

Anyways, I don’t want to derail the thread further. The best school is the one you get in where you feel you’ll be happiest (and for 80% of us, that choice is quite simple)
Fair enough. Ranking doesn't matter unless you care about what's being ranked. If someone cares about a weighted assessment of reputation, selectivity, and research funding, it's a useful tool. If someone wants to find their best fit and doesn't care about those things, then it's not.
 
  • Like
Reactions: 1 users