Updated Post-II Acceptance Rates 2023

Hey all, I've been working on this community project for the last few weeks while I wait for the cycle to end.

You can see each school's application numbers, interviews, and acceptances that were in the 2021 sheet, now updated for 2023. This lets you see the application -> interview conversion rate and the interview -> acceptance conversion rate. It also breaks stats down by in-state and out-of-state, which is neat.

Soon I'll add all the school secondaries for the last 5 years and show cool info like the probability that a secondary will show up in a future cycle based on past trends. This should help with prioritizing pre-writing and make the whole admissions process less about Google searching and playing scavenger hunt for info.
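If you're curious how the recurrence probability could work, here's a rough sketch of the kind of calculation I have in mind (just an illustration with made-up numbers, not necessarily how the final feature will compute it): treat each prompt's appearance over the last five cycles as a smoothed frequency, and compute the conversion rates directly from the counts.

# Rough Python sketch of the planned calculations; all data here is made up.

def recurrence_probability(appearances):
    """appearances[i] is True if the secondary prompt appeared in cycle i."""
    # Add-one (Laplace) smoothing so a prompt seen in all 5 cycles isn't treated as certain.
    return (sum(appearances) + 1) / (len(appearances) + 2)

def conversion_rates(applications, interviews, acceptances):
    """Returns (app -> interview rate, interview -> acceptance rate)."""
    return interviews / applications, acceptances / interviews

# A prompt that appeared in 4 of the last 5 cycles:
print(recurrence_probability([True, True, False, True, True]))  # ~0.71

# Example school: 6,000 apps -> 500 interviews -> 150 acceptances
print(conversion_rates(6000, 500, 150))  # (~0.083, 0.3)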

Hope this helps a little with applying. I'll keep working on it out of boredom and see how it goes. If you have any feedback please let me know and I'll try to see what I can improve.

Link

 
“your outcome is less impressive if you take more time to do it, but like sorry if I offended you!”

also you already did hijack this thread by bringing it up and remaining incessant about it after HappyRabbit made the point that it’s impossible to do with the data that’s available.
If the ranking of a medical school is based on "impressiveness," then I do think the methodology should be reevaluated. And if that statement is truly what you took away from the conversation, I think it's worth another read.

I don't see how this thread was "hijacked," especially when we were discussing how 5th-year promotability presents ANOTHER confounding variable for match lists, on top of opting out of publicizing your results, location-based home programs (staying at Dartmouth [Hanover] vs. staying at NYU [NYC]), penalizing those who desire rural-based medicine, penalizing those who prioritize staying to support their local communities, and using a system that is hidden from the public.

Nonetheless, I want to ask again: why are we not using Step 2 scores?? lol
 
If the ranking of a medical school is based on "impressiveness," then I do think the methodology should be reevaluated. And if that statement is truly what you took away from the conversation, I think it's worth another read.

I don't see how this thread was "hijacked," especially when we were discussing how 5th-year promotability presents ANOTHER confounding variable for match lists, on top of opting out of publicizing your results, location-based home programs (staying at Dartmouth [Hanover] vs. staying at NYU [NYC]), penalizing those who desire rural-based medicine, penalizing those who prioritize staying to support their local communities, and using a system that is hidden from the public.

Nonetheless, I want to ask again: why are we not using Step 2 scores?? lol
I think an interesting system would be to evaluate schools based on what % of students match one of their top 3 or top 5 or whatever residency choices. That would go a long way toward eliminating many of the concerns you’re listing!

Unfortunately, most schools do not publicize that data.
 
Lots of great points raised - overall a valuable and important discussion. On my end, the main limitation will always be the availability of data from schools that can be included in these features.

Over time, I hope to give schools the ability to both update existing info on Admit as well as provide new data points. I've been trialing this with a few med schools who have reached out and it seems that there is decent interest in such a concept.
 
If the ranking of a medical school is based on "impressiveness," then I do think the methodology should be reevaluated. And if that statement is truly what you took away from the conversation, I think it's worth another read.

I don't see how this thread was "hijacked," especially when we were discussing how 5th-year promotability presents ANOTHER confounding variable for match lists, on top of opting out of publicizing your results, location-based home programs (staying at Dartmouth [Hanover] vs. staying at NYU [NYC]), penalizing those who desire rural-based medicine, penalizing those who prioritize staying to support their local communities, and using a system that is hidden from the public.

Nonetheless, I want to ask again: why are we not using Step 2 scores?? lol

As @Mr. Macrophage pointed out, that data isn't available. I think a lot of the critical feedback directed towards Admit's ranking system is asking for data to be included that simply doesn't exist or isn't freely given. Accounting for 5th years, accounting for home program matching, accounting for regional preference, and whatever else people want: there's nothing there. I suspect the "match list strength" portion of the rankings is derived from the % of students attending top programs within their specialty as determined by Doximity. Maybe % of students at a top 20 within their specialty, maybe top 40. Either way, I doubt Admit uses any other criteria for assessing match list strength, and I personally believe it should stay that way. The more factors you include, the less reliable the conclusion (very basic methodology we should all be familiar with).

Rather than account for every factor under the sun (most of which aren't supported by data), I think clarity regarding how match lists are used in the rankings would be a much better solution. @HappyRabbit published a methodology doc for the first iteration of the match list; hopefully that doc can make a comeback and be linked somewhere on the rankings page. If the rankings are up front about what is and is not considered for a school's ranking, I don't think anyone can reasonably complain about whether a certain factor is or isn't included. If they have a problem with Admit/Rabbit's list and the factors considered, they can make their own list. This hinges on Admit's ranking criteria being up front and detailed, however.
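To make concrete the kind of scoring I'm speculating about, here's a minimal sketch; the DOXIMITY_RANKS lookup and all entries are invented for illustration, and this is my guess at the approach, not Admit's actual method.

# Hypothetical Python sketch of a top-N match list strength score.
# DOXIMITY_RANKS maps (specialty, program) -> specialty rank; entries are made up.

DOXIMITY_RANKS = {
    ("Internal Medicine", "MGH"): 1,
    ("Internal Medicine", "Generic State Hospital"): 55,
    ("Dermatology", "NYU"): 3,
}

def match_list_strength(matches, top_n=20):
    """Fraction of a school's matches landing at a top-N program within that specialty."""
    ranked = [m for m in matches if m in DOXIMITY_RANKS]
    if not ranked:
        return 0.0
    hits = sum(1 for m in ranked if DOXIMITY_RANKS[m] <= top_n)
    return hits / len(ranked)

example_matches = [
    ("Internal Medicine", "MGH"),
    ("Internal Medicine", "Generic State Hospital"),
    ("Dermatology", "NYU"),
]
print(match_list_strength(example_matches))  # ~0.67: 2 of 3 matches at a Doximity top-20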

Also, the irony of the medical school ranking list being partially based (via match lists) on Doximity, another ranking site with dubious and possibly opaque ranking criteria, isn't lost on me.

Edit: I somehow missed Rabbit's statement about releasing the methodology soon AND the part about how "significant time" was spent controlling for confounding variables. I'll wait and see what that methodology looks like.
 
I’m not sure what bold claims you’re talking about.

Admit.org’s match list for WashU counts 104 students. Their MD class is 124 students, and 22 for MD/PhD.

A substantial number of students are missing from admit.org’s list.

Other schools are similarly missing a large number of matches, which almost always includes a large chunk of MSTP.

Edit: fixed the class size!
WashU's official 2025 match list includes 114 students, though their totals don't match the # listed for anesthesiology, psych, and IM/physician-scientist (counting is hard lol). I feel that is close enough to the 124 class size to fall within year-to-year variance, students opting out, and/or some not going to residency.

EDIT: They say 112 students matched in their press release. Who even knows at this point...

The missing matches on admit.org are: 1 FM, 1 anesthesiology, 2 DR, 1 psych, and 6 general surgery prelims. Admit.org has an erroneous extra Yale IM and includes an Ohio State IM instead of OHSU IM. Accounting for these discrepancies should, if anything, harm WashU because that is a shocking number of forced prelims.

In fairness, I called out my school for being sneaky about our general surgery prelims by listing them as just general surgery. I don't know if other schools do similar things with their lists. If possible, I think prelim-only matches should either be excluded from the analysis or else be weighted negatively.

Regarding MSTP matches, I know that they are included in our school's match list as long as they do not opt out. Do you have a source for the claim that other schools do not do this?
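If it helps anyone checking their own school, a quick way to surface these discrepancies is to diff the two lists as multisets of (specialty, program) pairs; here's a small sketch with invented entries (not real data).

# Python sketch for diffing an official match list against admit.org's entries.
from collections import Counter

official = Counter([("IM", "Yale"), ("FM", "UW"), ("Gen Surg prelim", "Barnes-Jewish")])
on_admit = Counter([("IM", "Yale"), ("IM", "Yale")])  # erroneous duplicate entry

missing_from_admit = official - on_admit  # matches the site is missing
extra_on_admit = on_admit - official      # matches the site added erroneously

print(dict(missing_from_admit))  # {('FM', 'UW'): 1, ('Gen Surg prelim', 'Barnes-Jewish'): 1}
print(dict(extra_on_admit))      # {('IM', 'Yale'): 1}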
 
WashU's official 2025 match list includes 114 students, though their totals don't match the # listed for anesthesiology, psych, and IM/physician-scientist (counting is hard lol). I feel that is close enough to the 124 class size to fall within year-to-year variance, students opting out, and/or some not going to residency.

The missing matches on admit.org are: 1 FM, 1 anesthesiology, 2 DR, 1 psych, and 6 general surgery prelims. Admit.org has an erroneous extra Yale IM and includes an Ohio State IM instead of OHSU IM. Accounting for these discrepancies should, if anything, harm WashU because that is a shocking number of forced prelims.

In fairness, I called out my school for being sneaky about our general surgery prelims by listing them as just general surgery. I don't know if other schools do similar things with their lists. If possible, I think prelim-only matches should either be excluded from the analysis or else be weighted negatively.

Regarding MSTP matches, I know that they are included in our school's match list as long as they do not opt out. Do you have a source for the claim that other schools do not do this?
That discrepancy is exactly what I was talking about! For most schools, admit.org’s data is missing several matches (and as you pointed out, adds several matches) and many of those matches are MSTP due to the unique nature of many of their programs (physician scientist pathway, etc).

In WashU’s case, that’s essentially 10% of the class that’s inaccurate.

Many schools don’t even list prelim matches, or they are very challenging to find, like in Northwestern’s case. Moreover, many schools never explain why there is such a gap between their class size and their match list length.

Factor in the 5th year complaint about not comparing apples to apples… then throw in the mission fit of many schools leaning more primary care, and the fact that many students (especially at state schools) simply wanted to stay in state and prioritized location over ranking, and I’m realizing it’s impossible to have an equitable ranking of lists without a large quantity of data that doesn’t exist.

Especially since, as @TheRealBibFortuna pointed out, these rankings are based on the dubious and questionable doximity residency rankings to begin with.

Seeing match data and being able to compare it school to school is a fantastic tool, but after this discussion, I’m now of the opinion that match list strength should not be a very prominent factor in the ranking, until the kind of data needed to account for the many substantial confounding variables becomes readily accessible. I know many will disagree with me regarding this, and that’s totally fair. At the end of the day, rankings are irrelevant to begin with (as no one looks at them except premeds) so there are no stakes here!

Yes, as @TheRealBibFortuna mentioned, people looking at the data should look at the methodology and determine its flaws for themselves, but realistically, how many actually will? I just want the data to be as equitable as possible for future premeds to be able to rely on more fully.
 
Seeing match data and being able to compare it school to school is a fantastic tool, but after this discussion, I’m now of the opinion that match list strength should not be a very prominent factor in the ranking, until the kind of data needed to account for the many substantial confounding variables becomes readily accessible. I know many will disagree with me regarding this, and that’s totally fair. At the end of the day, rankings are irrelevant to begin with (as no one looks at them except premeds) so there are no stakes here!

Yes, as @TheRealBibFortuna mentioned, people looking at the data should look at the methodology and determine its flaws for themselves, but realistically, how many actually will? I just want the data to be as equitable as possible for future premeds to be able to rely on more fully.
You can directly communicate any information about discrepancies you notice through the little chat button in the lower right corner of the site. From what I can tell, the whole thing is run by a single M1 who manually inputs match lists. As relatively chill as first year is, it's still ridiculous to expect USNews levels of precision, etc.

Personally, I think admit.org is very much moving in the right direction with more focus on student outcomes via match lists. Past a certain point, what difference does another $100 million in research funding make for a medical student? That's what medical school rankings have historically stratified on and what cemented HMS as the perennial #1.

In terms of data, we should get more standardized and high-quality public reporting of match outcomes beginning in 2026. Maybe we'll get school-specific Step 2 averages with that as well, which I agree would be a great addition to the ranking methodology. With that in mind, I think it's great that @HappyRabbit has created a method to quantify match list strength with what we have now.

Plus I like being able to tell people I go to a T3😎

EDIT: I still don't think you provided convincing evidence that those missing matches are MSTP-dominant. Taking WashU as an example, it would be legitimately horrifying if those 6 prelim surg matches were MD/PhD to boot. I really think these are just honest mistakes.
 
You can directly communicate any information about discrepancies you notice through the little chat button in the lower right corner of the site. From what I can tell, the whole thing is run by a single M1 who manually inputs match lists. As relatively chill as first year is, it's still ridiculous to expect USNews levels of precision, etc.

Personally, I think admit.org is very much moving in the right direction with more focus on student outcomes via match lists. Past a certain point, what difference does another $100 million in research funding make for a medical student? That's what medical school rankings have historically stratified on and what cemented HMS as the perennial #1.

In terms of data, we should get more standardized and high-quality public reporting of match outcomes beginning in 2026. Maybe we'll get school-specific Step 2 averages with that as well, which I agree would be a great addition to the ranking methodology. With that in mind, I think it's great that @HappyRabbit has created a method to quantify match list strength with what we have now.

Plus I like being able to tell people I go to a T3😎
Admit.org is an incredible resource, and it’s insane that HappyRabbit has managed to accomplish all of this! The school list builder especially is an absolute lifesaver, and everyone I know uses it.

With that being said, if schools are going to be ranked with unreliable data (unreliable as in there are some discrepancies on admit, unreliable as in schools do not fully or accurately report their data, and unreliable as in it depends on a dubious external ranking (Doximity)), why rank them at all? You could go up to a stranger and say Stanford is #1, #3, #10, or #20, and they wouldn't argue with you, because no one but premeds looks at these numbers. Whether a school is #3 or #10, is there an actual change in the quality of education you receive? No.

In my opinion, rankings are designed to help inform applicants. Match list strength is a very important factor, but with the data we have now, quantifying it properly is impossible, in my opinion, and quantifying it improperly is a disservice.
 
EDIT: I still don't think you provided convincing evidence that those missing matches are MSTP-dominant. Taking WashU as an example, it would be legitimately horrifying if those 6 prelim surg matches were MD/PhD to boot. I really think these are just honest mistakes.
Anecdotal, based on comparing match lists to admit's data for schools on my list (not WashU specifically). I'll use the chat button to point them out so that the team can get them fixed!

The original discussion surrounding MSTP was whether or not that’s a confounding variable too, and I 100% believe it is. Ideally, MSTP match list strength would be evaluated separately, as it’s such a different program.
 
Admit.org is an incredible resource, and it’s insane that HappyRabbit has managed to accomplish all of this! The school list builder especially is an absolute lifesaver, and everyone I know uses it.

With that being said, if schools are going to be ranked so imperfectly, why rank them at all? What does ranking actually accomplish? Whether a school is #3 or #10, is there an actual change in the quality of education you receive? No.

Match list strength is a very important factor, but with the data we have now, quantifying it properly is impossible, in my opinion.
This is what USNews tried to do. After they changed to a tier system, they have become entirely irrelevant, and schools have essentially solidified their last published rank in the public consciousness.

Beyond premeds deciding between schools, I think rankings and the data associated with them are important for a different but arguably more important reason: they hold schools accountable. I think that by picking the right metrics, we can incentivize schools to optimize things that actually matter for students (e.g. match outcomes) instead of those that do not (e.g. research dollars).

EDIT: Removed an analogy that does not make sense. Got carried away haha
 
This is what USNews tried to do. After they changed to a tier system, they have become entirely irrelevant, and schools have essentially solidified their last published rank in the public consciousness.

Beyond premeds deciding between schools, I think rankings and the data associated with them are important for a different but arguably more important reason: they hold schools accountable. I think that by picking the right metrics, we can incentivize schools to optimize things that actually matter for students (e.g. match outcomes) instead of those that do not (e.g. research dollars). You can think of it like shifting from fee-for-service, where providers are compensated based on the raw number of procedures, etc. billed, to value-based care, where compensation is based on agreed-upon metrics for patient outcomes and quality of care.
I see the merits of your argument, and completely agree that holding schools accountable would be fantastic.

I’m just not convinced rankings are capable of doing that anymore. USNews for instance relied on a ton of data that schools submitted internally. A lot of admit.org data is directly sourced from USNews. If schools opt out (like they did for USNews), where would this data be sourced? I also think rankings encourage schools to try and game them. Remember what Columbia undergrad did? What’s to stop schools from misrepresenting their numbers? Especially when admit, as you pointed out, is a small team compared to USNews.

Again though, I do agree holding schools accountable is super important. I think there could be a way to allay some of the concerns over confounding variables while still providing useful data to premeds and still potentially holding schools accountable (provided my assumptions above are wrong). You pointed out that tiers ruined USNews, and I definitely think USNews went about it the wrong way (too few tiers, bad methodology), but I do think a tiered system could potentially be beneficial. It eliminates obsession over minute differences in prestige, and could help muddle the impact of some of the confounding variables.

As an example and thought experiment, compare Michigan and Yale. I think Michigan (like many state schools) is penalized in the match list component for having a large number of students that simply want to stay in-state and don't care about ranking (if you compare the old admit ranking to the new one, most state schools moved down as a result). I think Yale is given a boost due to the percentage of students that take a 5th year, resulting in a more competitive app. If there were just tiers, and Yale and Michigan were both in, say, tier 2 (below Harvard, etc.), that would help muddle the confounding variables. Match list would still be a strong factor, but exact precision would be less of a concern. It could still hold schools accountable, as Michigan or Yale could theoretically still easily drop a tier (and it would be much easier to drop a tier if there were, say, 8-10 tiers instead of the small number USNews used) if their results don't stay strong.
 
Again though, I do agree holding schools accountable is super important. I think there could be a way to allay some of the concerns over confounding variables while still providing useful data to premeds and still potentially holding schools accountable (provided my assumptions above are wrong). You pointed out that tiers ruined USNews, and I definitely think USNews went about it the wrong way (too few tiers, bad methodology), but I do think a tiered system could potentially be beneficial. It eliminates obsession over minute differences in prestige, and could help muddle the impact of some of the confounding variables.
That's an interesting compromise. Raw rankings would put more pressure on schools to stay at the top of their game, knowing they could be scrutinized for any dip in rank. On the other hand, given the many confounders, small dips are utterly meaningless. Like you said, a tier system with enough tiers and the right methodology would help filter out the noise without allowing schools to become too complacent.

However, to make a tier system, you'd have to decide how many tiers to have, how many schools to include in each tier, whether tiers should be equal in size, etc. All of those decisions would be arbitrary and less objective than a raw number. It may also amplify the perceived difference between two schools right on the boundary between tiers (e.g. #20 vs #21 if the cutoff is top 20).

With the raw rankings, we already see premeds referring to schools in general tiers (e.g. T5/10/20). I think I lean towards giving more information rather than less and letting people form their own tiers if they wish.
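For what it's worth, the boundary problem is easy to see with even a toy bucketing scheme; the tier size here is an arbitrary choice, which is exactly the issue.

# Toy Python illustration: fixed-size tiers make #20 and #21 look categorically
# different even if their underlying scores are nearly identical.

def tier_of(rank, tier_size=10):
    return (rank - 1) // tier_size + 1

print(tier_of(20), tier_of(21))  # 2 3 -- adjacent schools land in different tiers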
 
That's an interesting compromise. Raw rankings would put more pressure on schools to stay at the top of their game, knowing they could be scrutinized for any dip in rank. On the other hand, given the many confounders, small dips are utterly meaningless. Like you said, a tier system with enough tiers and the right methodology would help filter out the noise without allowing schools to become too complacent.

However, to make a tier system, you'd have to decide how many tiers to have, how many schools to include in each tier, whether tiers should be equal in size, etc. All of those decisions would be arbitrary and less objective than a raw number. It may also amplify the perceived difference between two schools right on the boundary between tiers (e.g. #20 vs #21 if the cutoff is top 20).

With the raw rankings, we already see premeds referring to schools in general tiers (e.g. T5/10/20). I think I lean towards giving more information rather than less and letting people form their own tiers if they wish.
Fair enough! I’ve said my piece. What HappyRabbit does is of course completely up to him and his team.

I am still very very concerned about the confounding variables, but at the end of the day, no one but premeds will be looking at this ranking. If they end up choosing a school based solely on this data without recognizing the confounding variables, I guess that’s on them.

I really appreciate you approaching this discussion civilly and engaging with my points with respect!
 
I find it amusing how much people romanticize and mystify students at institutions like Stanford. Stanford’s student body is not meaningfully different from those at peer institutions. As someone accepted to Stanford and several peer institutions, I can confidently say that admitted students are neither "niche experts in their field" nor destined to be the next "Fauci." Student priorities, in this case additional research years, simply reflect the values and culture of their programs. At Stanford, I might have stayed an extra year or two because the curriculum supports it, funding is available, the culture encourages it, and many peers do the same. At NYU, I’d likely pursue the 3-year accelerated pathway. Stanford students aren’t inherently more driven or exceptional. It is just not that deep.

Mr. Macrophage is spot-on, and I’d go further: metrics like match list strength and stats (MCAT/GPA) are flawed for ranking medical schools. UCSF could easily admit students with stats matching those at NYU, USF Morsani, or Hofstra, showing these metrics reveal little about the quality and reputation of these programs. Match list strength is equally vague, muddled by too many confounding variables to provide meaningful data points. What defines a "strong" match list? Are we penalizing schools like UCSF for attracting students passionate about primary care in underserved areas, while rewarding programs like Case Western for recruiting those drawn to competitive specialties like surgery?

When evaluating medical schools, we ought to prioritize factors like historical reputation, NIH funding, research productivity (e.g., publications in high-impact journals), the quality of affiliated teaching hospitals (size, specialty diversity, patient population), average graduate indebtedness, number of students receiving scholarships, and student wellness. These provide a fuller picture of a program’s strength and impact in my opinion.
 
When evaluating medical schools, we ought to prioritize factors like historical reputation, NIH funding, research productivity (e.g., publications in high-impact journals), the quality of affiliated teaching hospitals (size, specialty diversity, patient population), average graduate indebtedness, number of students receiving scholarships, and student wellness. These provide a fuller picture of a program’s strength and impact in my opinion.
I don't agree with your first three criteria at all. Why should a school's historical reputation and research metrics matter to a medical student? Sure, you might say that both can help the student produce a competitive application for residency. Then why not directly quantify the desired outcome: the school's residency match list?

I agree with all your other criteria and would love to see them included in the ranking, but only if sufficient data is available. Graduate indebtedness definitely is. Wellness statistics are not standardized as far as I know and probably hard to quantify. Quantifying the quality of affiliated teaching hospitals is a whole other can of worms. Hospital rankings reflect medical care rather than the educational experience, and even residency rankings are not a direct reflection of the med student experience on the wards. I think some combination of those would still be informative to include in the ranking algorithm.

Ultimately, I believe the goal of medical school is to match into your specialty and residency program of choice. That's why I am heavily in favor of using match list strength as the top factor in a ranking of medical schools. Focusing on your first three suggested metrics only incentivizes schools to improve in ways that mean little to nothing to the students attending.

P.S. For those interested, someone published a ranking of medical schools focused solely on research metrics of graduates (impact of publications, NIH grants, awards, etc). It's from 2015, but the top 10 ended up being Harvard, Hopkins, Yale, UChicago, Cornell, Stanford, Penn, Columbia, Duke, and WashU (in that order). Pretty interesting stuff, likely of interest to students aiming for an academic career.
 
One of the issues with using graduate indebtedness is that there's a pretty strong correlation between med school rank and the probability that a student's med school tuition is being paid for by their parents (and that they are high SES overall).

You'll notice, for example, that Harvard has one of the lowest average debt amounts, but also one of the lowest % of students receiving financial aid. To my understanding, full-pay students are counted as having $0 debt and therefore skew the average significantly lower (though this gets revealed in the % of students receiving aid). Let me know if my thinking is right here, because it's something I wanted to include but wasn't sure was feasible because of the confound.
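Here's a toy example of the skew I mean (every number invented): if 40 of 100 students are full-pay and counted as $0 debt, the school-wide average looks much lower than what borrowers actually owe.

# Toy Python example of the full-pay skew; all numbers are made up.

debts = [0] * 40 + [250_000] * 60   # hypothetical class: 40 full-pay, 60 borrowers

mean_all_students = sum(debts) / len(debts)              # 150,000
borrowers = [d for d in debts if d > 0]
mean_among_borrowers = sum(borrowers) / len(borrowers)   # 250,000
pct_with_debt = len(borrowers) / len(debts)              # 0.6 (roughly what % on aid would reveal)

print(mean_all_students, mean_among_borrowers, pct_with_debt)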
 
I don't agree with your first three criteria at all. Why should a school's historical reputation and research metrics matter to a medical student? Sure, you might say that both can help the student produce a competitive application for residency. Then why not directly quantify the desired outcome: the school's residency match list?

I agree with all your other criteria and would love to see them included in the ranking, but only if sufficient data is available. Graduate indebtedness definitely is. Wellness statistics are not standardized as far as I know and probably hard to quantify. Quantifying the quality of affiliated teaching hospitals is a whole other can of worms. Hospital rankings reflect medical care rather than the educational experience, and even residency rankings are not a direct reflection of the med student experience on the wards. I think some combination of those would still be informative to include in the ranking algorithm.

Ultimately, I believe the goal of medical school is to match into your specialty and residency program of choice. That's why I am heavily in favor of using match list strength as the top factor in a ranking of medical schools. Focusing on your first three suggested metrics only incentivizes schools to improve in ways that mean little to nothing to the students attending.
I think it is shortsighted to conclude that the historical reputation of medical schools does not matter to students. Reputation carries serious weight in academia, opens doors, and impacts residency matching. I’ve seen many accepted applicants on SDN and Reddit turn down full-ride offers from solid mid-tier programs like Albert Einstein to go to Harvard and other elite schools, mostly because of the clout and prestige those names hold in medicine and beyond. Why care about research metrics? Grants are awarded to experts and leaders in their respective fields who have a track record of productivity. When you are surrounded by such people, you get opportunities for mentorship, letters of recommendation from people who are renowned in their field, networking opportunities, and the chance to get your name on high-impact papers, the sum of which gives you a leg up in matching and later in your career.

I agree that metrics such as student wellness are not easy to quantify. Likewise, I would argue that "match list strength" is equally difficult to quantify. What even constitutes a strong match list? Students matching into competitive specialties? Students matching to competitive institutions? Take, for instance, California students. Most want to stay in California and will readily pass on more prestigious programs in other regions to stay in their state of preference. Is that an indictment of the medical school or the ability of students at the school to match into competitive residency programs? What about schools that deliberately recruit students with an interest in global health, health equity and primary care? Does that mean that the school is worse at matching into competitive specialties because students choose not to match into neurosurgery and dermatology? I think not. I am curious to hear why GPA/MCAT scores impact the experience of medical students at an institution.
 
One of the issues with using graduate indebtedness is that there's a pretty strong correlation between med school rank and the probability that a student's med school tuition is being paid for by their parents (and that they are high SES overall).

You'll notice, for example, that Harvard has one of the lowest average debt amounts, but also one of the lowest % of students receiving financial aid. To my understanding, full-pay students are counted as having $0 debt and therefore skew the average significantly lower (though this gets revealed in the % of students receiving aid). Let me know if my thinking is right here, because it's something I wanted to include but wasn't sure was feasible because of the confound.
You are faced with an impossible task: to quantify that which is largely unquantifiable. Your analysis concerning graduate indebtedness is correct. A school with a disproportionate number of students whose parents pay their tuition will result in lower average graduate indebtedness. MSAR does report the percentage of matriculated students who receive financial aid, which I suppose could be a substitute, although not a perfect one.

I'm curious to hear your perspective on why metrics such as the number of accepted students and yield are relevant to the quality of a program, though? Would such a metric not punish excellent programs in undesirable locations, such as the Mayo Clinic, WashU, and University of Michigan, disproportionately?
 
You are faced with an impossible task: to quantify that which is largely unquantifiable. Your analysis concerning graduate indebtedness is correct. A school with a disproportionate number of students whose parents pay their tuition will result in lower average graduate indebtedness. MSAR does report the percentage of matriculated students who receive financial aid, which I suppose could be a substitute, although not a perfect one.

I'm curious to hear your perspective on why metrics such as the number of accepted students and yield are relevant to the quality of a program, though? Would such a metric not punish excellent programs in undesirable locations, such as the Mayo Clinic, WashU, and University of Michigan, disproportionately?

Yield is an important metric to include because it has a direct correlation with both the desirability of the school and the quality of the students that matriculate. It also gives insight into the admissions practices of schools - especially those who have to give out 2 or 3 times the number of acceptances compared to peer institutions.

Say two schools share similar metrics across the board, including class size, but one school has to give out 3 times the number of acceptances as the other to yield the same class size - why would this school not be ranked lower? We know that the school sending out more acceptances relative to their class size is going to have a 'worse' pick of students and overall be less in demand.
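Put concretely (with invented numbers), the metric I mean is just matriculants divided by acceptances sent:

# Toy Python comparison of the yield argument; all numbers are made up.

def yield_rate(matriculants, acceptances_sent):
    return matriculants / acceptances_sent

school_a = yield_rate(100, 150)  # ~0.67
school_b = yield_rate(100, 450)  # ~0.22 -- 3x the acceptances for the same class size

print(school_a, school_b)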

Also, have to mention FYI that rankings are probably the lowest thing on my priority list and something I don't care too much about - they provide close to zero value to applicants and I would rather spend my time on utility features that make a direct impact on access to admissions.
 
Yield is an important metric to include because it has a direct correlation with both the desirability of the school and the quality of the students that matriculate. It also gives insight into the admissions practices of schools - especially those who have to give out 2 or 3 times the number of acceptances compared to peer institutions.

Say two schools share similar metrics across the board, including class size, but one school has to give out 3 times the number of acceptances as the other to yield the same class size - why would this school not be ranked lower? We know that the school sending out more acceptances relative to their class size is going to have a 'worse' pick of students and overall be less in demand.
What you are essentially measuring is the desirability of the location of the medical school. It's no coincidence that programs such as WashU, Vanderbilt, Mayo Clinic, Yale, and Johns Hopkins receive considerably fewer applications than their peer institutions and have to admit more students. It's simply because many applicants don't want to live in Rochester, St. Louis, and Baltimore. I don't agree that these programs have a "worse" pick of students, as there is little that separates students who get admitted to T10 programs in metrics such as intelligence, life experiences, and abilities.

I absolutely hear you when you say you don't care about the ranking. Unfortunately, it's probably the thing pre-medical students care about the most, lol.
 
P.S. For those interested, someone published a ranking of medical schools focused solely on research metrics of graduates (impact of publications, NIH grants, awards, etc). It's from 2015, but the top 10 ended up being Harvard, Hopkins, Yale, UChicago, Cornell, Stanford, Penn, Columbia, Duke, and WashU (in that order). Pretty interesting stuff, likely of interest to students aiming for an academic career.
UChicago devoted a large chunk of their second look to talking about research and the like. Now it’s clear why! Thanks for the interesting link!
 
I think it is shortsighted to conclude that the historical reputation of medical schools does not matter to students. Reputation carries serious weight in academia, opens doors, and impacts residency matching. I’ve seen many accepted applicants on SDN and Reddit turn down full-ride offers from solid mid-tier programs like Albert Einstein to go to Harvard and other elite schools, mostly because of the clout and prestige those names hold in medicine and beyond. Why care about research metrics? Grants are awarded to experts and leaders in their respective fields who have a track record of productivity. When you are surrounded by such people, you get opportunities for mentorship, letters of recommendation from people who are renowned in their field, networking opportunities, and the chance to get your name on high-impact papers, the sum of which gives you a leg up in matching and later in your career.

I agree that metrics such as student wellness are not easy to quantify. Likewise, I would argue that "match list strength" is equally difficult to quantify. What even constitutes a strong match list? Students matching into competitive specialties? Students matching to competitive institutions? Take, for instance, California students. Most want to stay in California and will readily pass on more prestigious programs in other regions to stay in their state of preference. Is that an indictment of the medical school or the ability of students at the school to match into competitive residency programs? What about schools that deliberately recruit students with an interest in global health, health equity and primary care? Does that mean that the school is worse at matching into competitive specialties because students choose not to match into neurosurgery and dermatology? I think not. I am curious to hear why GPA/MCAT scores impact the experience of medical students at an institution.
Everything I’ve heard suggests that medical school name stops mattering the moment you make it into residency. At that point, it is your residency program’s name that will follow you throughout your career and make the kinda of differences you mention. Even then, it is only for the subset of people interested in a career that is prestige-conscious or heavily connections-based. Your listed examples are all things I’ve already implicitly considered in my earlier statement which I’ll repeat here: Sure, you might say that both can help the student produce a competitive application for residency. Then why not directly quantify the desired outcome: the school's residency match list?

I agree match lists are difficult to quantify. But unlike wellness, we have actual public data to work with, and it represents the ultimate outcome of students’ time at a medical school. I appreciate that effort was put in to quantifying what I believe to be the most important quality of a medical school.

All the nuances you mentioned are valid. Even at Stanford, I know plenty of people who voluntarily chose to match locally instead of say MGH despite interviewing there. It will never be possible to control for all those things unless schools start disclosing way more information than they currently do.

I didn’t say anything about MCAT/GPA, but since you brought it up, they are objectively the best predictors we have of medical school performance, including Step scores, clerkship grades, etc. If those metrics ever become public, I’m all for dropping MCAT/GPA in favor of those. Schools are understandably stingy with these kinds of metrics. Why share information that could potentially make you look bad if you can just coast on your reputation?
 
What you are essentially measuring is the desirability of the location of the medical school. It's no coincidence that programs such as WashU, Vanderbilt, Mayo Clinic, Yale, and Johns Hopkins receive considerably fewer applications than their peer institutions and have to admit more students. It's simply because many applicants don't want to live in Rochester, St. Louis, and Baltimore. I don't agree that these programs have a "worse" pick of students, as there is little that separates students who get admitted to T10 programs in metrics such as intelligence, life experiences, and abilities.

I absolutely hear you when you say you don't care about the ranking. Unfortunately, it's probably the thing pre-medical students care about the most, lol.

I have to disagree with your assessment that all admits to T10 schools are the same.

re: your other point, take Vanderbilt for example - from the cycle results data on Admit I can tell you that it has one of the highest overlaps in acceptances with other peer T10 institutions. Said another way, applicants who are getting into Vanderbilt are highly, highly likely to receive an acceptance to say Columbia.

I've never heard of Vanderbilt having issues with location, and the likely culprit behind their yield rate being a staggering 28% is the fact that when applicants are faced with the decision of Vandy vs. X, they're always choosing the other T10 school for one reason or another. Why would this metric therefore not be included? It's probably one of the clearest signals for school demand when comparing schools against each other.
 
Sorry for the brief change in topic, but why does admit not have data on uro and ophtho on the residency side @HappyRabbit ?

Also, you may want to remove the rankings from vascular surgery and thoracic surgery. Doximity presents them in alphabetical order, indicating that they are probably not ranked.
 
Sorry for the brief change in topic, but why does admit not have data on uro and ophtho on the residency side @HappyRabbit ?

Also, you may want to remove the rankings from vascular surgery and thoracic surgery. Doximity presents them in alphabetical order, indicating that they are probably not ranked.
Never got around to doing it - will focus more on residency stuff after I finish the essay manager for med school.
 
I don't agree with your first three criteria at all. Why should a school's historical reputation and research metrics matter to a medical student? Sure, you might say that both can help the student produce a competitive application for residency. Then why not directly quantify the desired outcome: the school's residency match list?

I agree with all your other criteria and would love to see them included in the ranking, but only if sufficient data is available. Graduate indebtedness definitely is. Wellness statistics are not standardized as far as I know and probably hard to quantify. Quantifying the quality of affiliated teaching hospitals is a whole other can of worms. Hospital rankings reflect medical care rather than the educational experience, and even residency rankings are not a direct reflection of the med student experience on the wards. I think some combination of those would still be informative to include in the ranking algorithm.

Ultimately, I believe the goal of medical school is to match into your specialty and residency program of choice. That's why I am heavily in favor of using match list strength as the top factor in a ranking of medical schools. Focusing on your first three suggested metrics only incentivizes schools to improve in ways that mean little to nothing to the students attending.

P.S. For those interested, someone published a ranking of medical schools focused solely on research metrics of graduates (impact of publications, NIH grants, awards, etc). It's from 2015, but the top 10 ended up being Harvard, Hopkins, Yale, UChicago, Cornell, Stanford, Penn, Columbia, Duke, and WashU (in that order). Pretty interesting stuff, likely of interest to students aiming for an academic career.
I agree that the goal of medical school is to match into your specialty and residency program of choice. But what happens when your specialty and residency of choice is to go into primary care in a rural area? Do schools get penalized for that? Does holding schools accountable mean that, to "boost their rankings," they need to advise students to pursue competitive specialties at academic institutions only? @HappyRabbit I do hope there is clarity on WHY there is a penalty in your rankings for those personal preferences.
 
I agree that the goal of medical school is to match into your specialty and residency program of choice. But what happens when your specialty and residency of choice is to go into primary care in a rural area? Do schools get penalized for that? Does holding schools accountable mean that, to "boost their rankings," they need to advise students to pursue competitive specialties at academic institutions only? @HappyRabbit I do hope there is clarity on WHY there is a penalty in your rankings for those personal preferences.
You keep reintroducing this topic, so what exact change are you looking for? Be specific. Anyone who's been watching Admit develop knows the list was based on the US News research rankings (the first iteration of the rankings was literally just copied and pasted), and I don't see many people frothing at the mouth for the return of the primary care rankings. So, assuming Rabbit would want to incorporate primary care preference into what is currently a research/match strength/competitiveness ranking list, how would they do it? Ask the school? They would give whatever answer made them look best. Poll the students? Same issue. So how exactly do you propose accounting for that using data that can be standardized across schools?

Also, I'm still not sold that including primary care preference would even be of value to a ranking system like the one currently in place. @Mr. Macrophage seems supportive of including it, so I'd love to hear his reasoning, but a list that factored in both research and primary care preference (were it even possible to factor in the latter) would be the worst of both worlds. Let's say that this rank list puts Columbia at the #1 slot because of the triple threat of Ivy League status, high research funding, and the Columbia Bassett rural/primary care campus. People going there for research are going to be bummed to see fewer research opportunities than lower-ranked schools, and people going there for primary care are going to be bummed if they don't get picked for Bassett. You're asking for the apples to be thrown in with the oranges. There's a reason research and primary care rankings were compared separately, and there's a reason the primary care ranking was such a mess that it's ignored to this day. The people hoping to practice in rural towns or stay in their home state have even less reason to consider rankings than the premed interested in academia or research (the areas where rankings could even matter). It strikes me as pointlessness within pointlessness, so hopefully either one of you can show me what I'm missing.
 
I agree that the goal of medical school is to match into your specialty and residency program of choice. But what happens when your specialty and residency of choice is to go into primary care in a rural area? Do schools get penalized for that? Does holding schools accountable mean that, to "boost their rankings," they need to advise students to pursue competitive specialties at academic institutions only? @HappyRabbit I do hope there is clarity on WHY there is a penalty in your rankings for those personal preferences.
On average, specialty interests among first-year med students (but not outcomes) are pretty normally distributed regardless of the med school, with some skew near the top, and one of the key metrics that prove a medical school is successful as an institution is its ability to place medical students into residency programs. If a medical school is not able to match students into surgical specialties, or, say, place medical students with a deserving profile into top IM programs, what does that say about the quality of the institution? Step 2 scores and matching outcomes are some of the most accurate signals that can be used to judge how well medical schools are doing at their job, which is quite literally to match applicants successfully into residency programs.

Your argument can be raised for basically any and all ranking criteria that exist. Why include research funding, for example, if an applicant wants to do FM in a rural area and not go into academics? I'm not saying that a perfect set of criteria exists, but I think what's being used now is probably among the best metrics available. When it comes to rankings in general, what most people care about is match quality and research opportunities anyway, which is why those metrics are included and not, say, match rate into FM.
 
You keep reintroducing this topic, so what exact change are you looking for? Be specific. Anyone who's been watching Admit develop knows the list was based on the US News research rankings (the first iteration of the rankings was literally just copied and pasted), and I don't see many people frothing at the mouth for the return of the primary care rankings. So, assuming Rabbit would want to incorporate primary care preference into what is currently a research/match strength/competitiveness ranking list, how would they do it? Ask the school? They would give whatever answer made them look best. Poll the students? Same issue. So how exactly do you propose accounting for that using data that can be standardized across schools?

Also, I'm still not sold that including primary care preference would even be of value to a ranking system like the one currently in place. @Mr. Macrophage seems supportive of including it, so I'd love to hear his reasoning, but a list that factored in both research and primary care preference (were it even possible to factor in the latter) would be the worst of both worlds. Let's say that this rank list puts Columbia at the #1 slot because of the triple threat of Ivy League status, high research funding, and the Columbia Bassett rural/primary care campus. People going there for research are going to be bummed to see fewer research opportunities than lower-ranked schools, and people going there for primary care are going to be bummed if they don't get picked for Bassett. You're asking for the apples to be thrown in with the oranges. There's a reason research and primary care rankings were compared separately, and there's a reason the primary care ranking was such a mess that it's ignored to this day. The people hoping to practice in rural towns or stay in their home state have even less reason to consider rankings than the premed interested in academia or research (the areas where rankings could even matter). It strikes me as pointlessness within pointlessness, so hopefully either one of you can show me what I'm missing.
I’m not really sold on primary care rankings! I just think it’s yet another confounding variable in the current match list ranking (unless HappyRabbit adjusted for it somehow?). Certain schools like Emory and UChicago prioritize more primary care fields than other schools, so they won’t get as many points as schools like Case Western that match an absurd number of competitive specialties. It’s not that UChicago students couldn’t match neurosurgery, it’s that they didn’t want to, for instance.
 
Ultimately, I believe the goal of medical school is to match into your specialty and residency program of choice. That's why I am heavily in favor of using match list strength as the top factor in a ranking of medical schools. Focusing on your first three suggested metrics only incentivizes schools to improve in ways that mean little to nothing to the students attending.

I completely agree with your first statement. I would also agree with your second statement, IF we actually had all the metrics necessary to divine that out of the match lists properly. I'm not sure it can ever be possible, though, as every match is subjective based on the interest of the given class and the capability of individual students. What if everyone at Harvard wanted to match plastics one year? Would it be 100%? If there were a way to determine that for any given school and any given residency, then that would be quite useful. Otherwise, you can just look at trends and pretend that the 5 people who matched Plastics at Harvard were the only ones who wanted to and ended up at the programs they wanted. Who knows? The data doesn't tell us.
 
I completely agree with your first statement. I would also agree with your second statement, IF we actually had all the metrics necessary to divine that out of the match lists properly. I'm not sure it can ever be possible, though, as every match is subjective based on the interest of the given class and the capability of individual students. What if everyone at Harvard wanted to match plastics one year? Would it be 100%? If there were a way to determine that for any given school and any given residency, then that would be quite useful. Otherwise, you can just look at trends and pretend that the 5 people who matched Plastics at Harvard were the only ones who wanted to and ended up at the programs they wanted. Who knows? The data doesn't tell us.
You're describing year-by-year variance in student interests, not aggregate matches over several years. The year-by-year variance, regardless, doesn't tend to fluctuate that dramatically. But if you assess match lists for the prior 5 years and cast a wider net for specialties (accumulating the top 5-10 most competitive specialty matches, along with top IM, etc.), you can get a much more stable measure of how and where a school tends to match students. You can also use that metric to see how match list strength gradually changes over time, for instance as a new school becomes more established or an established school becomes more renowned.
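A minimal sketch of the pooled-years idea (the specialty basket and all data below are made up, and a real version would obviously need program quality too, not just specialty):

# Python sketch: pool several years of matches and measure the share going into a
# fixed basket of competitive specialties.

COMPETITIVE = {"Plastic Surgery", "Dermatology", "Neurosurgery", "ENT", "Ortho"}

def competitive_share(match_lists_by_year):
    all_matches = [spec for year in match_lists_by_year.values() for spec in year]
    return sum(spec in COMPETITIVE for spec in all_matches) / len(all_matches)

example = {
    2021: ["Dermatology", "IM", "IM", "FM"],
    2022: ["Plastic Surgery", "IM", "Psych", "IM"],
    2023: ["IM", "IM", "Ortho", "Peds"],
}
print(round(competitive_share(example), 2))  # 0.25 across the pooled years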
 
I agree that the goal of medical school is to match into your specialty and residency program of choice. But what happens when your specialty and residency of choice is to go into primary care in a rural area? Do schools get penalized for that? Does holding schools accountable mean that, to "boost their rankings," they need to advise students to pursue competitive specialties at academic institutions only? @HappyRabbit I do hope there is clarity on WHY there is a penalty in your rankings for those personal preferences.
I’m not really sold on primary care rankings! I just think it’s yet another confounding variable in the current match list ranking (unless HappyRabbit adjusted for it somehow?). Certain schools like Emory and UChicago prioritize more primary care fields than other schools, so they won’t get as many points as schools like Case Western that match an absurd number of competitive specialties. It’s not that UChicago students couldn’t match neurosurgery, it’s that they didn’t want to, for instance.
I completely agree with your first statement.
You're describing year-by-year variance in student interests, not aggregate matches over several years.

The rankings, and reading match list tea leaves, have been beaten to death over the years. Discussion has again been respectful so far, but I direct everyone to comment on relevant aspects of, and feedback on, this tool.
 
I agree that the goal of medical school is to match into your specialty and residency program of choice. But what happens when your specialty and residency of choice is to go into primary care in a rural area? Do schools get penalized for that? Does holding schools accountable mean that, to "boost their rankings," they need to advise students to pursue competitive specialties at academic institutions only? @HappyRabbit I do hope there is clarity on WHY there is a penalty in your rankings for those personal preferences.
Two thoughts. (1) The majority of medical students change their mind about what they want to do by the time they graduate. (2) Anyone who is absolutely set on a career in primary care in a rural area should ignore rankings altogether.

I like to think of rankings as an admittedly crude and imperfect measure of the number of doors a school keeps open for you. Before anything else, I just want to reiterate that there are only a handful of careers in medicine where training pedigree matters (e.g. becoming dean of a med school). The schools topping all the relevant rankings (USNews research, PD research, admit.org) keep more of these careers open mainly because their students match at prestigious residencies. The way I understand it, your med school name means little to nothing compared to your residency/fellowship name. I can tell you that in med school, no one cares where you did undergrad.

As noble a goal as it is to train future rural primary care physicians, you can do that from any reputable medical school. These rankings are entirely irrelevant to a student dead set on such a path. For everyone else, I'd argue the more doors a school keeps open, the better it objectively is.

Also, I feel you are unfairly assuming the algorithm penalizes schools whose students prefer rural primary care. That was not explicitly stated anywhere; it is more likely that schools which match high proportions of students into high-ranking academic residencies receive a boost, one that I'd argue is justified by my reasoning above.
 
You keep reintroducing this topic, so what exact change are you looking for? Be specific. Anyone who's been watching Admit develop knows the list was based off of the US News research rankings (the first iteration of the rankings was literally just copied and pasted), and I don't see many people frothing at the mouth for the return of the primary care rankings. So, assuming Rabbit wanted to incorporate primary care preference into what is currently a research/match-strength/competitiveness ranking list, how would they do it? Ask the school? They would give whatever answer made them look best. Poll the students? Same issue. So how exactly do you propose accounting for that using data that can be standardized across schools?

On average, specialty interests among first-year med students (though not their outcomes) are fairly normally distributed regardless of the med school, with some skew near the top. One of the key metrics showing that a medical school is succeeding as an institution is its ability to place students into residency programs. If a medical school cannot match students into surgical specialties, or cannot place students with a deserving profile into top IM programs, what does that say about the quality of the institution? Step 2 scores and match outcomes are some of the most accurate signals available for judging how well medical schools are doing their job, which is quite literally to match applicants successfully into residency programs.

Your argument can be raised against basically any ranking criterion that exists. Why include research funding, for example, if an applicant wants to do FM in a rural area and has no interest in academics? I'm not saying there's a perfect set of criteria, but I think what's being used now is probably among the best available. When it comes to rankings in general, what most people care about is match quality and research opportunities anyway, which is why those metrics are included and not, say, match rate into FM.
I see your point! I do think match "quality" is completely subjective, though - for example, I'm pretty interested in EM, and a lot of the most sought-after EM residencies tend to be community-based rather than academic. Either way, you are right that there is no perfect system.

I think the reason I keep bringing up these points is that there is just no acknowledgement that by building the rankings on these criteria (match list outside of home program, academic institution matches, metropolitan locations), you are insinuating that rural/primary care physicians are less desirable or impressive, which is why it feels like I'm really dying on this hill lol. And imo, there are other ways to rank that don't rely on match lists in a way that prioritizes prestigious matches and diminishes schools focused on primary care and rural locations. Especially at a time when rural medicine is so exceedingly lacking throughout this country.

At the end of the day, the rankings can be whatever they need to be, and admit.org is an impressive and great tool for students - but I stand by my point that the rankings feel like they just reinforce rural/primary care as less impressive or desirable than roles at prestigious academic institutions. And I felt that was an important point to keep in mind when publicizing these rankings.

I do think HappyRabbit most likely did their best to control for confounding variables, and I applaud how much time and work they put in, and also that they're letting us all have this discussion! We all want to make this the best tool possible if we can.
 
Less impressive? I guess that depends on what you mean by impressive and from which perspective we're looking at. "Impressive" is certainly too nebulous a concept to warrant any change on Admit's part, either way.

Less desirable? Pretty sure I can demonstrate that primary care and rural medicine very much are less desirable on average. You can prove that to yourself using match data (specifically, which programs go unfilled), placement of new attendings by city, Medicare reimbursement for FM and primary care IM, the vast incentives provided for rural medicine that go underutilized purely because of how undesirable the prospect is, or even the "physician shortage problem" itself. I really don't think primary care and rural medicine need any help from Admit to look undesirable. I saw a post a while ago about a job paying double the average salary for a PCP, yet the discussion centered around the many reasons taking such a job isn't worth it.

Either way, the real impact of ranking rural/primary care focused schools lower is that they are painted as less competitive and as having fewer opportunities. Both of which are true. Harvard has its primary care tracks, Columbia has Bassett. Those schools are ranked high because they can do most everything WVU can do, and more.
 
This thread has strayed from its original purpose. This happened again very soon after I mentioned it earlier today. We are approaching the point where the thread has run its course and will be closed, or certain users will need a break for a time from their ongoing back and forth around the same basic ideas.
 
The topic is very much still on Admit.org and its features (the feature of discussion being the rankings). Unless your concern is that we are no longer on the topic of "Updated Post II Acceptance Rates 2023"? I've noticed SDN moderators like to jump in and exert their will anytime a prolonged discussion takes place, and I wonder if that's one of the reasons SDN falls so far behind in active users as compared to other med/health boards like Reddit. The medical student forums here are ghost towns...
 
This thread was created to discuss that site's features and corrections. It has devolved into 2+ pages about the match and rankings in general, well beyond the methodology the author used for one small aspect of the tool (and the author has already indicated there are very few ways to adjust it). We have had a plethora of threads on rankings in the past, which always end in users attacking each other. There is nothing new to discuss, and we have already had to intervene here once for name-calling and hostility.
 
The essay manager is now released! You can add as many schools as you would like, add custom essay prompts within each school, write essays directly on Admit, and manage the status of your writing (not started, editing, completed).

[Screenshots of the essay manager interface attached]
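For anyone curious how the pieces described above could fit together, here is a minimal sketch of one possible data model, assuming only what the announcement names (schools, custom prompts per school, an editable character count, and a per-essay status). This is purely illustrative and not Admit's actual implementation; every identifier below is made up.

```python
from dataclasses import dataclass, field
from enum import Enum

class EssayStatus(Enum):
    NOT_STARTED = "not started"
    EDITING = "editing"
    COMPLETED = "completed"

@dataclass
class Essay:
    prompt: str                                    # custom essay prompt
    char_limit: int                                # editable character count
    status: EssayStatus = EssayStatus.NOT_STARTED  # writing status
    text: str = ""                                 # essay draft

@dataclass
class School:
    name: str
    essays: list[Essay] = field(default_factory=list)

# Example: track one secondary prompt for one school
school = School(name="Example School of Medicine")
school.essays.append(Essay(prompt="Why our school?", char_limit=2000))
school.essays[0].status = EssayStatus.EDITING
```

The point is simply that a school owns a list of essays, and each essay carries its own prompt, limit, and status, which mirrors how the feature is described.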
 

Awesome! Love the editable character count and status options.

What about a more generalized "Primary Application" type (folder) instead of just "Personal Statement"? It would encourage people to also use it for their Work & Activities and Other Impactful essays.
 

Will make this change, thanks!
 