T5 vs T20 MSTP Difference?


Hi everyone! I was recently accepted into a T5 and a T20 MSTP program and wanted to know whether there would be a difference between the programs for future competitive residencies. The programs in question are UChicago Pritzker and UPenn MSTP. I really love Pritzker so far for a multitude of reasons, but I can definitely feel a difference in prestige from people, since Pritzker seems not to be a T10 program. I know I can succeed in either environment, but I don't want to turn down a bump for residency, especially since med school prestige is becoming more relevant. Will this impact residency placements? I am interested in ortho or ENT and in becoming a productive academic physician.

 
There is so much overlap between the graduates of these two programs. It is more about you, your publications, your grants, etc. This will depend on your drive and your match with your mentor more than anything else... The difference in "med school prestige" is minimal, and prestige in itself is overrated. What matters most is what you can accomplish during training...
 
Thank you! I was told, though, that I should consider the difference for academic medicine, which is something I want to pursue. Does that inform the decision? This paper seems to really support the conclusions that you are describing.
 
Any reputable MD/PhD program has the ability to place you at a top residency. When comparing T5 vs. T20, there's very little difference in where their graduates match. Go where there are people you want to work with and where you'll be happiest spending the next 8 years of your life.
 
In my opinion, as mentioned above, neither institution is going to "hold you back." 7-8 years is a long time; I would go wherever you see yourself thriving and being happy. I would try to attend both second looks, pay attention to whether you like the city and the culture/happiness of current students, and see if you mesh well with other accepted students. Congratulations on a very successful cycle :)
 
ENT and Orthopedics are very different, and pursuing significant research in either on a traditional "academic medicine" track (particularly orthopedics) is difficult and uncommon.

I've heard a version of this from a lot of premeds, usually stemming from a lack of information and from echoed sentiments of what they believe they "should want," picked up from friends and forum boards like this one. It's OK not to know what your goals are before you come in; even if you do, they can (and often do) change.

At your stage, focus on the research/PIs you are interested in and on your preferences for the intangibles of the school (location, proximity to friends/family, any other factors that are important to you), and you will be happy and successful. Be open to discovering what you want while you are in school; it's part of the process.
 
Thank you! I was told, though, that I should consider the difference for academic medicine, which is something I want to pursue. Does that inform the decision? This paper seems to really support the conclusions that you are describing.
Your link concatenates two hyperlinks.

By the way, all MSTP students have an annual conference where they and the PDs meet everyone else. There aren't cliques in the APSA... as far as I know.
 
Thank you! I was told, though, that I should consider the difference for academic medicine, which is something I want to pursue. Does that inform the decision? This paper seems to really support the conclusions that you are describing.
Many articles utilize the same flawed USN&WR approach while using deeper-level metrics. Their key assumption is that collecting top-performing professors within the institution improves the metrics of the average trainee graduate. The purpose of these rankings is to assess the value that a degree/training from that institution confers on the trainee. If these top-performing professors wall themselves in their ivory towers, the average trainee will not see them and will benefit only marginally from their presence. Institutional culture matters more than collecting top-performing professors. The article by Goldstein et al. examined the entire pool of MD graduates (not just MD/PhD), looking at the average MD using advanced research productivity metrics; unfortunately, however, they capture trainee success too late in graduates' development to be useful for assessing the current status of a program. In addition, they miss the other missions of the SOM.

In my view, there are four key factors that matter to prospective MD/PhD trainees as they are choosing schools:
1) Ask about the quality of the interactions of the MD/PhD trainees with the top-performing professors who might become your PhD mentors. Quantity [of interactions, of top-performing professors (more than 5 in your field), and of your MD/PhD class] is also important, but less so than quality.
2) Outcomes of the MD/PhD trainees: residency match, publications and grants, time to degree, attrition.
3) Strength and quality of the MD/PhD leadership to support you during the 7 to 9 years of training.
4) Overall culture and happiness of trainees living and training in that environment.
 
This and other articles utilize the same flawed USN&WR approach while using deeper-level metrics. Their key assumption is that collecting top-performing professors within the institution improves the metrics of the average trainee graduate. The purpose of these rankings is to assess the value that a degree/training from that institution confers on the trainee. If these top-performing professors wall themselves in their ivory towers, the average trainee will not see them and will benefit only marginally from their presence. Institutional culture matters more than collecting top-performing professors.
I don't believe this is correct. The study by Goldstein and colleagues does not use research funding or the number of top-cited professors at an institution to calculate its rankings; it looks instead at graduates' accomplishments through various metrics (manuscripts, awards, clinical trials, etc.).

Many graduates have since affiliated with institutions other than their alma mater, but their contributions aggregate toward the school from which they obtained their degree, not toward their current or intervening affiliations.

Overall, it is a strong study. They collected data from over 1 million graduates spanning more than 60 years to find those who have been the most successful in academia, and they do away with subjective measures of low validity (e.g., questionnaires with <10% response rates from a single field). It is not perfect (matching manuscripts to profiles from over 1 million graduates required an algorithm from Doximity, and this algorithm was not fully accurate), but it is still a good paper.
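As an aside, the aggregation the paper describes is conceptually simple. Here is a minimal sketch in Python of the general idea (credit each graduate's weighted output to their alma mater, then average per school). The record fields, numbers, and weights are entirely hypothetical; they are not the metrics or weights actually used by Goldstein et al.:

```python
from collections import defaultdict

# Hypothetical per-graduate records; the real study matched >1 million
# graduates to manuscripts, grants, and awards via a Doximity algorithm.
graduates = [
    {"alma_mater": "School A", "papers": 12, "grants": 1, "awards": 0},
    {"alma_mater": "School A", "papers": 3,  "grants": 0, "awards": 1},
    {"alma_mater": "School B", "papers": 25, "grants": 2, "awards": 1},
]

# Hypothetical weights; the paper's actual weighting scheme differs.
WEIGHTS = {"papers": 1.0, "grants": 5.0, "awards": 3.0}

def school_scores(grads):
    """Aggregate each graduate's weighted output toward their alma mater,
    then average so large classes aren't rewarded for size alone."""
    totals, counts = defaultdict(float), defaultdict(int)
    for g in grads:
        score = sum(WEIGHTS[k] * g[k] for k in WEIGHTS)
        totals[g["alma_mater"]] += score
        counts[g["alma_mater"]] += 1
    return {school: totals[school] / counts[school] for school in totals}

# Prints schools sorted by average per-graduate score (School B first here).
print(sorted(school_scores(graduates).items(), key=lambda kv: -kv[1]))
```

The per-school averaging is one (hypothetical) way to keep class size from dominating; the paper's actual normalization may differ.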
 
In my view, there are four key factors that matter to prospective MD/PhD trainees as they are choosing schools:
1) Ask about the quality of the interactions of the MD/PhD trainees with the top-performing professors who might become your PhD mentors. Quantity [of interactions, of top-performing professors (more than 5 in your field), and of your MD/PhD class] is also important, but less so than quality.
2) Outcomes of the MD/PhD trainees: residency match, publications and grants, time to degree, attrition.
3) Strength and quality of the MD/PhD leadership to support you during the 7 to 9 years of training.
4) Overall culture and happiness of trainees living and training in that environment.
This is right, and wise. It can be hard to see how important these factors are before starting (especially the support of your MD/PhD leadership and culture).
 
Goldstein et al. attempted to include some training outcomes in their calculated metric. However, please examine Chart 1 and Table 1 of their publication (see below). Their model seems to be constructed primarily from current faculty and from trainee outcomes that typically occur more than 10 years after training (who becomes an HHMI investigator 4 years after completing residency?). As a neurologist, I take issue with the omission of Fellow of the American Academy of Neurology (FAAN) and Fellow of the American Neurological Association (FANA); I received both about 10-15 years after completing residency training. I also take issue with the huge dichotomy between the value of an R01 and any other grant (e.g., VA Merit Award, DOD grant, NIH R21, P30). Again, the real problem is how to measure QUALITY TRAINING for RECENT graduates from the last 5-10 years...

[Attached image: Goldstein model chart 1.png]
[Attached image: Goldstein model table 1.png]
 
Goldstein et al. attempted to include some training outcomes in their calculated metric. However, please examine Chart 1 and Table 1 of their publication (see below). Their model seems to be constructed primarily from current faculty and from trainee outcomes that typically occur more than 10 years after training (who becomes an HHMI investigator 4 years after completing residency?). As a neurologist, I take issue with the omission of Fellow of the American Academy of Neurology (FAAN) and Fellow of the American Neurological Association (FANA); I received both about 10-15 years after completing residency training. I also take issue with the huge dichotomy between the value of an R01 and any other grant (e.g., VA Merit Award, DOD grant, NIH R21, P30). Again, the real problem is how to measure QUALITY TRAINING for RECENT graduates from the last 5-10 years...
If this is a comprehensive table of the full criteria they used for the "awards/honors" category, I agree with you and am surprised by the omission of FAAN/FANA.

From my understanding, they cut off the analysis at graduates after 2009 (6 years prior to the paper). It's clear that during the first few years after graduation, most graduates are pursuing residency (or a post-doc), and objectively evaluating their autonomous research contributions at this stage does not seem very fruitful. The weight placed on R01 funding may be too high (both in this paper and in our field's perception in general), and I think these weights could be changed with discussion if this were translated to a yearly analysis. I would predict that the weights would change over time along with the perceived importance of each grant.
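To make the weight-sensitivity point concrete, here is a toy calculation (hypothetical school names, grant counts, and weights; none of these values come from the paper) showing that the R01-versus-other-grant weight alone can flip the relative ranking of two schools:

```python
# Toy weight-sensitivity check (hypothetical counts, not from the paper).
# school -> (graduates' R01s, graduates' other grants, e.g. R21/VA Merit/DOD)
schools = {"School X": (4, 2), "School Y": (1, 30)}

def rank(r01_weight, other_weight=1.0):
    """Rank schools by a weighted sum of their graduates' grant counts."""
    scores = {s: r01 * r01_weight + other * other_weight
              for s, (r01, other) in schools.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(r01_weight=10.0))  # heavy R01 weight  -> ['School X', 'School Y']
print(rank(r01_weight=2.0))   # modest R01 weight -> ['School Y', 'School X']
```

A 10:1 weight puts School X on top; at 2:1 the order flips, which is why the choice of weights would need exactly the kind of ongoing discussion suggested above.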

Regardless, I think what you are suggesting regarding immediate (i.e., 5-year post-graduate) results as a measure of training quality is important. Unfortunately, there doesn't seem to be a satisfactory way to measure this (the "strength" of residency placements is subjective on all levels, and evaluating the manuscripts or grants produced by current medical students, while potentially predictive of future outcomes, does not measure it directly). Listening to the individual experiences students have at their institution (and, for MD/PhD students, with their mentors) is helpful, though anecdotal. It is also affected in large part by personality matchmaking (including how much insight both mentor and mentee have into this in advance of the PhD), and it is difficult to evaluate from a few conversations or exit surveys.

I don't believe we need any ranking system at all. But if one is to exist, then at the very least I think this methodology provides a skeleton for a better approach.
 
I re-read the article and their model, and edited my comments above to make sure that they reflect the article. I still think that we have more areas of agreement than disagreement... It is fundamentally difficult to assess the "potential" of young graduates. Unfortunately, there are MD/PhD programs at which I have personally performed site visits that still live on their faculty reputation and USN&WR rankings. In several site visits, there was a great disconnect between past performance and current status. Furthermore, if you admit only "top-notch" talent and you graduate "top-notch" graduates... what is the "training," or value added, to the outcome?

One of the more profound points in their discussion was:
"For example, graduates from the Albert Einstein College of Medicine of Yeshiva University excelled at obtaining awards and NIH grants, which resulted in a rank of 13 in our analysis as compared with a rank of 34 in USN&WR. The University of California, San Francisco, School of Medicine (UCSF) was ranked fourth by USN&WR, in part because the faculty, not the graduates, excelled in securing NIH grants. Our evaluation of UCSF graduates, however, placed the school at 17 because its graduates achieved fewer and lower-impact publications and grants. This finding highlights the important point that the measurement of faculty grants may not reflect the quality of education provided by a given school."
 
Does the quality of training in MD/PhD programs mirror that of the training in the life sciences PhD programs at the same institution?

For example, does the training offered by Stanford's MD/PhD program mirror that offered in Stanford's Biosciences PhD programs?

Is there data on the quality of different universities' life sciences PhD programs?
 
Hi everyone! I was recently accepted into a T5 and a T20 MSTP program and wanted to know whether there would be a difference between the programs for future competitive residencies. The programs in question are UChicago Pritzker and UPenn MSTP. I really love Pritzker so far for a multitude of reasons, but I can definitely feel a difference in prestige from people, since Pritzker seems not to be a T10 program. I know I can succeed in either environment, but I don't want to turn down a bump for residency, especially since med school prestige is becoming more relevant. Will this impact residency placements? I am interested in ortho or ENT and in becoming a productive academic physician.
If you posted a WAMC thread, could you please post a link to it?
 