Difference between 'top' medical schools and 'lower' tiered schools

I don't think it's so strange an idea that schools can train medical students at varying levels depending on department strength. Yale is the #4 school in the nation, but I guarantee you that it's not #4 in all departments. Yale's molecular biology department is better than its chemistry department, so it stands to reason that the biologists it trains are better than the chemists it trains. Yes, these rankings are graduate-level, but the effect trickles down because the faculty and graduate students reflect that strength. Similarly, hospitals have specific strengths: the Cleveland Clinic and heart care, for instance.
So you really think someone who does med school at CCLCM will be much better prepared to start the post-MD track towards cardiology than towards, say, neurology? My only experience is at the undergrad level, but this seems pretty far-fetched, because the quality of instruction had absolutely nothing to do with the prof being a big name in their field. A student and a university department are looking for very different strengths.

I suppose this isn't something a couple of premeds can answer.
 
So you really think someone who does med school at CCLCM will be much better prepared to start the post-MD track towards cardiology than towards, say, neurology? My only experience is at the undergrad level, but this seems pretty far-fetched, because the quality of instruction had absolutely nothing to do with the prof being a big name in their field. A student and a university department are looking for very different strengths.

Quality of instruction depends on the strength of the department. Chemistry instruction at MIT is much better than chemistry instruction at a weaker school, correlating with the strength of the department.
 
Quality of instruction depends on the strength of the department. Chemistry instruction at MIT is much better than chemistry instruction at a weaker school, correlating with the strength of the department.

I don't think that's totally true. You can have an amazing department with mediocre or even bad didactic quality.
 
Quality of instruction depends on the strength of the department. Chemistry instruction at MIT is much better than chemistry instruction at a weaker school, correlating with the strength of the department.
Did you really experience this? Because I had a huge mix of quality in a bunch of very different departments. I couldn't even hazard a guess based on undergrad class quality as to which WashU departments are stronger/weaker at the graduate level.

This whole conversation just keeps getting more bizarre for me lol
 
I don't go to Harvard medical school, but since I'm here today, thought I'd send a picture.

[image attachment]
 
I don't think that's totally true. You can have an amazing department with mediocre or even bad didactic quality.

Did you really experience this? Because I had a huge mix of quality in a bunch of very different departments. I couldn't even hazard a guess based on undergrad class quality as to which WashU departments are stronger/weaker at the graduate level.

When we make hiring decisions, we take into account a faculty candidate's teaching abilities. Some universities require candidates to give a chalk talk on a basic concept in addition to a research talk and proposal. Other departments have various other standards for assessing teaching. Some universities may ignore teaching, and that would lead to a disconnect between research ability and teaching ability, but that is not my experience in the places I've been.
 
When we make hiring decisions, we take into account a faculty candidate's teaching abilities. Some universities require candidates to give a chalk talk on a basic concept in addition to a research talk and proposal. Other departments have various other standards for assessing teaching. Some universities may ignore teaching, and that would lead to a disconnect between research ability and teaching ability, but that is not my experience in the places I've been.
I'm curious what you think about some of the best LACs out there, that are famous for quality educational experience and not at all for being research powerhouses. Do they graduate great chemists? Terrible? The two cancel out? Or are they not actually tied to one another?
 
I'm curious what you think about some of the best LACs out there, that are famous for quality educational experience and not at all for being research powerhouses. Do they graduate great chemists? Terrible? The two cancel out? Or are they not actually tied to one another?

They also attract the people who love to teach, so they are an anomaly when you're comparing them to the research universities. All the same, you generally see fewer science graduate students at the top programs from those schools as opposed to from MIT, Caltech, Berkeley, Harvard, etc. If they don't have access to research labs, that could be a huge issue for graduate school. REUs and summer experiences help offset that, but to a PI, nothing beats a quality, longitudinal lab experience with ownership of a project.
 
They also attract the people who love to teach, so they are an anomaly when you're comparing them to the research universities.
They may not be the normal pathway to grad programs, sure, but they do make my point that a strong research division is not synonymous with or necessary for great instruction/education in a subject. I still think a med school full of brilliant students and brilliant teachers doesn't need the best program in specialty X to match and impress PDs in specialty X.
 
They may not be the normal pathway to grad programs, sure, but they do make my point that a strong research division is not synonymous with or necessary for great instruction/education in a subject. I still think a med school full of brilliant students and brilliant teachers doesn't need the best program in specialty X to match and impress PDs in specialty X.

So you believe that all med schools train their students equally well for whichever specialty the student wants to pursue, without regard to the strength of its clinical departments? So someone doing a core rotation or sub-I at Boston Children's will have the same skills entering a peds residency as someone who did their rotations elsewhere? Isn't that the point of sub-I's? To train at a specific place that is strong for the specific residency you are aiming for?
 
So you believe that all med schools train their students equally well for whichever specialty the student wants to pursue, without regard to the strength of its clinical departments? So someone doing a core rotation or sub-I at Boston Children's will have the same skills entering a peds residency as someone who did their rotations elsewhere? Isn't that the point of sub-I's? To train at a specific place that is strong for the specific residency you are aiming for?
My understanding is that away rotations are to personally impress people at the place you're aiming for, not because the PDs of elite programs are so impressed by the skills interns were able to pick up in a rotation at a good hospital.
 
So you believe that all med schools train their students equally well for whichever specialty the student wants to pursue, without regard to the strength of its clinical departments? So someone doing a core rotation or sub-I at Boston Children's will have the same skills entering a peds residency as someone who did their rotations elsewhere? Isn't that the point of sub-I's? To train at a specific place that is strong for the specific residency you are aiming for?

No. The point of a SubI is to see if you fit with that particular program and hopefully do well enough that you are extended an invitation to interview. You should already be good at whatever you're doing before you attempt a SubI. If you don't get an Honors in a SubI, it's not a good look.

One month of acting as a SubI at Boston Children's isn't going to have any significant impact on your skills entering residency.
 
Damn they should call every SDN argument a spooky farm field with all these straw men flying around.

I think OP's question was answered long ago; there's really no point in rehashing the same point over and over, and no point discussing what measure is best for determining the quality of schools. Top tier schools will remain top tier schools with little variation despite ranking systems. They give an edge if you want to pursue academic medicine and competitive specialties. I think we've beaten this poor horse long enough; it's long past dead.
 
Damn they should call every SDN argument a spooky farm field with all these straw men flying around.

I think OP's question was answered long ago; there's really no point in rehashing the same point over and over, and no point discussing what measure is best for determining the quality of schools. Top tier schools will remain top tier schools with little variation despite ranking systems. They give an edge if you want to pursue academic medicine and competitive specialties. I think we've beaten this poor horse long enough; it's long past dead.
Don't tell me you believe internet threads should all be abandoned as soon as the OP's question is answered???
 
Okay let's see if I can add any sort of insight to this discussion.

If we look at the top schools by USNWR (which, although it is a flawed ranking system, still correlates well with a) prestige and b) what people generally consider when they think of top medical schools), we see that most top medical schools are affiliated with USNWR honor roll hospitals (i.e. top hospitals, same premise of a flawed but "good enough" ranking system).

1. Harvard - MGH (3), BWH (13)
2. Stanford - Stanford Hospital (14)
3. Hopkins - JHH (4)
4. UCSF - UCSF Medical Center (7)
5. Penn - HUP (9)
6. WashU - BJH (11)
7. Columbia - NYP (6)
8. Duke - Duke UH (16)
9. Yale - NONE
10. University of Washington - NONE
11. NYU - NYU Langone (10)
12. Michigan - UMich Hospital (18)
13. UChicago - NONE
14. UCLA - UCLA (5), Cedars-Sinai (17)
15. Vanderbilt - NONE
16. Pitt - UPMC (12)
17. Northwestern - NMH (8)
18. Cornell - NYP (6)
19. UCSD - NONE
20. Baylor - Houston Methodist (19)
21. Mt. Sinai - Mt. Sinai Hospital (15)

So of the top 20(ish), only 5 are not affiliated with a top hospital. Two of those are state schools (UWash, UCSD) with regional preferences, so we'll exclude those just to make things less complicated for now. Yale, Vanderbilt, and UChicago all have good-not-great hospitals. In terms of residency rankings, Doximity gives Yale top 10 spots in derm and psych, UChicago a top 10 spot in PM&R, and Vanderbilt top 10 spots in ENT, general surgery, urology, and med/peds combined residency.

So then as a test case, let's take UChicago, a top 10 med school without a top hospital and with only a single residency program making it into the top 10 for any specialty (by Doximity, which isn't a great way to go about it, but I'm not familiar with who's who in every single specialty, so it was the easiest way to get a general list).

Theoretically, according to your argument, students at UChicago will get inferior training in certain specialties compared to schools that have top departments in them. However, if you look at their 2016 match list (https://pritzker.uchicago.edu/sites/pritzker.uchicago.edu/files/uploads/2016 Match Results Website.pdf), they match very well in essentially every specialty.

Let's look at Yale, a top 10 med school without a top hospital and only two top residency programs. Their 2016 match list is stellar (https://forums.studentdoctor.net/threads/match-list-2016.1189265/#post-17544866). If you look at IM, literally only 3 people did not match to a top of the top IM residency. Looking at other specialties yields similar results.

So from this, we can conclude (or informedly speculate) that the clinical training at top medical schools doesn't depend too much on how strong your home department is to the extent that strong residencies will take students from schools without strong programs in that specialty. We can also conclude (or speculate) that the strength of the medical school doesn't necessarily depend on the strength of the affiliated hospital, though there certainly exists a general correlation.

Now, ignore everything I said above because it really doesn't matter. The following is what matters.

When you're applying into a specialty, you have really little exposure to that specialty in medical school for the most part. Someone at Harvard applying into pediatrics will have 6-8 weeks during their clinical year rotating at Boston Children's + a few months their 4th year doing elective rotations in pediatrics + a few months of SubIs + probably a bit of research. That's probably less than a year total of experience in peds, and in that time you are just creating a foundation for what you're going to learn during residency. You learn how to do H&Ps, how to write notes, how to present, how to communicate effectively with other specialties or hospitals, how to create a ddx, assessment, and plan, and maybe how to do a few procedures as well. But you do that at every school - someone going into intern year should be at least marginally competent at all or most of those tasks. Two incoming residents at Boston Children's, one from Harvard and one from East Dakota State College of Caribbean Medicine, will likely exit the program equally prepared to practice pediatrics. The fact that they both interviewed and matched at BCH means that they were both qualified enough to start as interns there regardless of how strong their home programs were.

NOW - that is not to say that having a strong home program doesn't have advantages. It gives you access to strong research opportunities, it gives you access to strong mentorship from big names in the field, it gives you access to well known and well respected letter writers, it gives you the opportunity to have a strong home program (statistically you are most likely to match to your home institution in most specialties), and it helps you make connections early.

But it doesn't mean you had better training than someone at a school with a "worse" program. To put it in perspective one last time, give me an intern at 6 months at the worst pediatrics residency program in the US and I will essentially guarantee you that they are more capable than a graduating 4th year med student at Penn who rotated at CHOP.
 
Okay let's see if I can add any sort of insight to this discussion.

If we look at the top schools by USNWR (which, although it is a flawed ranking system, still correlates well with a) prestige and b) what people generally consider when they think of top medical schools), we see that most top medical schools are affiliated with USNWR honor roll hospitals (i.e. top hospitals, same premise of a flawed but "good enough" ranking system).

1. Harvard - MGH (3), BWH (13)
2. Stanford - Stanford Hospital (14)
3. Hopkins - JHH (4)
4. UCSF - UCSF Medical Center (7)
5. Penn - HUP (9)
6. WashU - BJH (11)
7. Columbia - NYP (6)
8. Duke - Duke UH (16)
9. Yale - NONE
10. University of Washington - NONE
11. NYU - NYU Langone (10)
12. Michigan - UMich Hospital (18)
13. UChicago - NONE
14. UCLA - UCLA (5), Cedars-Sinai (17)
15. Vanderbilt - NONE
16. Pitt - UPMC (12)
17. Northwestern - NMH (8)
18. Cornell - NYP (6)
19. UCSD - NONE
20. Baylor - Houston Methodist (19)
21. Mt. Sinai - Mt. Sinai Hospital (15)

So of the top 20(ish), only 5 are not affiliated with a top hospital. Two of those are state schools (UWash, UCSD) with regional preferences, so we'll exclude those just to make things less complicated for now. Yale, Vanderbilt, and UChicago all have good-not-great hospitals. In terms of residency rankings, Doximity gives Yale top 10 spots in derm and psych, UChicago a top 10 spot in PM&R, and Vanderbilt top 10 spots in ENT, general surgery, urology, and med/peds combined residency.

So then as a test case, let's take UChicago, a top 10 med school without a top hospital and with only a single residency program making it into the top 10 for any specialty (by Doximity, which isn't a great way to go about it, but I'm not familiar with who's who in every single specialty, so it was the easiest way to get a general list).

Theoretically, according to your argument, students at UChicago will get inferior training in certain specialties compared to schools that have top departments in them. However, if you look at their 2016 match list (https://pritzker.uchicago.edu/sites/pritzker.uchicago.edu/files/uploads/2016 Match Results Website.pdf), they match very well in essentially every specialty.

Let's look at Yale, a top 10 med school without a top hospital and only two top residency programs. Their 2016 match list is stellar (https://forums.studentdoctor.net/threads/match-list-2016.1189265/#post-17544866). If you look at IM, literally only 3 people did not match to a top of the top IM residency. Looking at other specialties yields similar results.

So from this, we can conclude (or informedly speculate) that the clinical training at top medical schools doesn't depend too much on how strong your home department is to the extent that strong residencies will take students from schools without strong programs in that specialty. We can also conclude (or speculate) that the strength of the medical school doesn't necessarily depend on the strength of the affiliated hospital, though there certainly exists a general correlation.

Now, ignore everything I said above because it really doesn't matter. The following is what matters.

When you're applying into a specialty, you have really little exposure to that specialty in medical school for the most part. Someone at Harvard applying into pediatrics will have 6-8 weeks during their clinical year rotating at Boston Children's + a few months their 4th year doing elective rotations in pediatrics + a few months of SubIs + probably a bit of research. That's probably less than a year total of experience in peds, and in that time you are just creating a foundation for what you're going to learn during residency. You learn how to do H&Ps, how to write notes, how to present, how to communicate effectively with other specialties or hospitals, how to create a ddx, assessment, and plan, and maybe how to do a few procedures as well. But you do that at every school - someone going into intern year should be at least marginally competent at all or most of those tasks. Two incoming residents at Boston Children's, one from Harvard and one from East Dakota State College of Caribbean Medicine, will likely exit the program equally prepared to practice pediatrics. The fact that they both interviewed and matched at BCH means that they were both qualified enough to start as interns there regardless of how strong their home programs were.

NOW - that is not to say that having a strong home program doesn't have advantages. It gives you access to strong research opportunities, it gives you access to strong mentorship from big names in the field, it gives you access to well known and well respected letter writers, it gives you the opportunity to have a strong home program (statistically you are most likely to match to your home institution in most specialties), and it helps you make connections early.

But it doesn't mean you had better training than someone at a school with a "worse" program. To put it in perspective one last time, give me an intern at 6 months at the worst pediatrics residency program in the US and I will essentially guarantee you that they are more capable than a graduating 4th year med student at Penn who rotated at CHOP.
I appreciate the legwork. I hadn't considered that for residencies that want to see strong research, it likely does matter a lot that your med school have at least a decent associated division where you can find some good productive projects.

Btw, how does one learn to read match lists? Is doximity legit? All I ever see on SDN is "don't even try to read match lists you'll get it dead wrong"
 
I appreciate the legwork. I hadn't considered that for residencies that want to see strong research, it likely does matter a lot that your med school have at least a decent associated division where you can find some good productive projects.

Btw, how does one learn to read match lists? Is doximity legit? All I ever see on SDN is "don't even try to read match lists you'll get it dead wrong"

I don't really know. I would say there is only a single specialty, maybe 2, for which I can interpret a match list with any degree of reliability (and that degree isn't great). You'll learn what programs have what traits when you get more involved in a specialty of interest.
 
I appreciate the legwork. I hadn't considered that for residencies that want to see strong research, it likely does matter a lot that your med school have at least a decent associated division where you can find some good productive projects.

Btw, how does one learn to read match lists? Is doximity legit? All I ever see on SDN is "don't even try to read match lists you'll get it dead wrong"

Doximity is OK for broad-stroke tiers, although they tend to sneak some lower-tier programs into their top ranks (Cleveland Clinic IM is the biggest offender in my mind). The only way to get really good at reading a match list is to go through the match or be involved with that field's matching process in some way (or have an expert opinion to consult).
 
Okay let's see if I can add any sort of insight to this discussion.

If we look at the top schools by USNWR (which, although it is a flawed ranking system, still correlates well with a) prestige and b) what people generally consider when they think of top medical schools), we see that most top medical schools are affiliated with USNWR honor roll hospitals (i.e. top hospitals, same premise of a flawed but "good enough" ranking system).

1. Harvard - MGH (3), BWH (13)
2. Stanford - Stanford Hospital (14)
3. Hopkins - JHH (4)
4. UCSF - UCSF Medical Center (7)
5. Penn - HUP (9)
6. WashU - BJH (11)
7. Columbia - NYP (6)
8. Duke - Duke UH (16)
9. Yale - NONE
10. University of Washington - NONE
11. NYU - NYU Langone (10)
12. Michigan - UMich Hospital (18)
13. UChicago - NONE
14. UCLA - UCLA (5), Cedars-Sinai (17)
15. Vanderbilt - NONE
16. Pitt - UPMC (12)
17. Northwestern - NMH (8)
18. Cornell - NYP (6)
19. UCSD - NONE
20. Baylor - Houston Methodist (19)
21. Mt. Sinai - Mt. Sinai Hospital (15)

So of the top 20(ish), only 5 are not affiliated with a top hospital. Two of those are state schools (UWash, UCSD) with regional preferences, so we'll exclude those just to make things less complicated for now. Yale, Vanderbilt, and UChicago all have good-not-great hospitals. In terms of residency rankings, Doximity gives Yale top 10 spots in derm and psych, UChicago a top 10 spot in PM&R, and Vanderbilt top 10 spots in ENT, general surgery, urology, and med/peds combined residency.

So then as a test case, let's take UChicago, a top 10 med school without a top hospital and with only a single residency program making it into the top 10 for any specialty (by Doximity, which isn't a great way to go about it, but I'm not familiar with who's who in every single specialty, so it was the easiest way to get a general list).

Theoretically, according to your argument, students at UChicago will get inferior training in certain specialties compared to schools that have top departments in them. However, if you look at their 2016 match list (https://pritzker.uchicago.edu/sites/pritzker.uchicago.edu/files/uploads/2016 Match Results Website.pdf), they match very well in essentially every specialty.

Let's look at Yale, a top 10 med school without a top hospital and only two top residency programs. Their 2016 match list is stellar (https://forums.studentdoctor.net/threads/match-list-2016.1189265/#post-17544866). If you look at IM, literally only 3 people did not match to a top of the top IM residency. Looking at other specialties yields similar results.

So from this, we can conclude (or informedly speculate) that the clinical training at top medical schools doesn't depend too much on how strong your home department is to the extent that strong residencies will take students from schools without strong programs in that specialty. We can also conclude (or speculate) that the strength of the medical school doesn't necessarily depend on the strength of the affiliated hospital, though there certainly exists a general correlation.

Now, ignore everything I said above because it really doesn't matter. The following is what matters.

When you're applying into a specialty, you have really little exposure to that specialty in medical school for the most part. Someone at Harvard applying into pediatrics will have 6-8 weeks during their clinical year rotating at Boston Children's + a few months their 4th year doing elective rotations in pediatrics + a few months of SubIs + probably a bit of research. That's probably less than a year total of experience in peds, and in that time you are just creating a foundation for what you're going to learn during residency. You learn how to do H&Ps, how to write notes, how to present, how to communicate effectively with other specialties or hospitals, how to create a ddx, assessment, and plan, and maybe how to do a few procedures as well. But you do that at every school - someone going into intern year should be at least marginally competent at all or most of those tasks. Two incoming residents at Boston Children's, one from Harvard and one from East Dakota State College of Caribbean Medicine, will likely exit the program equally prepared to practice pediatrics. The fact that they both interviewed and matched at BCH means that they were both qualified enough to start as interns there regardless of how strong their home programs were.

NOW - that is not to say that having a strong home program doesn't have advantages. It gives you access to strong research opportunities, it gives you access to strong mentorship from big names in the field, it gives you access to well known and well respected letter writers, it gives you the opportunity to have a strong home program (statistically you are most likely to match to your home institution in most specialties), and it helps you make connections early.

But it doesn't mean you had better training than someone at a school with a "worse" program. To put it in perspective one last time, give me an intern at 6 months at the worst pediatrics residency program in the US and I will essentially guarantee you that they are more capable than a graduating 4th year med student at Penn who rotated at CHOP.

Not sure about that last part... there are some very unmotivated and not-so-great interns out there. One of my smarter med student friends basically had to be the ICU intern since the one rotating there did not know what ARDS was.
 
So you believe that all med schools train their students equally well for whichever specialty the student wants to pursue, without regard to the strength of its clinical departments? So someone doing a core rotation or sub-I at Boston Children's will have the same skills entering a peds residency as someone who did their rotations elsewhere? Isn't that the point of sub-I's? To train at a specific place that is strong for the specific residency you are aiming for?

I don't think you'll be able to tell based on the program they were coming from. It's a very limited amount of time in the context of training, and a lot of the med student's knowledge base at that point will depend on their own motivation. The amount a med student actually gets to do in said program is also very variable.
 
But it doesn't mean you had better training than someone at a school with a "worse" program. To put it in perspective one last time, give me an intern at 6 months at the worst pediatrics residency program in the US and I will essentially guarantee you that they are more capable than a graduating 4th year med student at Penn who rotated at CHOP.
I would have to disagree with this statement. There is huge heterogeneity in the quality of peds programs across the US. A strong peds-bound med student could certainly outshine some interns, myself included.
 
Okay let's see if I can add any sort of insight to this discussion.

If we look at the top schools by USNWR (which, although it is a flawed ranking system, still correlates well with a) prestige and b) what people generally consider when they think of top medical schools), we see that most top medical schools are affiliated with USNWR honor roll hospitals (i.e. top hospitals, same premise of a flawed but "good enough" ranking system).

1. Harvard - MGH (3), BWH (13)
2. Stanford - Stanford Hospital (14)
3. Hopkins - JHH (4)
4. UCSF - UCSF Medical Center (7)
5. Penn - HUP (9)
6. WashU - BJH (11)
7. Columbia - NYP (6)
8. Duke - Duke UH (16)
9. Yale - NONE
10. University of Washington - NONE
11. NYU - NYU Langone (10)
12. Michigan - UMich Hospital (18)
13. UChicago - NONE
14. UCLA - UCLA (5), Cedars-Sinai (17)
15. Vanderbilt - NONE
16. Pitt - UPMC (12)
17. Northwestern - NMH (8)
18. Cornell - NYP (6)
19. UCSD - NONE
20. Baylor - Houston Methodist (19)
21. Mt. Sinai - Mt. Sinai Hospital (15)

So of the top 20(ish), only 5 are not affiliated with a top hospital. Two of those are state schools (UWash, UCSD) with regional preferences, so we'll exclude those just to make things less complicated for now. Yale, Vanderbilt, and UChicago all have good-not-great hospitals. In terms of residency rankings, Doximity gives Yale top 10 spots in derm and psych, UChicago a top 10 spot in PM&R, and Vanderbilt top 10 spots in ENT, general surgery, urology, and med/peds combined residency.

So then as a test case, let's take UChicago, a top 10 med school without a top hospital and with only a single residency program making it into the top 10 for any specialty (by Doximity, which isn't a great way to go about it, but I'm not familiar with who's who in every single specialty, so it was the easiest way to get a general list).

Theoretically, according to your argument, students at UChicago will get inferior training in certain specialties compared to schools that have top departments in them. However, if you look at their 2016 match list (https://pritzker.uchicago.edu/sites/pritzker.uchicago.edu/files/uploads/2016 Match Results Website.pdf), they match very well in essentially every specialty.

Let's look at Yale, a top 10 med school without a top hospital and only two top residency programs. Their 2016 match list is stellar (https://forums.studentdoctor.net/threads/match-list-2016.1189265/#post-17544866). If you look at IM, literally only 3 people did not match to a top of the top IM residency. Looking at other specialties yields similar results.

So from this, we can conclude (or informedly speculate) that the clinical training at top medical schools doesn't depend too much on how strong your home department is to the extent that strong residencies will take students from schools without strong programs in that specialty. We can also conclude (or speculate) that the strength of the medical school doesn't necessarily depend on the strength of the affiliated hospital, though there certainly exists a general correlation.

Now, ignore everything I said above because it really doesn't matter. The following is what matters.

When you're applying into a specialty, you have really little exposure to that specialty in medical school for the most part. Someone at Harvard applying into pediatrics will have 6-8 weeks during their clinical year rotating at Boston Children's + a few months their 4th year doing elective rotations in pediatrics + a few months of SubIs + probably a bit of research. That's probably less than a year total of experience in peds, and in that time you are just creating a foundation for what you're going to learn during residency. You learn how to do H&Ps, how to write notes, how to present, how to communicate effectively with other specialties or hospitals, how to create a ddx, assessment, and plan, and maybe how to do a few procedures as well. But you do that at every school - someone going into intern year should be at least marginally competent at all or most of those tasks. Two incoming residents at Boston Children's, one from Harvard and one from East Dakota State College of Caribbean Medicine, will likely exit the program equally prepared to practice pediatrics. The fact that they both interviewed and matched at BCH means that they were both qualified enough to start as interns there regardless of how strong their home programs were.

NOW - that is not to say that having a strong home program doesn't have advantages. It gives you access to strong research opportunities, it gives you access to strong mentorship from big names in the field, it gives you access to well known and well respected letter writers, it gives you the opportunity to have a strong home program (statistically you are most likely to match to your home institution in most specialties), and it helps you make connections early.

But it doesn't mean you had better training than someone at a school with a "worse" program. To put it in perspective one last time, give me an intern at 6 months at the worst pediatrics residency program in the US and I will essentially guarantee you that they are more capable than a graduating 4th year med student at Penn who rotated at CHOP.

On Rankings

To add some of my own nuance to this discussion, there are so many underutilized ways to slice this pie. The obsession with USNWR indicates a near-complete dismissal of other third-party hospital safety/quality rankings. Examples include Truven Health Analytics (http://100tophospitals.com/Portals/2/assets/TOP_17235_1016_100 Top Hospitals Study_WEB.pdf) and Leapfrog's Hospital Safety Grade (http://www.hospitalsafetygrade.org/), among others. Most of these rankings provide quantifiable, data-driven, and transparent methodologies. Many are based primarily on measures of clinical standard of care and utilization of appropriate measures/procedures for patient care. None of these metrics are perfect, and none are completely representative of an institution's capacity for clinical care. Yet they serve as data points for building a nuanced picture of actual health quality at many hospital systems across the nation. Many of these rankings have come under intense scrutiny from the medical community, for both good and bad reasons. In many instances, such ranking systems fail to match a particular institution's own methodology for appropriate care. In other circumstances, many hospitals received a wake-up call about deficiencies in their current systems (search Cleveland Clinic and Leapfrog for such a story). Regardless, a critical mind must be used to analyze and determine which of these ranking systems are useful. As an example, I am not a fan of Leapfrog's methodology, as it heavily relies on hospital self-reporting rather than a third-party safety management system.

Selecting Medical Schools

As far as developing an optimum strategy for medical school choice, here are my thoughts. I think there are 2 clear approaches that most applicants will fall into:

1. Going in with a strong desire for a particular field (that will most likely change during medical school)

Say, as a naive premed, I have my heart set on orthopedics. Knowing this, I can actually structure my medical school list/choices based on the following metrics (not in any particular order):

1. Research productivity in orthopedics
2. # of students matching into orthopedics, averaged over number of years of match list data available
3. Clinical ranking of school, as determined by one or multiple hospital safety rankings (see above)
4. USNWR ranking (or just PD/Peer score) to serve as "prestige" factor
5. Other key requirements of medical school choice (location, climate, proximity to family, cost of living, curriculum, etc.)

Now you'll ask, how do I measure and compare productivity in my given field of interest? Thankfully, in the US, large amounts of research funding are funneled and directed by the NIH. As such, NIH awards to medical schools are reported by specialty (see: http://www.brimr.org/NIH_Awards/2016/NIH_Awards_2016.htm). Total research funding can serve as a loose metric for research productivity in that particular specialty. Some research-oriented specialties actually have published academic articles analyzing research productivity across hospitals/programs; these serve as the gold standard for measuring research output at a particular school.

Each of the above factors can be weighted differently to form a weighted average for each school, based on how "sure" one is about the particular specialty. If all I want to do in life is ortho, then I can weight factors 1 and 2 at 30% each. More conservatively, each factor can have an even 20%, or even a strong bias toward factor 5. Finally, as cost remains a major factor, dividing the weighted average by the average student indebtedness (see MSAR) or cost of attendance will give you a value score for each school. I personally developed my school list using a similar methodology.
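
To make the arithmetic concrete, here is a minimal sketch (in Python) of the weighted-average "value score" described above. The school names, factor scores, weights, and debt figures are hypothetical placeholders rather than real data; the point is only to show how the per-factor weighting and the division by cost fit together.

# Minimal sketch of the weighted-average "value score" described above.
# All school names, factor scores, weights, and debt figures are hypothetical
# placeholders -- substitute your own data (research productivity, match
# history, clinical quality, prestige, personal fit, cost) before comparing.
schools = {
    # factor scores, each pre-scaled to 0-100:
    # [research, match history, clinical quality, prestige, personal fit]
    "School A": {"scores": [80, 70, 60, 90, 50], "debt": 200_000},
    "School B": {"scores": [55, 65, 85, 70, 90], "debt": 150_000},
}

# Weights must sum to 1; weight factors 1 and 2 more heavily if you are
# "sure" about the specialty, or use an even 20% across the board.
weights = [0.30, 0.30, 0.10, 0.10, 0.20]

for name, data in schools.items():
    weighted_avg = sum(w * s for w, s in zip(weights, data["scores"]))
    # Divide by cost (average debt in units of $100k) for a rough value score.
    value = weighted_avg / (data["debt"] / 100_000)
    print(f"{name}: weighted average = {weighted_avg:.1f}, value score = {value:.1f}")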

This ranking system serves as an optimization for a particular specialty of interest. The key components of a successful application still remain on you. You will still have to work hard, learn a lot, get a competitive step 1 score, garner good clinical grades, have strong sub-I experiences, do related research and have strong interviews in order to earn a spot in said residency.

2. Blank slate in terms of specialty choice (most matriculants, even if they think otherwise)

One piece of advice I received from a former medical student, now a resident, was this: going into M1, if there is a chance that you may want to pursue a "competitive" specialty, prepare yourself, as a student, for that specialty from day 1. Now this is another nuanced point. He was not saying to lock down on ortho as soon as you set foot into medical school. Rather, he was imparting the common sense that having the stats (Step 1) and requirements handled for a more challenging residency automatically carries over, in terms of prep, to a less competitive residency. Yes, you will have to completely change your research focus, letters of recommendation, etc. if you decide not to pursue the competitive specialty. The fundamentals of a strong residency application (strong Step 1 == strong clinical grades > preclinical grades/rank/AOA), however, will still carry over.

Here we come to the adage "know yourself". There is no clear-cut answer to how/when to decide on a particular specialty. Some of us may clearly know we would like to pursue a primary care field. In that case, doing a bit of research into, say, medical schools' affiliated public hospitals (LA County, Grady, etc.) may be a worthwhile exercise for someone leaning toward emergency medicine. Others (as most are encouraged to be) will want to truly explore all aspects of medicine before honing in on a particular field during 3rd/4th year of medical school.

This will be a point of trade-offs. As one chooses to remain truly open and unbiased toward a specialty, one incurs the opportunity cost of early preparation for a residency application. The time spent "uncommitted" will reduce time/opportunity for research and publication in that field, shadowing and building of strong connections within one's medical school, and a general sense of purpose that may translate into stronger academic/board performance. This may go as far back as making a "sub-optimal" school choice in terms of research productivity/match history in that specialty (an arguable point). Statistically, this trade-off will be a non-factor for most medical students, as most students, by definition, will not pursue/match into a competitive residency. As such, this is a trade-off most will be willing to make. On the other hand, unnecessarily committing to a field that you later end up hating, just because you felt you needed to commit early, is completely irrational.

Yet for those who decide during late 3rd year to pursue neurosurgery, time will be limited and the door may have closed if Step 1 performance was not up to par. As you can see, the trade-off may be quite big for someone without the foresight that they may want to pursue a competitive specialty. We come back again to the point: know yourself.

Now, here is where the rubber meets the road for those truly undecided going into medical school. Of the 5 metrics I mentioned before that factor into school choice, the first two are irrelevant for most entering medical students. So beyond incorporating hospital quality data rather than relying entirely on USNWR, what can they do? I recommend some introspection.

Think back to your clinical experiences: shadowing, volunteering, scribing, EMT, etc. We all surely will have some, and some of us will have a wealth of diversity in such experiences. It is from here that we can begin to get an inkling of our interests. It will be difficult to shut out the voices of family and friends, mentors and others, who may be influencing our path. Their input and impact is crucial, but there is something to be said for "intuition". The greater the diversity of clinical experience, the better idea each of us may have about our interests. Granted, we are not experiencing the entirety of clinical practice, so our preferences are at best an incomplete picture. Yet they are still preferences. And if you sense there is a leaning in one direction or another, investigate it. If it happens to be a competitive residency, then you can optimize based on approach 1 above. If it is rural medicine, structure a school list around that. I am clearly imparting common sense.

Final Thoughts

At the end of the day, it does not matter what some non-trad premed thinks about school choice. What matters are the core principles: introspection, embracing complexity/nuance/grey areas, courage (to make decisions and see them through), thinking for yourself, and seeking wisdom from those farther along the path. All of the above are much easier said than done. I hope most of us will not allow a singular ranking system to rule us or relinquish our own critical judgement to it.
 
I would have to disagree with this statement. There is huge heterogeneity in the quality of peds programs across the US. A strong peds-bound med student could certainly outshine some interns, myself included.

Not sure about that last part... there are some very unmotivated and not-so-great interns out there. One of my smarter med student friends basically had to be the ICU intern since the one rotating there did not know what ARDS was.

Alright, fair point. I was being hyperbolic to help illustrate a point: you learn more about how to actually practice medicine during residency, whereas the foundational knowledge stems from (or at least begins to form during) medical school.
 
To add some of my own nuance to this discussion, there are so many underutilized ways to slice this pie. The obsession with USNWR indicates a near-complete dismissal of other third-party hospital safety/quality rankings. Examples include Truven Health Analytics (http://100tophospitals.com/Portals/2/assets/TOP_17235_1016_100 Top Hospitals Study_WEB.pdf) and Leapfrog's Hospital Safety Grade (http://www.hospitalsafetygrade.org/), among others. Most of these rankings provide quantifiable, data-driven, and transparent methodologies. Many are based primarily on measures of clinical standard of care and utilization of appropriate measures/procedures for patient care. None of these metrics are perfect, and none are completely representative of an institution's capacity for clinical care. Yet they serve as data points for building a nuanced picture of actual health quality at many hospital systems across the nation. Many of these rankings have come under intense scrutiny from the medical community, for both good and bad reasons. In many instances, such ranking systems fail to match a particular institution's own methodology for appropriate care. In other circumstances, many hospitals received a wake-up call about deficiencies in their current systems (search Cleveland Clinic and Leapfrog for such a story). Regardless, a critical mind must be used to analyze and determine which of these ranking systems are useful. As an example, I am not a fan of Leapfrog's methodology, as it heavily relies on hospital self-reporting rather than a third-party safety management system.
The problem with using those third-party safety/quality rankings is that they have almost no bearing on the actual training quality of the institutions themselves. Top-tier hospitals attract very unique and sick patients who tend to have worse outcomes. Just because people with poor prognoses seek out expert care at places like Memorial Sloan Kettering, and patients there tend to have worse outcomes, doesn't mean MSK isn't one of the best hospitals to train at for surgical oncology. I would argue that these safety metrics, while probably useful for an average patient, are practically meaningless when it comes to the quality of training.
 
@Shreman

Here's where each of your ranking things falls short:

1. Research productivity in orthopedics

Research productivity of the department doesn't necessarily correlate with how well you'll match. For example, for neurosurgery, using your link, Hopkins is ranked 19 and Miami is ranked 10, but I would easily go to Hopkins over Miami in a heartbeat if I were interested in NSGY. Also, just because Yale is ranked 3 and Columbia is ranked 7 doesn't mean that Yale will give you an edge over Columbia in terms of matching, or vice versa. A more prestigious school will almost always allow the same applicant (hypothetically identical applicants for the sake of argument) to match "more prestigiously" into a field than a less prestigious one (comparing Yale to Penn State, for example). Thus, research productivity by itself isn't a good metric.

2. # of students matching into orthopedics, averaged over number of years of match list data available

This doesn't mean jack. It fluctuates year to year based on interest, and you have no way of knowing what's "normal", especially in a relatively small field like ortho. As long as you have some people matching into that specialty once in a while, you will have a shot. Some schools do match into certain specialties particularly consistently or well, and a pre-med can, with some research, probably distinguish between a "strong" match list and a "poor" match list in a particular specialty. If a school consistently (over a period of several years) has a "poor" match list, it might be worth considering. But just looking at a raw number isn't going to tell you anything.

3. Clinical ranking of school, as determined by one or multiple hospital safety rankings (see above)

This doesn't mean jack either. How does this give you any more information than anything else on this list?

4. USNWR ranking (or just PD/Peer score) to serve as "prestige" factor

This might actually matter and can also be used as a proxy for all those other things.

5. Other key requirements of medical school choice (location, climate, proximity to family, cost of living, curriculum, etc.)

This is the stuff that actually matters. Decide where you go to school based mostly on this.
 
My goodness, people are passionate about this. Just go to the medical school that is best for you and do your best. At the MD level your medical school isn't going to keep you out of any specialty.
 
@WedgeDawg

Some defense of my points:

1. Research productivity in orthopedics

I agree, NIH rankings are a loose metric to judge research productivity. As I stated in my post, true research productivity can be found in peer-reviewed journal articles on the subject. For neurosurgery, see this 5-year review: http://thejns.org/doi/pdf/10.3171/2014.10.JNS141025

Research productivity is a deciding factor for this specialty, with basic science publications being prized over clinical papers. As such, factoring in the research productivity ensures you will not be scrambling to publish during 3rd/4th year at a medical school with poor research output.

2. # of students matching into orthopedics, averaged over number of years of match list data available

I agree most pre-med students put too much emphasis on this factor. Yet I maintain there are patterns and trends to be observed here. For example, for neurosurgery, Columbia has averaged 5 matches/year over the past 3 years. Similarly, Case Western has averaged 5.7 over the past 3 years. These are largely different from, say, Miami with an average of 2.3, or VCU at 1.3 (the national average).

3. Clinical ranking of school, as determined by one or multiple hospital safety rankings (see above)

My argument for this point is that hospital quality rankings differ based on the metrics used. Developing your own ranking of clinical quality based on outcomes measurements is worthwhile. The US government/CMS provides direct hospital comparisons as of July 2017 (https://www.medicare.gov/hospitalcompare/About/Hospital-overall-ratings.html). Alternatively, CareChex provides a comprehensive third-party rating system with very clear, objective criteria (http://www.carechex.com/ScoringRatingMethods.aspx).
 
@WedgeDawg

Some defense of my points:

1. Research productivity in orthopedics

I agree, NIH rankings are a loose metric to judge research productivity. As I stated in my post, true research productivity can be found in peer-reviewed journal articles on the subject. For neurosurgery, see this 5-year review: http://thejns.org/doi/pdf/10.3171/2014.10.JNS141025

How is this going to affect your matching? Barrow doesn't have an affiliated med school. Columbia and Florida are tied on that list (and both are higher than Penn), but Columbia and Penn crush Florida in terms of neurosurgery matches. This is not a good metric for choosing where to go to school if you're interested in neurosurgery.

2. # of students matching into orthopedics, averaged over number of years of match list data available

I agree most pre-med students put too much emphasis on this factor. Yet I maintain there are patterns and trends to be observed here. For example, for neurosurgery, Columbia has averaged 5 matches/year over the past 3 years. Similarly, Case Western has averaged 5.7 over the past 3 years. These are largely different from, say, Miami with an average of 2.3, or VCU at 1.3 (the national average).

Columbia's class size is 160 while Case's class size is 220. Big difference there. You have to control for that.
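
For a quick illustration of why class size matters, here is a tiny Python sketch that normalizes the per-year match counts quoted earlier in this thread by the approximate class sizes above. These are the rough numbers from this discussion, not authoritative figures.

# Per-capita neurosurgery match rates, using the rough per-year match counts
# quoted in this thread and the approximate class sizes above (illustrative only).
matches_per_year = {"Columbia": 5.0, "Case Western": 5.7}
class_size = {"Columbia": 160, "Case Western": 220}

for school, matches in matches_per_year.items():
    rate = matches / class_size[school] * 100
    print(f"{school}: ~{rate:.1f}% of the class matches neurosurgery per year")
# Roughly 3.1% for Columbia vs 2.6% for Case Western -- closer than the raw
# counts alone suggest, which is exactly why you have to control for class size.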

As a side note, Columbia is known as a neurosurgery powerhouse and actually has graduated more neurosurgeons than any other school AND has more neurosurgeons in academia than any other school (http://www.columbianeurosurgery.org/home/history/ and http://ps.columbia.edu/news/ps-tops-producing-academic-neurologists-and-neurosurgeons). And yet, just to mention your first point again, Columbia is ranked #12 in research index in that study you cited, under Ohio State, but I would take Columbia over Ohio State to match neurosurgery any day of the year.

I'm not saying match lists are useless, but you can't just look at raw number. You have to look at class size as well as where they're matching too, and the last part is not something that premeds really have the tools to know that much about (much of what I know about my specialty of interest comes through internal sources at my school).

3. Clinical ranking of school, as determined by one or multiple hospital safety rankings (see above)

My argument for this point is that hospital quality rankings differ based on the metrics used. Developing your own ranking of clinical quality based on outcomes measurements is worthwhile. The US government/CMS provides direct hospital comparisons as of July 2017 (https://www.medicare.gov/hospitalcompare/About/Hospital-overall-ratings.html). Alternatively, CareChex provides a comprehensive third-party rating system with very clear, objective criteria (http://www.carechex.com/ScoringRatingMethods.aspx).

How does this affect where you will match? Answer: it doesn't, and thus shouldn't affect your choice of medical school.
 
Actually, Florida and Penn have had identical match rates over the past 3 years (2 students/year), while Penn maintains an average of 20 more matriculants per class. Northwestern, as an example, has a 0.3 student/year match rate, dwarfed by Miami at 2.3, even though their USNWR rankings are tiers apart. Clearly, input from research productivity and match lists paints a broader picture. I totally agree that dividing match rate by class size is crucial, and a point I overlooked in my post. Debating these individual school differences is not the goal of my post. Looking at research productivity, there are some obvious answers to be gleaned. Certain medical schools with neurosurgery programs are poorly ranked or unranked. Assuming you have an equal chance of matching just because your medical school contains a neurosurgery department is naive, as demonstrated by these rankings. As with all of these rankings, the differences become minor when you reach the top end (including Ohio State, Penn, and Columbia).

@WedgeDawg, I think you're missing the bigger picture.

A weighted average of these factors produces results that cannot be summed up by individually debating each school. Neither my brain nor your brain can possibly synthesize research rank, match rate, USNWR ranking, and other factors all at once. That is why you create a weighted average. Your point about Columbia is a great illustration of this. There is no way Ohio State would end up ranked higher than Columbia when Columbia has nearly triple the match rate of Ohio State; it's superfluous to even debate that. The easiest way for you to see the value of this data would be to share my spreadsheet with weighted averages across all of the above criteria, so you may see for yourself where schools fall when other information is taken into account.

Incorporation of hospital safety data is not intended to increase chances of matching. It serves as a proxy for clinical quality. Rather than relying solely on subjective reports from physicians or SDN, one can incorporate national safety data, some of which is endorsed by CMS, into school choice. If "clinical quality" is a non-factor in your school choice, feel free to eliminate it.

All of this is beside my key point. My priority system is probably vastly different from yours. Among my factors of consideration is the cost of living at the medical school's location. As such, a school like Columbia will score low. Does that mean that it is an objectively poor school? By no means. My key message was for students to develop their own personal criteria for medical school choice, and to gather resources to develop a ranking based on their own (not USNWR's) priorities. I have shown how I have attempted to achieve this, in a clearly imperfect but useful way.

Edit: Typos
 
How is this going to affect your matching? Barrow doesn't have an affiliated med school. Columbia and Florida are tied on that list (and both are higher than Penn), but Columbia and Penn crush Florida in terms of neurosurgery matches. This is not a good metric for choosing where to go to school if you're interested in neurosurgery.



Columbia's class size is 160 while Case's class size is 220. Big difference there. You have to control for that.

As a side note, Columbia is known as a neurosurgery powerhouse and actually has graduated more neurosurgeons than any other school AND has more neurosurgeons in academia than any other school (http://www.columbianeurosurgery.org/home/history/ and http://ps.columbia.edu/news/ps-tops-producing-academic-neurologists-and-neurosurgeons). And yet, just to mention your first point again, Columbia is ranked #12 in research index in that study you cited, under Ohio State, but I would take Columbia over Ohio State to match neurosurgery any day of the year.

I'm not saying match lists are useless, but you can't just look at raw number. You have to look at class size as well as where they're matching too, and the last part is not something that premeds really have the tools to know that much about (much of what I know about my specialty of interest comes through internal sources at my school).



How does this affect where you will match? Answer: it doesn't, and thus shouldn't affect your choice of medical school.
You're definitely going into neurosurgery, aren't ya?
 
So you believe that all med schools train their students equally well for whichever specialty the student wants to pursue, without regard to the strength of its clinical departments? So someone doing a core rotation or sub-I at Boston Children's will have the same skills entering a peds residency as someone who did their rotations elsewhere? Isn't that the point of sub-I's? To train at a specific place that is strong for the specific residency you are aiming for?


Med schools don't "train" students for anything. The rotations are a bare-bones, superficial introduction. You learn your craft during and after residency. It takes years.
 
Med schools don't "train" students for anything. The rotations are a bare-bones, superficial introduction.

So... medical school graduates are no more qualified to start residency than a rando off the street who also graduated college but never went to med school?
 
So... medical school graduates are no more qualified to start residency than rando off the street who also graduated college but never went to med school?
Haha, I have seen some people say that's about accurate and that you know 0% of what you need at the start of your intern year.

I mean, would you say an undergrad degree in chemistry, with no actual lab experience, trains you for wet-lab grad work? Obvs not, yet those people do head into PhD programs every year, and those PhD programs do require you to have a college degree.
 
Haha, I have seen some people say that's about accurate and that you know 0% of what you need at the start of your intern year.

I mean, would you say an undergrad degree in chemistry, with no actual lab experience, trains you for wet-lab grad work? Obvs not, yet those people do head into PhD programs every year, and those PhD programs do require you to have a college degree.

Since I'm not a medical professional, I can't speak for medicine, but your statement is incorrect about graduate programs. An undergrad degree in chemistry prepares you for graduate-level chemistry work. Does it prepare you to work specifically in the lab? No. But that's why we actively look for undergraduates who have research experience. This is very important for PhD programs. Without the knowledge you get from undergraduate chemistry courses as well as graduate chemistry courses, a "chemist" becomes nothing more than a lab tech. You can go about doing procedures, but you are lost when there's nobody to tell you what to do.

It could be the same for medicine. But if medical school doesn't prepare you for residency, then why have medical school in the first place? Why not select the best and brightest straight out of college and train them directly in the specialties?
 
So... medical school graduates are no more qualified to start residency than a rando off the street who also graduated college but never went to med school?

No, you're putting words in my mouth. I'm saying med school prepares you for residency, but residency is what prepares you to function as a doctor. You don't learn to become a doctor in medical school. A July intern is pretty useless and needs close supervision. It doesn't matter where they went to medical school or where they did their sub-internship in peds.
 
Med schools don't "train" students for anything. The rotations are a bare-bones, superficial introduction. You learn your craft during and after residency. It takes years.

No, you're putting words in my mouth. I'm saying med school prepares you for residency, but residency is what prepares you to function as a doctor. You don't learn to become a doctor in medical school. A July intern is pretty useless and needs close supervision. It doesn't matter where they went to medical school or where they did their sub-internship in peds.

The words were already in your mouth before I commented on them. Med schools train students. Not to become fully practicing physicians, but to become competent enough to start residency, where they will learn to become fully practicing and independent physicians.
 
Since I'm not a medical professional, I can't speak for medicine, but your statement is incorrect about graduate programs. An undergrad degree in chemistry prepares you for graduate-level chemistry work. Does it prepare you to work specifically in the lab? No. But that's why we actively look for undergraduates who have research experience. This is very important for PhD programs. Without the knowledge you get from undergraduate chemistry courses as well as graduate chemistry courses, a "chemist" becomes nothing more than a lab tech. You can go about doing procedures, but you are lost when there's nobody to tell you what to do.

It could be the same for medicine. But if medical school doesn't prepare you for residency, then why have medical school in the first place? Why not select the best and brightest straight out of college and train them directly in the specialties?
That's odd, my bio degree didn't teach me jack for my research work in a bio lab
 
That's odd, my bio degree didn't teach me jack for my research work in a bio lab

That's because anybody can go into a bio lab, read a few reviews, and understand what's going on. The concept of biology research is generally easy. The concept of chemistry research is harder. You're expected to know what kind of signal a high-spin Co(IV) center would give on the EPR, for instance, or to reach a conclusion based on the spectrochemical series. That actually requires an understanding of ligand field theory, backbonding, etc. Read a chemistry paper; it's a lot different from the bio papers you'll read.
 