Do MD-PhD adcoms expect to see specific ranks for applicants in rec letters?

evasive fish

Presto
2+ Year Member
Joined
Jul 2, 2018
Messages
87
Reaction score
51
Besides things like productivity, independence, ownership of a research project, interpersonal qualities, etc., do MD-PhD adcoms expect research mentors to include specific ranks in their rec letters that compare the applicant to other mentees/students/techs they've had in the past?

For example, are they expecting to see something like "Among the X research techs I've had, this applicant ranks in the top 10%"? If such a rank is not included in a rec letter, is it common for adcoms to ask the recommender for it? Or is a specific rank not necessarily expected in a rec letter?

Thanks.

I've read at least a few dozen letters for a very competitive post-bacc research position. The application and selection criteria for the position are analogous to those for MD-PhD programs (research potential, writing skills, interpersonal fit, and academic excellence).

It wasn't uncommon for the good letters to say "E. Fish is in the top 5% of undergraduates in the major at Harvard." Almost all letters had some variation on the line "I give E.F. my highest possible recommendation." Because they all included this language, it wasn't all that helpful. The most telling information was the specifics, of course: "Eva Fish read over my grant/manuscript and gave me comments that actually made me change stuff."

I certainly don't think it would hurt to be missing the phrasing you mentioned, as long as the letter is clearly glowing in other ways. But maybe some admissions offices that have been doing this for a long time have a more formalized word-coding approach or something like that. Who knows.

I've only seen 2 of the letters people have written for me throughout college, so I was never sure of the contents of mine. For MD/PhD apps, because of the stakes, I felt comfortable including in my initial emails to my writers a laundry list of anecdotes and phrases I thought were reasonable and knew I wanted in there. Not sure if you're applying right now, but in the future I would highly recommend it: I didn't get any bad reactions, and the worst outcome was that they didn't include something. (Although I've been blessed with really nice mentors... read the situation.)
 
A statement of rank from a professor who has been at multiple institutions, including a top R1 institution, is always helpful. The real problem is labeling an applicant top 1% at a little-known institution; the question then becomes whether the applicant is able to compete nationally. Thus, unless multiple letters from a less-than-R1 institution swear that the applicant is awesome, that statement is not as credible as a single letter from a PI at an R1 institution making the same claim (with the applicant backing it up with presentations, posters, and/or manuscripts).

Statements of rank appear in about a quarter of LORs, so they aren't uncommon. A statement of rank in a letter for which the applicant has not waived the right of access is seen as meaningless.
 
If you are not at an R1 or elite liberal arts college, the ranking needs to be "top 2%" to carry much weight. Having said that, I am suspicious of these rankings; they are obviously subjective, and the incentive for the letter writer is to get their student into the best program possible. The letter writer pays no penalty if they oversell a student, so they inflate the ranking. When I see "Top 1%", I discount that to 10%; "Top 10%" becomes 25%; and "Top 20%" becomes 50%. "One of the best" means that there were several individuals who were better. I have read tens of thousands of reference letters and have seen fewer than 50 that actually say anything objectively negative about an individual, and only a very few that say the individual would not be a good scientist.

Having been at this for many years, I have the benefit of seeing letters from the same advisors. When I see a letter from a prolific trainer of young scientists like Joan Steitz, I can go back to letters she wrote over the past several years to see how the current letter compares with the past letters. While all these letters are self-plagiarized to varying degrees, the best ones comment on specific aspects that are important to us and differentiate among their advisees by their level of praise and use of modifiers (best, agile, rigorous, etc.). At the other end of the spectrum are overworked (or lazy) advisors, who basically use the Microsoft Word find-and-replace function to recycle letters year after year. I saw that earlier this year when the letters for two different applicants were verbatim, except for the names of the applicants. (Fortunately, they were the same gender, so the pronouns did not have to be changed.) If the letters were to be believed, the individuals worked on identical projects, possessed the same characteristics, had the same potential for a career in science, and were "among the very best" the faculty member had trained.

As someone who has to write quite a few letters for residency applications every year, I understand the allure of writing a generic letter, and also of overselling an individual's abilities. However, I know that my audience is small (maybe a dozen medicine program directors, 5 or 6 pediatrics, 4 neurology, etc.), and they will be reviewing several students from my program every year. If all the letters are the same, they become worthless. Additionally, if I oversell a candidate this year and he struggles in residency, that will be fresh in the residency program director's mind when they read my letters for next year's applicants. I owe it to my students to put the effort into individualizing the letters and to be honest about their abilities. Therefore, I get annoyed when I see fairly useless letters from PIs who only have to write one or two letters a year to MD-PhD programs.
 
Thanks for this insight!
 
This is why a lot of advisers have their students write their own letters of recommendation for the adviser to sign. As a reviewer, you'll get a highly customized and highly praising letter every time!
 
I don't interview MD/PhD applicants these days, but I do interview candidates for our research-track residency program, and I basically don't pay any attention to the LORs at all. They are all glowing and insist that the applicant is the top student they have ever trained, so they are completely useless for differentiating between applicants. I think the LOR has become an empty exercise and I focus more on measures that can better differentiate between applicants like publication record, quality of writing sample, academic record, and personal presentation in the interview. Some recommenders include these percentile ranks that the OP mentions, others do not. I don't pay much mind either way.
 