- Joined: Sep 14, 2017
- Messages: 38
- Reaction score: 17
This was for surgery, so I can't necessarily speak for the radiology match process.
I received the "ranked to match" emails mostly in January/February, but there was additional communication, even as early as a few hours after an interview, between October and February. I received letters in the mail as well.
I'm about halfway through my interviews at this point and have been surprised by a card and a letter from PDs so far. Can't say it has influenced my ranking at this point too much but it certainly impressed me.
No need to release the step 2 score unless required by any residency at which you interview. To be honest, once you make it to the interview stage, it’s doubtful that a slightly lower step 2 will even be looked at by the program. Of course, if you failed it or massively underperformed and the program is aware, it could hurt. At my program, we actually don’t go backward to get that data, but maybe that’s because we have penciled in and assumed a step 2 number if we didn’t have it when we made the selection for interview. It’s possible the programs that don’t make an assumption about your step 2 will actually go back and get it. However, I don’t think programs do it that way. I actually think that most programs have a mini ranking session immediately after the interview that roughly determines your general position and that any final meetings will only serve to tweak that slightly.
What should I say- just something ambiguous like "I loved your program?" Thank you.
Your implication that your program (and probably every other program) keeps records of people who lied to them in their love letters is nothing short of a sad comedy.
I assume you keep these records in a safe place and share them with other PDs during RSNA, LOL, to get back at those untrustworthy applicants.
I know you want applicants to be truthful, but they have invested a lot and the stakes are high, so it is only reasonable to send these sorts of letters as long as they would be happy at your program if they matched.
If somebody sends you a letter of interest saying they are going to rank you first, it means that if they match at your program they will be extremely happy; nobody is trying to deceive you or take advantage of your trust.
I just don't think it serves you well to outright lie in a love letter--it just spoils your reputation. If you mean to say "if I match at your program, I'd be delighted", then say that--it conveys your enthusiasm adequately. If you mean to say "I'm ranking you #1", fine to say that. If you think that saying "I'm ranking you #1" only means "I'd be delighted to match at your program", you have a problem communicating effectively.
Let me put that into a better context. Most of the time there is no clear-cut difference among the top three programs being ranked, and in some cases people change their minds after they send you that "I will rank you 1st" email; I guess you don't expect them to send a follow-up explaining that you are no longer their favorite program. On the other hand, some PDs of lower-tier programs hate not being considered an applicant's top choice (and I can't blame an applicant who wants to be ranked by the lower-tier programs just to be safe) and indirectly ask for love letters during the interview (for instance, one PD asked me after the interview to send an email before the NRMP ROL deadline if I was still interested in their program). All in all, I would rather believe in shades of grey than absolute black and white.
Just an applicant here...but I HIGHLY doubt that extra pub is going to make any difference. I think that sending another email could just make you come across as annoying and it might hurt you more than help you.
Hi Radiology PD,
A week ago, I sent an update e-mail to programs informing them that I had an abstract/poster accepted. This week, I had another publication accepted. Would updating them again so soon be too excessive/annoying? At this point, does it even matter to programs or affect rank list? I already have a research-heavy application (15 publications), so I am leaning toward not updating them. Thanks for your help!
1. What would you think of a candidate who is reapplying this year after not having matched last year? Would this be a red flag for you?
2. Some programs state clearly on their website that they don't accept IMGs. Since your program has almost never hired IMGs, why don't you do the same?
Trying to be as honest as possible. Here are the problems for our program with respect to most DO applicants and certainly all IMG applicants:
1. The perception (again, perception) that we struggled to fill the program if a number of our residents are DO/IMG. Without trying to give away too much, I'm not at Mallinckrodt--my program isn't one of those 15+ residents/year beasts that can have an IMG or DO and everyone decides "dang, that person must be awesome"--instead, for us, because we have a smaller program, the impression will be, "why couldn't they fill with AMGs". Realize that this is not just a perception among applicants--I'd have to explain to all my faculty (none of whom are DOs) that we really wanted this DO--they are going to think we had a bad Match.
I'd love to hear your general thoughts on the Schulze Ranklist (on the spreadsheet) that has been compiled by an applicant where the most desirable/competitive programs are ranked in order. More specifically, I am also interested in hearing if you believe the "perception" you were referring to earlier (quoted), actually exists for programs based on this list, or if it is just presumed? An extension to that question is, even if the perception exists, do you think it significantly impacts the competitiveness of a program?
I found the spreadsheet you reference and looked at it. I don't really understand your question and its relationship to the spreadsheet listing of rank choices and the subsequent "ranking" of programs using the Schulze method.
In my quick review, I only found 2 of the people posting their rank lists who indicate that they are IMGs. The places those 2 people interviewed at are fine but not universally considered "top":
CT - Hartford - Hartford Hospital Program - DR (both applicants)
IA - Iowa City - University of Iowa Hospitals and Clinics Program - DR
MA - Boston - Tufts Medical Center Program - DR (both applicants)
MA - Burlington - Lahey Clinic Program - DR
MA - Springfield - UMMS-Baystate Program - DR (both applicants)
MA - Worcester - St Vincent Hospital Program - DR
MA - Worcester - University of Massachusetts Program - DR
NJ - Newark - Rutgers New Jersey Medical School Program - DR
NY - Syracuse - SUNY Upstate Medical University Program - DR (both applicants)
OH - Cleveland - Case Western/University Hospitals Cleveland Medical Center Program - DR
OH - Cleveland - Cleveland Clinic Foundation Program - DR
PA - Philadelphia - Albert Einstein Medical Center Program - IR/DR
PA - Philadelphia - Drexel University College of Medicine/Hahnemann University Hospital Program - DR
TX - Houston - Baylor College of Medicine Program - DR
VT - Burlington - University of Vermont Medical Center Program - DR
I really don't feel like rehashing this issue; see my prior posts. Briefly, as I mentioned earlier, there are 2 main issues when it comes to inviting IMGs to interview:
1. It's hard to evaluate the candidates on paper if you are a program that relatively underweights board scores as a selection criterion (like we've decided to do at my program). Remember, I rely on % honors in core clerkships and relative difficulty of getting honors at that medical school as a metric, and that is practically impossible to determine for IMGs. When you have an abundance of other competitive candidates who are easier to "score", it's easy just to select those candidates for interviews.
2. There is a perception that your program had difficulty recruiting great AMG candidates if you consistently have IMG candidates. This is not a problem for the "big dogs" (like Mallinckrodt or MGH), because the assumption when you see an IMG in one of those residency programs is that the IMG worked there as a research person and was extremely well liked and vetted by the time the person joined the residency. So the fact that Mallinckrodt or MGH have an occasional IMG speaks only to the likelihood that the person spent a year or two doing research there and is incredibly good.
Like I said, one year we ranked an IMG top of our list. The person ended up at MGH, where the person had done research.
As for thoughts on this ranking of programs using the Schulze method, it seems to me the methodology is sound for establishing the relative ranking of programs for the cohort of people who entered data. Keep in mind that since the cohort who entered data have a variety of different personal considerations (geography, IR vs. DR interests, significant other preferences, etc.), the ranking does not really speak to "quality of program". If you as a candidate generally share the same personal considerations as the cohort, then I guess the rankings are legit for you, but they are skewed toward the interests of the cohort that entered data. For example, if there were more people entering data in this spreadsheet from California, and those people wanted to stay in California, then an average program in California is going to rank higher than the most outstanding program in the midwest. Not taking anything away from UC Irvine, I would say this is in play for how UC Irvine is #20 and Mayo Rochester is #35. I'm not buying that UC Irvine is a better program than Mayo Rochester.
What would be far more interesting and useful for future candidates (and not really possible with the small data set) is to have "cohort pools" so that candidates (and even programs themselves) might see how programs stack up for certain pools. Are you a candidate primarily interested in California schools? Ok, look at the list generated by the Schulze method on rank lists for the cohort of candidates (preferably over a few years) who were primarily interested in California schools, and see how the programs got ranked, to determine the "market valuation". Are you a candidate primarily interested in small-to-medium sized programs with ESIR potential? Ok, look at the Schulze-generated list for the cohort with similar interests.
When you mish-mash everyone's personal interests into a big pile, then I guess you can argue these idiosyncratic preferences cancel out--but you need a lot more data than the 100 rank lists that seem to have been used to create this spreadsheet. And all you are really getting is the perception of medical students about the relative desirability of programs. What you really want is the relative ranking of programs based on actual quality of training, not medical student perception of quality of training.
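For readers unfamiliar with the method, here is a minimal Python sketch of what a Schulze computation over rank lists presumably looks like (the program names and ballots are hypothetical, and the spreadsheet's actual implementation is unknown). Note the `pos.get(..., len(candidates))` line: any program missing from a ballot is treated as ranked below every program on it, which is exactly the assumption questioned elsewhere in this thread.

```python
def schulze_rank(ballots, candidates):
    """Rank `candidates` given ballots, each a preference-ordered list of names."""
    # d[a][b] = number of ballots that rank a above b.  A candidate
    # missing from a ballot counts as ranked below everyone on it.
    d = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and pos.get(a, len(candidates)) < pos.get(b, len(candidates)):
                    d[a][b] += 1
    # p[a][b] = strength of the strongest path from a to b
    # (Floyd-Warshall with widest-path semantics).
    p = {a: {b: d[a][b] if d[a][b] > d[b][a] else 0 for b in candidates}
         for a in candidates}
    for k in candidates:
        for a in candidates:
            for b in candidates:
                if a != b and a != k and b != k:
                    p[a][b] = max(p[a][b], min(p[a][k], p[k][b]))
    # Score each candidate by how many rivals it beats via strongest paths.
    wins = {a: sum(1 for b in candidates if a != b and p[a][b] > p[b][a])
            for a in candidates}
    return sorted(candidates, key=lambda c: -wins[c])

# Three hypothetical ballots over three hypothetical programs:
print(schulze_rank(
    [["Alpha", "Beta", "Gamma"],
     ["Alpha", "Beta", "Gamma"],
     ["Beta", "Gamma", "Alpha"]],
    ["Alpha", "Beta", "Gamma"]))       # -> ['Alpha', 'Beta', 'Gamma']

# The "unranked means last" assumption in action: a voter who only
# interviewed at Alpha still pushes Alpha above Beta overall.
print(schulze_rank([["Alpha"]], ["Alpha", "Beta"]))  # -> ['Alpha', 'Beta']
```

The second call illustrates the flaw under discussion: a ballot that simply omits a program (because the applicant never interviewed there) is counted as a vote against it.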
It would be great if NRMP ran the Schulze method on the ranklists and gave the information out to candidates and to PDs. Then it would be each program's job to make their program better or market their program better--it would be a race to the top.
I just looked up the Schulze method. It apparently assumes that all "unranked" programs are by default lower than "ranked" programs in any given list. That seems like a fundamental flaw when applied to the spreadsheet, where someone may rank BID top and not BWH/MGH because they didn't interview there, but the algorithm automatically assumes that the rank list placed BID over BWH/MGH. It sounds like a flaw that would break any meaning that such a composed list would have, as it assumes that all candidates have access to the same pool of programs (when in reality, each candidate only has access to whatever programs sent him/her an interview).
3rd year med student
US MD
Step 1 204
Step 2 Pending
Clerkships: 4 Passes, two pending, but with clinical honors and good evaluations in most; honored the shelf on the only one where I didn't get clinical honors.
ECs:
Worked throughout med school. I've held lots of leadership positions in med school committees and volunteering. Involvement in SIR (RFS/MSC with leadership positions).
Research: several experiences and publications in IR-adjacent fields and a few case reports.
I don't want this to turn into a "chance me" thread, so I'll try to be as broad as possible in my response.
Each program emphasizes board scores to different degrees, but all programs are concerned about students who "test poorly" because radiology boards is now a computerized test and radiology is a field in which broad knowledge is important. My guess is that a combined step 1 + step 2 score for most IR applicants is going to be over 480, and if below 450, you are likely knocked out of consideration for IR programs right now unless you have some other fantastic hook -- if IR is your interest, you're looking at doing an independent IR residency after DR (either with or without ESIR). For DR programs, much will depend on your step 2 score and any other hooks you can develop.
You are going to have to get beyond the USMLE score filter. Your best bet is usually securing a spot in the DR program at your medical school by getting a hook with the department. Aways are another method to get past the filter, but you are going to need the personality to shine in an away--probably has no bearing on how good you will be as a radiologist, but it's best if you have that magnetic personality that is the perfect combination of respectful, jovial, gets along with other residents/students, fun to be around, etc., etc. Sometimes you can come off as a bull****ter if you try to fake this perfect personality.
Unfortunately, most PDs don't have the bandwidth to look into your clinical clerkship performance to see that you got "clinical honors"--in fact, when you tell me you got "clinical honors" but only passed your clerkships, what it tells me as a PD is that you aren't great at taking tests. Again, don't take this the wrong way, but PDs don't want residents who have a problem taking tests, since if you fail the radiology boards, it's a problem for the program.
So, you need a hook to get into the better mid-level programs, given that your Step 1 falls below the knock-out filter of most mid-level DR programs. The most important thing you can do is spend every last bit of effort you can muster to blow away Step 2, and then concoct some reasonable explanation for why you didn't do well on Step 1 which you will relate in a personable, real way during interviews. Then, create a really brief email that you'll send to targeted programs asking them to consider your "overall" step performance in light of the fact that you did so well on Step 2 before they make a decision about your application. Another possible hook is to latch onto a radiology mentor who has some clout, impress the s**t out of them doing research or even a research year, or in some other way impress that person, and then the radiology mentor can help open some doors for you. In general, ECs aren't much of a hook--leadership in medical student organizations checks a box but don't think this gives you a great hook, UNLESS in the process of doing the EC, you can score a hell of a letter of recommendation from a faculty mentor.
If you don't get 225 or higher on Step 2, it's going to be hard to get into a DR program I think. Not impossible, but hard. You'll have to consider programs with some blemishes, and then work your butt off to become the best radiologist you can be. Good luck.
Thanks for the general response! In general, what is the "cutoff" step 1 score for most DR programs?
Hi RadiologyPD,
Thank you for taking the time to answer my question. I was wondering how radiology PDs across the country value clerkship grades as a critical factor in interview/ranking selection, given their known subjectivity and poor inter-rater reliability.
This is what confuses me about your process for assessing applicants, @RadiologyPD. You admit that clerkship grades vary considerably between institutions and that, at best, you can only guesstimate the relevance of clerkship grades based on the "relative difficulty of getting honors at each medical school". Then you mentioned that any score above a 240 is essentially equivalent for Step examinations. In the context of the modern USMLE examinations, scoring 240, 250, and 260+ are very different and represent very different levels of skill and knowledge. We're talking getting a B/B+ (240) vs. getting an A+ (260+) on the end-all-be-all examination for medical students, and the only standardized metric for assessing applicants. Once in a while a student will slip through the cracks, score 230 on all of their practice exams, and have a lucky day and score 260+. This is an anomaly, however, and in most cases, scores on this examination are a very good gauge of the quality of students' basic science foundation. Ask any student what their average on the 8+ NBME/UWSA practice examinations was and I will guarantee most score within 5-10 points of their practice tests, meaning these examinations do accurately portray a student's ability and, make no mistake, there are variations in ability between those who score 240, 250, and 260.
Knowing that USMLE Step 1 is the staple result of every student's basic sciences education, why would you make the cutoff 240? Why hold clerkship grades to such high regard when, as you admit, they are more subjective and less reliable than Step 1? It's bizarre, because you come off as a PD from a top 20 program, and I would assume you would want applicants who are self-motivated, intuitive, and strong test-takers, the combination of which usually result in a strong (250+) or superb (260+) Step 1 score. Isn't it widely known that charisma, "apparent" work ethic, and personality play the largest role in the subjectivity of clerkship grades, which are all factors that can be assessed at an interview by your selection committee after filtering more stringently using USMLE scores?
Just seems odd to me that you have an extremely useful metric at your disposal for meticulously comparing applicants and you brush it off and dismiss anything above a 240 as basically the same thing as a 260+. Obviously this selection process works at your institution and you are content with the applicants you get, so I'm just offering this up for discussion. I will say that it's honestly a bit insulting to those of us who know that there is a stark difference in the mindset and work ethic between the student who scores 240 and the student who scores 260+. The spreadsheet made it clear that the notion that "everyone scores in this range so 240+ is the cutoff for a great score" is incorrect. The majority of students applying radiology actually score <260.
Thanks for hearing me out.
I had a couple of questions:
1) How do you weigh someone in the bottom quartile from a top 10 US MD vs someone in the top half or top quartile from a mid or lower tier program?
2) A lot of programs, even less reputable ones, have fantastic fellowship lists with graduates going to MGH/UCSF/MIR, but isn't this somewhat the product of radiology being less competitive a few years ago? As radiology residency becomes more competitive, is fellowship competitiveness going to catch up?
3. What are your criteria for choosing chief residents?