Struggling in Clinical Rotations


EvanHansenFan · Full Member · 2+ Year Member · Joined Mar 13, 2020 · Messages: 66 · Reaction score: 57
Hi everyone,

I am a rising M2 at a US MD school with a 3-year curriculum.

My school uses a longitudinal model for rotations, and we started them at the end of our M1 year, about a month ago.

I have rotated in almost all specialties, though some only for a few days, given that we will come back to them at the end of our M2 year and throughout M3.

We have started to get evaluations back from our clinical preceptors, and I have only gotten Passes on those rotations (our school uses an Honors/High Pass/Pass/Fail grading scheme).

How does a Pass look to residency programs? I am only interested in primary care specialties.

 
You'll be fine. Talk to your preceptor for feedback on how to improve.
 
Don't worry. You have been sampling different specialties for only a month; a few days isn't enough for anyone to get to know you, so "pass" is just fine.
 
Explain (in detail) how your rotations are graded.
 
Most of the schools with three-year curricula seem to be top tier, so you'll be fine.
 
I agree with the wise Goro: You cannot change the past, but one important aspect of improving is to ask the people grading you how you can do better. Seek out feedback, and then practice, practice, practice implementing it.
 
I think the advice here is good. Nothing can change your prior evaluations, so it's best to get feedback on what you could do better. How much a "pass" will matter is impossible to say -- some schools have 10% H / 10% HP / 80% P. Some have the reverse. In any case, your grades in your M2 year are going to matter more. Focus on those!
 
Is that right, that M2 grades carry more weight than M3?
 
Since the OP goes to a three-year school, their second year is their main clinical year. They will apply at the start of their third year.
 
Most of us have been there before, TC. It's hard to get honors on everything, given how subjective these things are.

Crush your USMLE Step 1 exam. Get the best possible score and start establishing connections with mentors who can get to know you so that they can write you excellent letters of recommendation.

I never thought twice about a candidate who got a "Pass" on a rotation as long as they interviewed well and had glowing LORs and a high Step score. Best wishes on your journey.
 
On your initial rotations, you are lacking in many basic skills, and so are your classmates. This is normal and frustrating, so get feedback and work on the suggestions. Everyone was in your shoes at one time.
 
I had a fleeting moment when I thought that maybe Honors in rotations was actually an achievable metric based on something real. It might not have been based in medical acumen, but surely some combination of charisma, intelligence, and working the system could result in consistent honors.

Now that I'm 2 rotations deep in M3, I see this is completely false, at least at my school. The whole "student cured cancer, 3/5" meme is real. I just had a primary care stint where I worked with an attending for 3 days. By the end, I was seeing half her patients and writing notes that received minimal-to-no changes. I took on very challenging patients (recent immigrants, severe developmental delay, etc.). I was presenting concisely with reasonable plans. I was affable and well liked by the staff. I built genuinely good rapport with the attending. I can point to tons of criteria on our evaluation form that would qualify me for a 5/5. Within 4 hours of leaving her service, I got straight-average evals with minimal comments.

Meanwhile, I worked for 3 days with another attending, gave one completely jumbled presentation, and then received 5/5 in every category.

Another attending, whom I worked with for 5 days, pulled me aside to tell me I was an "astoundingly good med student for an M4," and was even more blown away when I told him I was an early M3. He never filled out the eval form and won't answer emails.

At this point you can't convince me this isn't random. Step 1 scores don't exist anymore. M1/M2 are P/F. Subjective evals are 70%+ of my M3 clerkship grades, the shelf is anywhere from 15-25%, and another 10% is a subjective oral exam. The admins who set the grading criteria don't understand statistics. They think that because 30-40% of students are getting Honors, it's a decent spread. Nope: it's a stochastic phenomenon, and the grades are driven by some combination of insider information (e.g., "Attending X gives 5s, so claim you worked with him for 7 days; Attending Y gives 3s, so don't send him an eval") and dumb luck.

I seriously cannot believe I put aside NINE years of my life on minimal pay, minimal sleep, making countless personal sacrifices to do an MD/PhD, only to be judged quantitatively on these flimsy, subjective criteria. This profession is infuriating, and I've never been closer to leaving altogether.
 

My best evals came from attendings who actually liked me, regardless of most other parameters (time spent together, presentations, skills I demonstrated, etc.).
 
Definitely the trend, but some people just give garbage evals, or what they think is average, every time, and your whole grade can get tanked. Some think they're doing you a favor by giving straight 4/5, but at my school that's the "competent" category and sets you up for a Pass. I thought I genuinely connected with this attending. We would chat casually; she even shared personal details and life/career advice, and gave me a lot of leeway because she trusted me around patients and trusted that my histories would be thorough and accurate. 4/5, which swept me right out of Honors range and onto the borderline between HP and P.

I've had similar experiences with other attendings. It feels like it should track performance, but it's random. At my school you need 4.5/5 to get Honors, but realistically 4.6, because the shelf cutoffs for Honors are ~95th-99th percentile, so you can't make up for a bad eval with a good shelf. You can get 4.75 averages from 4 attendings, but if the fifth gives you a 3.5 (which is what nearly all med students would get if attendings graded according to the rubric), you're sunk for Honors.

I think it would be better to just divorce the quantitative grades from it all. If it's subjective, keep it subjective. I had a career before med school so I'm used to subjectivity, but people fully understand that an evaluation or letter is just some person's opinion. In medicine we decided to put numbers on top of an inherently non-quantitative process. I guess that's sort of par for the course for academic medicine, where at least half the faculty (and nearly all the faculty who are involved with designing clerkships) do "research" based on surveys.
 
Another attending, whom I worked with for 5 days, pulled me aside to tell me I was an "astoundingly good med student for an M4," and was even more blown away when I told him I was an early M3. He never filled out the eval form and won't answer emails.
Classic
 

My post above references precepting residents, but I would imagine it's similar for med students (though there is more longitudinal continuity in residency). It's really hard to be a conscientious evaluator.
 

So… it’s the PD’s fault because they didn’t want you to be too nice?

There's a systemic issue here, but I think the blame really lies with the rubric.
 
This was so frustrating to read because it reflects my exact experience. Our school also has the exact same grading rubric.

I remember my best friend (one year ahead of me, got all HP/H) telling me that the best thing I could do was stop trying to get good evals, and internally I thought, "Wow, he's so jaded. That's not how it's really going to be! I'm going to prove him wrong!"

I kissed my CD's ass every. damn. day. on my first rotation. I checked all the boxes, I jumped on every available patient, I was enthusiastic, I showed up with a gigantic smile, and I built the strongest rapport with my patients. I did fumble a few times on my presentations, but it was my first rotation; nothing serious. I was self-aware enough not to come across as a gunner.

The CD gave me 4/5 across the board, which placed me in High Pass territory on a rotation that 80% of students Honor. I immediately became so jaded.

Now I show up and try to act interested, but I don't go above and beyond, I don't stay late, I don't jump on new patients, I don't kiss ass, I don't pre-round. I send evals to anyone who vibed with me about video games or anime or some BS.

Surprise, surprise. My evals have stayed consistent at 4/5 or improved to 5/5 because they like me as a person, not because of my academic acumen or ability to treat patients. What a joke.
 
The blame lies with trying to turn something subjective into something objective. Just publish the comments. If there are no good objective measures, make some. The burden lies on those who'd like objective data to collect it.

Imagine you're trying to come up with new guidelines for the treatment of a rare disease. A literature search reveals 3 studies, all from different regions of the world, each with N = 30-100 patients. The outcome criterion is clinician evaluation of overall patient constitution over a one-month period across four domains: breathing, eating, urinating, and pain. One study says treatment A is the most efficacious treatment available and drastically improves symptoms in all domains. A second says treatment B is qualitatively more efficacious than A, but that neither had a significant effect in any domain. A third says treatment A significantly improves pain, but that neither treatment had a clinically significant effect in any other domain.

Now imagine along with that you have a well-designed long-term RCT of treatment A vs. placebo that measures number of hospitalizations over 5 years, but N = 15 per group. Treatment A shows a statistically significant improvement of 8 vs. 13 hospitalizations. You have another similar RCT of treatment B vs. placebo that shows no significant effects.

What recommendation would you make? With all the data, you would likely give a weak recommendation to A and strongly recommend the use of individual clinical judgement. Without the RCTs, you couldn't outright recommend either treatment.

That's effectively the sort of data we have with med school grades. We had Step 1, which acted as a well-controlled study of something objectively measurable (clinical knowledge), but it had a low sample size (single-day performance) and didn't measure all the necessary variables (multiple-choice testing ≠ clinical performance). Now all we have is a large volume of mostly garbage short-term data that comes to wildly different conclusions about fairly irrelevant measures. We have institutions like AOA, which has become a stochasticity award for the good students who happen to get a favorable attending mix and don't miss Honors on any rotation.

I know it feels to preceptors like they have an objective view on a student. I've been in similar shoes evaluating new PhD students or undergraduates. It feels like you can sit down with someone, work with them for a few days, and get an idea of what they're made of. However, after 5 years of this, even 3 months working with someone every day was hardly an indicator. I can think of multiple students who struggled with very basic concepts who went on to top PhD programs and published in top-tier journals. I can think of multiple students who were punctual, affable, and helpful in the lab who ultimately floundered in a PhD program and mastered out. As with everything human, if you want objective results you need objective data.

Sincerely,

-Person who just missed Honors by 0.01 points after absolutely busting his a** for an attending who gave straight average evals with minimal comments
 
I completely agree that subjective evals are difficult to interpret and very prone to bias and inter-evaluator differences. The Med Ed community has proposed all sorts of ways to try to fix this -- most notably Milestones and Entrustable Professional Activities. Both are completely doomed to failure. Milestones are theoretically scaled assessments with specific anchors that are supposed to get all evaluators, after a bit of training, to score the same behaviors the same way. I remember going to a meeting workshop where this was demonstrated -- after an educational session they showed the whole group a clinical interaction, and then had everyone score it using their milestones. They expected everyone to score it the same. No surprise, scores were all over the place. And this was a group of highly motivated Med Ed folks - I would expect it to be worse with those who don't focus on this for a living.

EPAs are equally doomed. They are based upon the question of how much you would trust your evaluee to do some task. This won't standardize anything either.

These approaches, and IMHO all similar approaches, are doomed because they are trying to assign objective values to something that is inherently subjective. And this will never work, just like trying to put your left glove on your right hand (OK smarty pants, I know you can turn the glove inside out and it will fit. Just go with it....)

But what are we left with? If there is no grading, then when programs go to pick people to interview, how would they pick? Exam scores are perhaps a bit of a help, but they clearly fail for the reasons you mention above. Including all comments and telling us to just do a "holistic review" is unrealistic -- some people only have comments that say "good job," and some schools cherry-pick comments where others don't. LORs always say the student is in the top 1% of anyone the writer has ever worked with. God only knows who wrote your PS.

What's left? Whom you know? The stature of the school you go to? How many pubs you were able to crank out? I'm not a fan of any of those.

I don't have an answer. Probably some sort of committee evaluation: collecting info from as many sources as possible, ideally using the same evaluators and experiences as much as possible.
 
Shelf exam scores + return to scored Step 1 + keeping all clinical comments in MSPE?
 
Exam performance predicts exam performance. There probably is some value to reporting shelf exams, simply to give students more data points to average out a poor testing day. But there is no way to enforce reporting of shelf scores, unless the NBME creates some sort of transcript for them. Which they won't.

Keeping all comments in the MSPE: I can just see it now, students appealing to have a sentence or phrase omitted. Plus, with no context about the evaluator, it remains very subjective.
 
Exam performance predicts the ability to set a target and meet a goal. It's a different skill set, but at least it's tested objectively, and the skills required (intelligence, attention to detail, diligence, hard work) translate readily to medicine. Plus, M3 performance predicts M3 performance, which is wildly different from being a resident or attending, especially in academics, where clinical skill is often secondary to research output, leadership, etc.

Attendings and even residents rarely understand what's challenging vs. easy. They don't see when a student is given all night to prepare a presentation vs. getting assigned a patient 1 hour before rounds. Evaluators have bad data and very little of it.

That's before getting into gaming the system and non-merit-based evaluating. Most resident evals are irrelevant to medicine. Oh, you had good vibes and you're both karaoke buffs? Honors. You look like the resident's ex? Pass. Then there's the entire ecosystem of people who share information on how residents/attendings typically grade.

As a student, you can tell who's doing a good job, working hard, and presenting/managing patients well given the circumstances. There's a massive correlation between the students clearly doing well on the wards and shelf/exam performance. I've yet to meet the fabled med student who is a diagnosis-and-management whiz and an overall rockstar on the wards but doesn't also do very well on the shelf. OTOH, I've met a ton of students who manage to slide into Honors range by selectively curating evals, picking the best rotation sites, and getting buddy-buddy with residents, who then complain loudly about how their 68 shelf score knocked them out of overall Honors.

So what kind of data do you want, tangentially relevant data or partially falsified data? Do you want to treat the patient based on an RCT in primates, or do you want to treat the patient based on the heavily tampered data set that's been p-hacked to death?

There's a role for subjective evaluations in M3 grades, but it shouldn't be the be-all and end-all like it is in a post-Step 1 world.
 

What if residencies kept an index of all the attendings who take medical students, showing the range and average of the evals they give out, so they could at least see that an attending who gave an applicant an average eval is known for giving average evals to everybody :rofl:
 
Not applicable to residencies, but not a bad idea for a school. It would be helpful in assigning the overall grade for a rotation.
 
Interestingly enough, I tried doing this with residency evaluations. This was low stakes, since there are no grades in residency, so it really didn't matter. The idea was to take each evaluation score and weight/adjust it based upon the average evaluation score given by that evaluator: a 5 from someone who always gives 5's would be scaled down, whereas a 4 from someone whose average is 3 would be scaled up. In the end there just often aren't enough data points, and if you end up with a large number of "easy graders," you end up with a low/average adjusted score no matter what your performance is.
 
This already happens with routine evaluation comments that are at perceived risk of ending up in the MSPE.

My school is reasonably strict about not removing bad comments. They only remove a comment if it's blatantly inappropriate, or if the attending/resident demonstrated abusive behavior and the med student let the office know well before the end of the rotation. It's crazy how some schools will remove any slightly bad comment while other schools keep every single comment no matter what.
 
A single rotation in M3 is about... 18x shorter than the shortest residency. Where does that leave us as far as sample size for clinical grades in M3?
 
I’ll have to disagree just a bit here. For all the flaws in the data, applicants with straight honors clinically do tend to be pretty darn good residents in my experience. There’s only so much you can game the system and a year is a long time to fake it. Shoot, I’ve seen plenty of students’ performance tank on a single away rotation!

I'm less concerned about accuracy and fairness to students and more concerned about how well it works for me on the other side. And for that it seems to be fairly valid. Are some good students getting hosed along the way? Sure. But there are still so many who ace the whole shebang - far more than one can interview.

I'm always fond of the adage that there are three sides to every story: yours, theirs, and the truth. A student says they worked their butt off on a rotation, the attending says "meh," and I'm always curious what the eye in the sky would think of it all. Hard work does not necessarily equal Honors, just like it doesn't always equal a high exam score.

Yes, it's an imperfect system, but it's not bad. Most students score in the middle, and wouldn't you know it, most interns are pretty average too. But then there are some students who honor everything they touch, and wouldn't you know it, they always seem to be just a cut above the rest as interns as well.
 

Thank you!!! I totally agree. Even though the data is subjective, students do anywhere from 10-12 rotations that make their way into the MSPE, depending on whether the med school includes 4th-year rotations.

If the med school gives out a low proportion of Honors (say 30%) and you can't honor any of your rotations, then you can't blame the system, because you had many, many rotations to try.

Even if the school gives out up to 50% Honors and you manage to honor 9-10 of your rotations, that means you are a good student, because it's a 50-50 toss-up every time. You had many, many chances to screw up but were still able to get majority Honors.

I think it's the ratio of Honors to total rotations that speaks volumes.
 
Who cares… everyone becomes a licensed physician as long as they finish the training. This whole BS about who's better is so pointless and egotistical.
 
It's not egotistical. Most people get a driver's license, yet there are plenty of people I would not ride in a car with. Most patients aren't looking for competence, they are looking for excellence. Just because you finish training doesn't mean I want you as my doctor. Lots of people do the bare minimum. A high board score alone suggests you are bright enough, but it doesn't mean you have the work ethic or the skill set I'm looking for. There are people I want to take care of me and my family, and then there are people I want to take care of my mother-in-law. Good programs aren't looking for the latter. You don't realize how much havoc matching the wrong person can wreak on a program; they are there for years.
 
Most patients aren't looking for competence, they are looking for excellence.
Most patients don't know their doctor's clinical grades or class rank. Some don't even know where their doctor did residency, or even that medical school and residency are two different things.
 
Well, yeah. But don't think patients aren't vetting their doctors. I get asked all the time for referrals, especially for surgeons. I'm pretty sure most patients aren't looking for a doctor who does the bare minimum.
 
I think the point is for those who have to train them. When you’re a competitive program, you have options.

You also have to consider the PD perspective. PDs are typically given very limited protected time for their duties, and no matter what the FTEs say, the clinical burden is constantly knocking at the door demanding more time. And remember that a lot of PD work is for past graduates - every license application or credentialing form a graduate ever fills out will need their PD to sign off. If you have all strong residents, there's minimal additional work. A weak or problem resident creates a LOT of additional work for all involved and negatively impacts everyone in the program.

So yes most everyone finishes training and goes on to practice. But if you have the ability to select better trainees, your job is much easier and your odds of getting a dud are lower.
 

In your experience, what are some things that have the highest "sensitivity" and "specificity" for detecting problematic residents? Low board scores, going to the Caribbean, generic LORs, low class rank, lackluster MSPE comments?
 
There’s the million dollar question! I don’t think anyone knows for sure.

Probably the biggest predictor I've seen personally is wildly disparate performance: someone with preclinical or clinical failures but then a 99th-percentile Step score, for example. Wide swings like that tend to suggest deeper underlying issues, so they're something I'm especially sensitive to.

Thankfully most students are good and deliver fairly consistent performance, for better or worse. With them you can usually be somewhat assured of similar performance moving forward.
 
The longitudinal model is really tough and really discouraging at first because of the impossibly steep learning curve in those first few months. However, you get the advantage of coming back around a few times as your confidence and competence grow, so you're set up for really strong improvement and great letters of rec.

Are these the final evals you're worried about, or formative/mid-block-type evals? It doesn't really matter: as frustrating as it sounds, read more and push for specific feedback on how to improve. Most of the time when you ask for specific feedback you'll get some generic nonsense about reading more, expanding the differential, evidence-based dx & tx, presenting a particular way, etc., but it at least shows you're interested in getting better. Make a note of it, and if you work with that evaluator again, try to incorporate whatever their pet peeve is. Better still, ask around and learn their pet peeves ahead of time.
 