That's interesting. But I don't know how much it helps me. What it says is that some residents run into trouble; if that trouble is communication / professionalism, then the chance of recovery is not good. If the problem is medical knowledge / ITE scores (which they report as two separate things, but are presumably correlated) or efficiency, then the chance of remediation is better.
This is not terribly surprising. Of all the deficiencies to have, medical knowledge is the easiest to fix. And efficiency usually gets better with time and training.
So if only I could tell who had communication problems prior to matching. If only there were some standardized test on communication... oh wait...
And, if your point was that I should take people regardless of their exam scores because I can often successfully remediate that, remember that my goal is to not have to remediate anyone -- then I can put my full energy into creating a better program!
Anyway, I think there are better articles out there:
Comprehensive Assessment of Struggling Learners Referred to a Graduate Medical Education Remediation Program.
Looking at the PubMed search for "medical resident remediation", there appears to be a bunch of pubs by the EM folks that I have not looked at yet.
My dean's letter didn't say all great things.
Nowadays it's pretty much required to be boarded. So the test is geared towards passing everyone within reason. Anesthesia has an 80% pass rate on orals, which is quite low. But 80 percent is still a decent number of people. And the ones who don't pass don't really take it seriously.
I'm not sure what knowledge base you're drawing on for your assessment that those who don't pass "don't take it seriously". I've graduated residents over the last 10+ years. I get a chance to work with them and watch their study habits for 3 years, see their ITE scores each year, and then their ABIM performance. Most of my residents pass. Some number have failed. A few didn't take it seriously -- "I've never failed anything before, I'm sure I'll do fine" is the usual refrain. But most take it very seriously, recognizing it's a huge problem if they fail -- and they still failed. I'm just one program, so I can't really generalize to everyone. But neither can you, and I think my experience base is likely more extensive than yours.
Regarding MSPE's, they are very hard to assess. Some schools state that they put all comments, unedited, into their MSPE's. Some pick and choose -- if something seems "out of place", they leave it out. But maybe that one person was the one telling the truth? And commonly there's a bunch of nice-sounding comments, and then one concern. And often the concern is vague and hidden: "XYZ's presentations started off as somewhat disorganized, but by the end of the rotation they were at the level expected of a medical student." What does this mean? Does it mean that the student is fine? Or did they start off terrible, and end up just barely good enough to pass? At my own school, I can tell you that either is possible with a statement like this.
What if everyone gets a 250/250 on their boards? Then what? Add another layer of exams to stratify that application pool further?
This is an interesting comment. It's of course very unlikely that everyone gets a 250. But, if Step is changed to P/F, this is EXACTLY what happens. Everyone who doesn't fail gets a Pass, and all the "scores" are the same. If that happens, options are to add another exam with a score (that might be specialty specific), or I'll just need to make decisions based on something else. It would mean that grades might hold more weight -- no longer could you get an HP in IM and a 250 on S2 and get an interview; now if you get an HP and a Pass on S2, you don't get an interview (because I put more weight on the grade). Note that I'm not saying this is good or bad -- all we're doing is shuffling the deck. I only have a maximum number of IV slots, and more applicants than can fit. I need to decide, in some way, whom to interview. Take away the USMLE, and I'll have to use something else. And it might be "school reputation". Or whether I know someone at your school I can call to find out the "real story". Or any number of other factors that are not under your control. Personally, I'd rather have something under your control. But, like, that's just my opinion?
Does IM do milestones? I know they’re done in Peds, but I’m not sure if it was an ACGME-wide thing or just the Peds part. Do you think if LCME started doing milestones that schools would report them accurately? Obviously it is in the school’s best interest for the student to match well, so there’s a conflict of interest, hence my curiosity.
Yes, we do. And honestly, I think they are completely useless. It looks like Peds did a much better job with their milestone development. There are already EPA's for graduating medical students (and EPA's are supposed to "fix all the problems with milestones", which they won't). But ultimately what I need is some sense of how well each student did. Rather than a single scale, it would be nice to know how they did academically (i.e. exams), clinically (taking care of patients), and in "other stuff", which might be research, community engagement, administrative, etc. (pick something, you can't do it all). But those last two are very difficult to measure and compare.