Now we're talking. The problem is that the SLOE is just a substitute for a LOR. CORD publishes the SLOE, but it's AAMC/ERAS that runs the system that compiles the data for applications. The only way to sort in ERAS with respect to SLOEs is by the number of EM LORs. The SLOEs themselves are just uploaded as PDFs like any other LOR. It would be awesome if ERAS let you filter based on the content of the SLOEs instead of the number of SLOEs, but unfortunately it does not.
Well, sure, but if AAMC/ERAS can find a way to incorporate the SVI score into ERAS, they should have no issue finding a way to incorporate the "average SLOE ranking" into ERAS, assuming we can find a group willing to rate the SLOEs, distill them into an ASR, and supply that information to the AAMC. Just because it's not done that way now doesn't mean it wouldn't be a worthwhile initiative to explore.
Some more thoughts on the SVI:
SAEM's statement on the SVI says, "... results ... are promising. ... The best way to determine if the tool adds value to the residency selection process is through the operationalization of the tool. We believe strongly that residency leadership teams must have an opportunity to actually use the tool within their selection processes in order to properly assess its viability."
In what way is this the "best way" to determine if it adds value? What if the residency leadership teams improperly assess its viability? What if they put too much faith in it? What if they act on bad data and the integrity of the matching process is compromised?
Seems to me that the fair and reasonable "best way" to establish the viability/validity of the SVI would be to make the SVI mandatory but not release the score to programs for the first few cycles. File the scores away for a few years, along with the rest of the applicant data, and ultimately interrogate the data to see if there are correlations between SVI scores and resident performance. If correlations exist, great, I'm sold, let's move forward with implementing it. But in the meantime, let's not operationalize something that hasn't been fully vetted yet! The only vetting they've done is having last year's applicants voluntarily participate (without sharing their scores with programs). Anyone else at all concerned that the results are skewed by the selection bias inherent in voluntary participation?
Meanwhile, you know what HAS been shown to correlate well with resident performance?
SLOEs!
Side note: In case anyone missed it, the AAMC says they are "still evaluating feedback from our community and assessing the feasibility of providing applicants with their numerical score in a way that protects the security of the interview. The AAMC will announce a final decision on before [sic] the ERAS application season opens in early June."
Needless to say, I hope they decide to make the score available to applicants. I don't know why they haven't decided this already. My cynical guess is that the decision has in fact been made, and the answer is no, but they're trying to soften the blow by waiting until we've all adjusted to this news.
On the bright side, I'm reassured that computer scoring will at least not be in play for the Class of 2018, and that they're looking at it carefully to confirm its viability before potentially deploying it for future cycles:
"What role will computer scoring play in delivering Standardized Video Interview scores to program directors?
If the Standardized Video Interview moves beyond the pilot stage and expands into specialties that have large applicant pools, it is unlikely that the AAMC could resource enough professional raters to score the interviews and make them available to program directors by mid-September when ERAS opens. As part of our current research, the AAMC is exploring the possibility of computer scoring as a supplement to human scoring. During this operational pilot for the 2018 ERAS application season, the AAMC is conducting a parallel research project (without implications on applicants’ scores) to explore the possibilities of computer scoring. For the operational pilot, the scores delivered to residency program directors would be provided by human raters only."