How programs are evaluating potential residents.

Hi,
Below you may find some insight into how very competitive programs are actually ranking and selecting their applicants and interviewees.
Good luck to those currently interviewing.
Il Destriero
_______________________________________

SELECTING APPLICANTS FOR RESIDENCY AT STANFORD
Alex Macario, MD, MBA
November 1st is a big day every year because Deans’ letters for graduating medical students are released to training programs via the Electronic Residency Application Service (ERAS). The resident selection committee can then look at completed applicant files to determine whom we want to invite to interview as the first step in the resident-selection process. Each file also includes USMLE scores, letters of recommendation, a personal statement, a CV, and medical school transcripts that contain both preclinical and clinical grades.
Predicting resident performance based on medical school performance—Scores on standardized tests of medical school knowledge (USMLE Step 1, which assesses learning of the basic sciences, and Step 2 CK, which assesses clinical knowledge) do correlate with scores on tests required during residency, such as the American Board of Anesthesiology in-training exam traditionally taken by residents in July. However, because Step 2 CK scores are often not available until after we have selected applicants to interview, we do not have all of the predictive data we would like. Of note, the content and timing of administration of the USMLE tests are under reconsideration (for more details see http://www.usmle.org/General_Information/review.html).
Even if we had all scores for standardized tests,
we would still lack vital information about
applicants: Who will excel at clinical performance?
Somewhat surprising to me is that the half dozen
studies published on this topic have found little if
any relationship between medical school
performance and subsequent residency
performance (defined by Webster’s as “the
execution of an action”). In fact, another Program
Director confided to me that some applicants
they ranked very high in the Match ended up
requiring remediation, whereas others who ranked
low ended up being Chief Residents.
If we analyze the generally poor association at
most institutions between how applicants are
rated by the selection committee and their
subsequent performance in residency, it appears
several forces are at work. For example, we know
remarkably little about the following:
• The key factors affecting physician
performance, even for trained practitioners,
• How physician performance should be
defined and measured (The ACGME has
outlined the core competencies in response to
this), and
• The effect of individual physician factors on
performance (e.g., personal learning style,
stress coping mechanisms, the impact of
financial debt, family dynamics and
circumstances, and individual personality
traits).
Further compounding the complexity we face are
these facts:
• The principle of the flat maximum, which states that when comparing the qualifications of people at the very top of the curve, the amount of inherent uncertainty in evaluating their credentials is larger than the measurable differences among candidates.
• Young physicians grow and mature at
different rates, some reaching their peak
during medical school, whereas others might
not flourish until well into the training
environment of a residency, or even until after
they graduate and enter independent practice.
• Residents face unexpected personal issues (ill
family member or financial pressures, for
example) that affect performance.
• Non-cognitive factors such as attitude,
motivation and interpersonal skills often
differentiate between the best and worst
performers in residency. Deficiencies in these
factors may not be uncovered during short
medical school rotations and certainly not by
results on standardized exams. In particular, the attributes we most value in our anesthesia graduates, such as professionalism, communication skills, and interest in lifelong learning and practice self-improvement (4 of the 6 core competencies defined by the ACGME), cannot be assessed by standardized exams.
Letters pose another challenge for the selection
committee. Because medical schools want the
best residencies for their graduates, deans’ strong
letters of recommendation contain a “wonderland
of positive adjectives,” instead of a more
analytical evaluation. Letters also reflect medical
schools’ different evaluation criteria. A further
complication is poorly written letters (In a 1998
survey of 124 US medical schools, 35% had
unacceptable dean’s letters) or uninformative
letters (Some schools designate >85% of their
class as outstanding or excellent; other schools do
not stratify their students).
We use both the reported grade distribution
within each medical school rotation (especially
those during the core clinical clerkships) and the
narrative comments from that rotation in a
comparative way among all the applicants from
the same school to provide some perspective.
Selection to Alpha Omega Alpha (AOA) during
junior year is a sign that the medical student has
performed superbly. However, adding to the
variability inherent in our evaluation process,
some schools wait until senior year to select AOA
members so these data may not be available to the
selection committee, while several schools
(including Stanford) do not have AOA at all.
Knowing which candidates will excel
especially in anesthesiology—More work is
needed to know with precision what is required in
a candidate to become an outstanding
anesthesiologist. It might be that the association
between medical school success and resident
performance varies by specialty. For instance, do
USMLE Part 2 CK/CS scores better predict
performance in primary care vs. procedural
specialties? A published study of radiology
residents indicates that a medical student who
excels at a less prestigious medical school is just as
likely to excel during his or her radiology
residency as a student with similar credentials
from an elite medical school.
The selection committee—The selection
committee at Stanford (current members are Drs.
Adriano, Brock-Utne, Cornaby, Gaeta,
Honkanen, Howard, Krane, Mackey, Mihm, Pearl,
Philip, Rosenthal, Saidman, and A. Shafer)
devotes valuable time and energy toward selecting
the best and brightest applicants, usually from the
top third of their medical school classes, who will
likely best achieve the Department’s goals to train
outstanding clinicians, future leaders in the
specialty, and clinician-scientists in patient care or
basic science research. Thus, we attempt to
identify applicants most likely to succeed in one
or more of those areas, recognizing that in a
resident class of 22 young physicians there will be
a variety of interests, training paths, and long-term
career goals. A strong class will be a diverse community of complementary interests, backgrounds, and goals.
The interview days—We have 144 total interview slots spread across twelve interview days in December and January (12 interviews per day). The day starts at 8 AM with 30-minute presentations, first by me and then by Dr. Pearl. During my time with the applicants, I describe the curriculum, the University, the four hospitals they will work in, and, most of all, the talented people in the department.
I emphasize our obligation to work hard every
day to continuously improve patients’ care, as well
as the Department’s obligation to provide
residents with the environment and resources
they need to achieve their professional potential.
Next, four different committee members
individually interview each applicant for 30
minutes. At noon, the applicant group has lunch
with current housestaff and then tours the
hospital. The selection committee meets over
lunch and discusses and scores each of that day’s
applicants.
Scoring criteria—The method we use for
ranking applicants for the match is different from
but complementary to that used to grade
applications before deciding whom to interview.
The selection committee assesses each applicant
according to the following criteria: medical school
attended, grades and class rank relative to peers,
the applicant’s understanding of and commitment to
the specialty of anesthesia (a very subjective
assessment), research experience and potential for
a future academic career, USMLE scores,
communication skills, compassion & humanism,
and an “other” category that might include
exceptional achievements, their personal
statement, or brilliant letters of recommendation.
From a scientific and methodological point of view, any scoring instrument used to evaluate applicants should minimize the variability in scores that arises from having multiple individuals involved in resident selection. “Halo error” is a potential concern, meaning that assessments of specific traits are influenced by the overall impression the candidate generates. Methods to overcome this error include making scoring metrics explicit and limiting the number of attributes for which a score is assigned. Ideally, specific descriptors ought to designate what level of performance is associated with a given score. This may help minimize confounding of scores by the identity of the interviewer, and drift within a single interviewer over the interview cycle. This year, under Dr. Brock-Utne’s leadership, the scoring sheet was revamped. Uniform definitions were derived for the subjective scales, clarifying the kind of applicants we want and the attributes we value while keeping the overall process efficient.
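As a purely illustrative aside, here is a minimal sketch of what such an anchored scoring sheet could look like; the attribute names, anchor wording, and 1–3 scale below are invented for the example and are not the actual Stanford instrument.

```python
# Hypothetical anchored rubric: each attribute carries a small number of
# explicit descriptors tied to scores, so different interviewers apply the
# same standard. Attribute names, anchors, and the 1-3 scale are invented.
ANCHORED_RUBRIC = {
    "communication": {
        3: "Clear, organized answers; engages the interviewer",
        2: "Generally clear but occasionally vague or rambling",
        1: "Difficult to follow; does not answer the question asked",
    },
    "commitment_to_anesthesia": {
        3: "Specific, well-informed reasons for choosing the specialty",
        2: "Genuine interest but limited exposure or generic reasons",
        1: "Cannot explain the choice of anesthesiology over other fields",
    },
    "research_potential": {
        3: "Sustained project ownership or first-author work",
        2: "Meaningful participation in a project",
        1: "No research experience or stated interest",
    },
}

def summarize(ratings: dict) -> float:
    """Collapse one interviewer's per-attribute scores into a single summary."""
    return sum(ratings.values()) / len(ratings)

# Example: one interviewer's ratings for one applicant.
print(round(summarize({"communication": 3,
                       "commitment_to_anesthesia": 2,
                       "research_potential": 3}), 2))  # 2.67
```

Keeping the list of attributes short and the anchors explicit is the same defense against halo error described above.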
Ideally, if three faculty members look at the same candidate’s file, then their scores should be similar; a rough sketch of this kind of consistency check follows the validity criteria below. Any reliable instrument must be independent of the date, time of day, or the order in which candidates are assessed. It must also meet four validity criteria:
• Produces results correlated with other objective
measures of performance (criterion validity)
• Predicts future performance (predictive
validity)
• Provides credibility in the eyes of the selection
committee (face validity)
• Identifies characteristics of a candidate
believed to be important (construct validity).
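To illustrate the point that several faculty members reviewing the same files should arrive at similar scores, the rough sketch below checks inter-rater consistency on invented data; the rater names, scores, and the choice of mean pairwise Pearson correlation (rather than, say, an intraclass correlation) are assumptions made for the example, not part of our actual process.

```python
# Rough sketch of an inter-rater consistency check on hypothetical data.
# Each list holds one rater's summary scores for the same five applicants.
from itertools import combinations
from statistics import correlation  # Pearson's r; requires Python 3.10+

scores = {
    "rater_A": [4.2, 3.1, 4.8, 2.9, 3.7],
    "rater_B": [4.0, 3.4, 4.6, 3.1, 3.5],
    "rater_C": [4.5, 2.8, 4.9, 2.7, 3.9],
}

# Mean pairwise correlation: values near 1.0 suggest the instrument behaves
# consistently across raters; low values point to halo effects or ambiguous
# scoring anchors.
pairs = list(combinations(scores.values(), 2))
mean_r = sum(correlation(a, b) for a, b in pairs) / len(pairs)
print(f"Mean pairwise inter-rater correlation: {mean_r:.2f}")
```

A low value on a check like this would argue for tightening the descriptors or recalibrating the interviewers rather than trusting the scores as they stand.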
We rely on the interview to assess
interpersonal skills, problem-solving ability,
initiative, commitment to a life of learning and
improvement, and predisposition to
teamwork.
By describing the complexities inherent in
selecting applicants, I hope the reader now
understands that those possessing the highest
grades may not always be those best suited to a
particular program or specialty, particularly given
the practical, applied nature of the skills, attitudes,
and knowledge of an anesthesiologist. I reiterate
that our goal is to recruit those with outstanding
qualifications most likely to meet the
responsibilities of the anesthesia residency and of
anesthesia practice after residency.
I look forward to meeting this year’s cohort of
applicants!
_______________________________________

I fully agree.
 
The effect of individual physician factors on
performance (e.g., personal learning style,
stress coping mechanisms, the impact of
financial debt, family dynamics and
circumstances, and individual personality
traits).

What does debt have to do with picking future residents?

Is debt a positive or a negative?

If you profess to be a Dave Ramsey fan and have gone to Financial Peace University, do you stand a better chance of getting into residency?
 
Fascinating

Having recently interviewed there, I understand why the interviewers asked me certain questions...

Were you asked to sign a release so they could check your credit report?
 
Hilarity. This comes out of Stanford, yet you still have to score >235 Step I to get an interview there. Ahhh... Pot calling the kettle como?
 

The >235 Step I requirement is definitely not true - I know of people who interviewed there w/ less than 230. However, the dept. chair was kind enough to mention that the avg Step I score of the applicants selected to interview was 237 and ~1/4 were AOA.
 
I agree - I'm interviewing there in January and have below a 230 on Step 1 (raised it quite a bit with Step 2, however). I'm not AOA either. I found that the programs really are looking at my entire application and not just one thing.
 
Well, since I'm involved in the candidate selection and interview process at my institution, I can tell you how we do it.

Once you are granted an interview, everyone is essentially "reset" to zero. That is, if you are given an interview, you are on a level playing field. So, it doesn't matter if you had a 250+ on Step 1, or a 190. If you get in the door, you are on equal footing.

All that programs care about are a few things:

1) Will this person "fit" in the program? Will they be happy here? Will they be a troublemaker?

2) Will you be able to pass your boards at the end of residency?

3) Will you have the right personality and core clinical competency to be an effective and skilled consultant and practitioner?

It's just that simple. And, the interview process is structured to make an attempt to determine these things in a very short timeframe.

So, if you are a cocky a-hole who comes interviewing at our program as a "back-up" plan, if you demonstrate that you struggle with standardized tests, or if you show that you think you are going to "rule the room from the cockpit," you're probably going to get placed low in the rank list, if you make it at all.

We do a pretty good job of weeding out the potential problems, but inevitably one or two squeak through each year. No one wants to deal with problems in a program, but part of residency is to bring you up to standard. Rarely, someone gets kicked out after they get a spot and start their training. That's a failure of the program, not necessarily the individual. But some people put on a good show and somehow make it in.

No one wants someone who's going to bitch and moan the entire three years, show up late for things, not "get it" when it comes to clinical care, and not be willing to pull their own weight. A lot of the interview process, including the mild "pimping" you might get on what you put in your app, is structured to make a determination of these things. So, just remember that you are on display. How you act and respond under the duress of an interview speaks loudly to how you will act as a resident in similar situations, and that is the barometer we use to make that determination in a short timeframe. It may not be a perfect system, but it works pretty damn well at figuring out who we think we can mold and who we think will try to mold us... the latter ain't going to work in residency.

-copro
 