For those of you that have served on internship admissions committees...


psych1420
First off, congrats to everyone who just submitted their ranking lists. What a terrific accomplishment for all of us, and what a month from hell this has been.

Now that we are done, I am starting to wonder about something based on a response by someone in a recent thread: How do internship sites make their rankings?

To what extent have sites already "pre-ranked" our applications, so that the interview is primarily assessing interpersonal skills (e.g., are we annoying), fit, and so on? Or is it tabula rasa for most sites, with interview performance the primary determinant of site ranking? Or is it some ratio of both? Could anyone comment on what this process looks like at their sites (aka what are they doing with those pics they took of us; do they really discuss us as a group)?

I imagine there is probably great variability between sites, but I'm curious what the range is like. I could certainly tell some of my interviewers had not read my materials and seemed to know nothing about me, whereas others had clearly pored over my CV and essays. I am especially curious about this question for sites that held "open house/interview" days with only two or so brief, "30 minute" interviews; it seems like, without some "pre" score, they would have very little data on which to base their impression of us.

 
At my residency, they take more of a 'tabula rasa' approach like you described. Every applicant is asked the same standardized questions (with room for follow-ups if the conversation gets interesting), and their individual answers are rated out of 5. Then a bunch of interpersonal factors are rated out of 5. Everything is summed and ranked from high to low. If there are significant ties, they're broken by discussion in the training committee meeting. It sounds like major ties don't happen super often, and there's enough nuance that the rank order list tends to shake out much like our own rank order lists do: a "great" tier, a "good" tier, a "good enough" tier, and the odd few who don't get ranked.
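If you want to picture the arithmetic, it boils down to something like this rough sketch (purely illustrative; the question labels, the interpersonal factors, and every number here are made up, not the site's actual form):

```python
# Illustrative sketch of a "rate everything out of 5, sum, and sort" ranking.
# Question labels, interpersonal factors, and all numbers are made up.
from collections import defaultdict

applicants = {
    "Applicant A": {"q1": 4, "q2": 5, "q3": 4, "warmth": 5, "professionalism": 4},
    "Applicant B": {"q1": 5, "q2": 4, "q3": 4, "warmth": 4, "professionalism": 5},
    "Applicant C": {"q1": 3, "q2": 3, "q3": 4, "warmth": 3, "professionalism": 4},
}

# Sum each applicant's ratings, then group by total to spot ties.
totals = {name: sum(ratings.values()) for name, ratings in applicants.items()}
by_total = defaultdict(list)
for name, total in totals.items():
    by_total[total].append(name)

# Rank from highest total to lowest; flag ties for committee discussion.
for total in sorted(by_total, reverse=True):
    names = by_total[total]
    note = "  <- tie: break by training committee discussion" if len(names) > 1 else ""
    print(total, ", ".join(names) + note)
```

The point being: most of the ordering falls straight out of the totals, and the committee only has to argue about the handful of genuine ties.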
 
I'm at a VA. We broke into teams and evaluated applications. Those who reviewed the files ranked folks ahead of time. Not everyone was reviewed by every team, but that was how we managed to cull the mass (this may also be why some interviewers seemed to have read more than others; some people really had read more than others), because those teams were responsible for cutting down to a final interview list. Then all the teams met and talked through any 'maybe we should interview' cases to come to a full group decision. When folks came to interview in person, we watched for signs of whether we could stand them interpersonally (talking over people, interrupting, putting down others, etc.) and we listened for inappropriate behaviors and awkward disclosures (you would not believe the boundary-crossing things people have said). Then everyone ranked again based on the interview, and we combined those two ranks. Frequently people stay roughly where they were. Sometimes people shine through. Other times people go from moderately okay rankings to 'nope, nope, nope.'
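In spirit, the "combine those two ranks" step is something like this toy sketch (the averaging rule and the names are my assumptions; the actual combination rule could easily differ):

```python
# Toy sketch of combining a pre-interview (file review) rank with a
# post-interview rank. Averaging rank positions is an assumption; the
# actual combination rule at any given site may differ.

file_rank      = {"Applicant A": 1, "Applicant B": 2, "Applicant C": 3}
interview_rank = {"Applicant A": 3, "Applicant B": 1, "Applicant C": 2}

combined = {name: (file_rank[name] + interview_rank[name]) / 2 for name in file_rank}

# Lower combined value = better final position.
for position, name in enumerate(sorted(combined, key=combined.get), start=1):
    print(position, name, combined[name])
```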

Yes, we discuss it all as a group. Our site didn't take pictures, but I imagine it's to help keep track of who folks are and make sure everyone is on the same page. I didn't find this too hard without a picture, but I would scribble down notes on each person as the day went on. Also, keep in mind that even if we did read your application, it was months before we actually saw you.

We're very aware that the best predictors of outcomes are not interviews.
 
This is such a good thread! I'm the sort of applicant who asked training directors how they made decisions, and the variety of responses I got was interesting.
 
I'm not a TD but I've been involved in interviewing and ranking internship applicants.

At our site there was no pre-ranking of applicants who were invited to interview. Informally, we of course reviewed application materials beforehand, and from that some "favorites" or "hopefuls" might have emerged. But the interview was a good opportunity to size up applicants interpersonally and make sure they were able to grasp what our program was about and relate it to their training and career goals.
 
At my internship we helped review applicants and participated in the interviews. We had a very formalized multi-stage methodology of ranking applicants prior to interview and after that used complex algorithms weighing multiple factors including...
Naw, this was what we really did. ;)
[image: eyes-covered-darts.jpg]
 
A site I was at had two people independently rate each application on a number of factors (e.g., relevant experience, desired training). If there was a significant point discrepancy, they attempted to resolve it between the two of them; if that did not work, a subset of the training committee would review (there typically wasn't one, since we were given anchors for what different scores should look like). Then those scores were ordered, and the top X were invited to interview.

Then each applicant had three interviews, each with a different pair of psychologists. Again, specific anchors and the like led to numerical scores. These interview scores were added to the application scores, and rankings were determined from the totals. The training committee had a big meeting to discuss major concerns, break ties, etc. Though it was possible to compensate for the application via the interview or vice versa, there was not a whole lot of major movement from the application scores. Occasionally someone looked great on paper but tanked the interview and their ranking fell significantly, but by and large people hovered around the same positions they came in with.
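For what it's worth, the mechanics above reduce to something like this sketch (illustrative only; the discrepancy threshold, the invite count, and all the scores are assumptions, not the site's rubric):

```python
# Illustrative two-stage scoring: two independent application raters with a
# discrepancy check, then application + interview totals for the final ranking.
# The threshold, invite count, and every number here are assumptions.

DISCREPANCY_THRESHOLD = 3   # assumed gap that triggers reconciliation
INVITES = 2                 # assumed "top X" invited to interview

application_ratings = {      # (rater 1, rater 2)
    "Applicant A": (18, 17),
    "Applicant B": (15, 19),  # large gap -> raters (or committee) reconcile
    "Applicant C": (12, 13),
}

resolved = {}
for name, (r1, r2) in application_ratings.items():
    if abs(r1 - r2) >= DISCREPANCY_THRESHOLD:
        print(f"{name}: discrepancy of {abs(r1 - r2)} points, reconcile first")
    resolved[name] = (r1 + r2) / 2   # assumed: average once reconciled

# Top X application scores get interview invites.
invited = sorted(resolved, key=resolved.get, reverse=True)[:INVITES]

# Placeholder interview totals (e.g., the sum of three scored interviews).
interview_scores = {"Applicant A": 24, "Applicant B": 26}

final_totals = {name: resolved[name] + interview_scores[name] for name in invited}
print("Final order:", sorted(final_totals, key=final_totals.get, reverse=True))
```

In this toy example the interview does shift the order a bit, which matches the experience that compensation is possible but big swings from the application scores are rare.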
 
We sit down and discuss how we perceived each applicant, then start ranking them relative to each other until we come to a consensus. We consider several areas.
 
This was for one specialized track.

We split up the applications, reviewed them, and ranked them. Then we gave our top-contender list to the other raters, who looked over those applications, and we came to a consensus and compiled a final list. We looked for things like experience in the VA, experience with certain EBPs, research accomplishments (this was a research-y site), etc. After the interviews we would adjust the list somewhat, but generally it didn't differ that much.

This was a site that had rejected me for an internship interview, so it was pretty weird being on the other side for that reason alone.
 
Interesting. Follow-up question for you: do the specialty tracks usually have the same ratio of spots to interviewees as the general track? For example, if you're interviewing 30 for 3 general-track spots, do sites usually then interview 10 for 1 specialty-track spot? I ask because my top 3 sites are all specialty tracks with 1 or 2 spots available, which my mind has found an easy thing to stress and worry over.
 
I can't answer that since this site has all specialty tracks. But I would imagine that the ratio is the same.
 
Adding that from what I know of a site like this (all specialty tracks), each specialty track truly does review, evaluate, and rank separately.
 
I've been on an internship committee in a counseling center for two cycles, and got the sense that the very strong and very weak candidates were typically easy to identify, while we had much more difficulty with the in-between rankings. Post-interview, we did not seem to look at the paper applications much, with the rare exception being a concern about the number of clinical hours an applicant had. APA does require some uniformity with regard to interview questions, so it isn't just an assessment of interpersonal skills, although that is certainly a part of it. One of the big underlying questions we had was how flexible the interviewees were in thinking about cultural competency, outreach, ethics, case conceptualization, theoretical orientation, etc. That came from past experience that inflexible trainees often had difficulty thriving in a counseling center setting with many competing demands.
 
Highest preference was given to students from schools/labs/mentors we'd had good interns from in the past, with a good reference from that mentor. (It's very interesting how overall positive letters of reference from the same person can subtly convey differences in opinion; things like "in the top 10% of students I've ever worked with" vs. "top 5%" are meaningful when the rest of the letter is basically identical.) In a typical year, we'd fill 50-75% of our slots with these applicants. Students from known programs were less of a wildcard: we knew the strengths and limitations of their training and could plan accordingly regarding seminars, supervision needs, etc. We often knew who was going to apply long before receiving the applications, and had often been introduced to the student beforehand (such as at an ABAI or AABT conference).

Otherwise, direct experience with our type of work (behavioral consultation, special education, adult outpatient) was very important. We had major and minor placements that lasted all year, so an ability to fit well with a minor placement was another factor. Applicants were generally first ranked by their major choice, and then we'd discuss minor placement options. There were a lot of applicants who might have a strong background and interest in, say, behavioral work in a special education school, but no experience with community mental health. If an applicant could do both, it made things easier for us, and they were often ranked higher.

Not much stake was put in the interview. Every now and then a faculty member would bring up something about an applicant's interview performance, choice of outfit, behavior at lunch, etc. Someone would quickly point out the lack of research supporting the use of such factors in decision making. In more heated sessions, someone might also point out the concerned faculty member's ill-fitting suit or spillage of soup at the company Christmas party, and suggest that it was totally unrelated to said faculty member's clinical proficiency.
 