Interview:Acceptance ratio: report on # interviews with known results


What is your post interview acceptance rate so far? (# acceptances/# interviews)

  • 0-9%

    Votes: 25 11.1%
  • 10-19%

    Votes: 3 1.3%
  • 20-29%

    Votes: 13 5.8%
  • 30-39%

    Votes: 16 7.1%
  • 40-49%

    Votes: 16 7.1%
  • 50-59%

    Votes: 35 15.6%
  • 60-69%

    Votes: 26 11.6%
  • 70-79%

    Votes: 18 8.0%
  • 80-89%

    Votes: 12 5.3%
  • 90-100%

    Votes: 61 27.1%

  • Total voters: 225

HumbleMD

hmmmm...
Joined: Sep 22, 2006
Hi guys. People often ask about post-interview acceptance rates when deciding where, and to how many schools, to apply. It's been asked before, but we still seem to assume an average 50% rate, and I've been starting to wonder if it's more bimodal (people either get in almost everywhere they interview or almost nowhere). Yes, it's a rough estimate with a million confounding factors and will result in statistically questionable results but I'd love to see if my prediction is correct.
 
Good thread... but you may want to repeat it after the March decisions come out. As it is, I voted by including only those results I already had.
 
Yes, it's a rough estimate with a million confounding factors and will result in statistically questionable results but I'd love to see if my prediction is correct.

i don't understand. if your results are from a biased sample of applicants, how will you know if your prediction is correct?

edit: i'm assuming your bimodal prediction is meant to generalize to the entire applicant population.
 
i don't understand. if your results are from a biased sample of applicants, how will you know if your prediction is correct?

edit: i'm assuming your bimodal prediction is meant to generalize to the entire applicant population.

Hi guys. People often ask about post-interview acceptance rates when deciding where, and to how many schools, to apply. It's been asked before, but we still seem to assume an average 50% rate, and I've been starting to wonder if it's more bimodal (people either get in almost everywhere they interview or almost nowhere). Yes, it's a rough estimate with a million confounding factors and will result in statistically questionable results but I'd love to see if my prediction is correct.

Oh booh. Why must people on SDN always find faults with any poll put out there? It's meant to be mostly for the SDN population that asks questions hinging on this rate. I'll let you conduct an SRS with N>40 for me. Shall I contact AMCAS so you can get a list of phone numbers to call?

And I thought I threw in the qualifier that these were going to be very rough statistics. It's not like I'm going to publish a paper or bet my life on it. Jeesh, take a chill pill.
 
Oh booh. Why must people on SDN always find faults with any poll put out there? It's meant to be mostly for the SDN population that asks questions hinging on this rate. I'll let you conduct an SRS with N>40 for me. Shall I contact AMCAS so you can get a list of phone numbers to call?

And I thought I threw in the qualifier that these were going to be very rough statistics. It's not like I'm going to publish a paper or bet my life on it. Jeesh, take a chill pill.

ok, then in that case you can't really know if your "prediction is correct". the sdn population that is curious about this rate is typically curious about this rate for the entire applicant population, because they'd like to use it to assess their own chances. any poll on sdn that tries to make this generalization is using a biased sample that should produce biased results. my problem isn't with a biased poll that can give us fun numbers to look at, it's with the language some people use to describe the results. i'd love to hand those poll starters my chill pill.
 
I like the idea Humble

My experience, fwiw:

I was 50% post interview (2 acceptances, 2 waitlists)

However, I declined 5 interviews at schools that were less competitive than the 4 I interviewed at. I know nothing is a guarantee, but I am pretty sure my % would be higher had I interviewed at those schools.
 
ok, then in that case you can't really know if your "prediction is correct". the sdn population that is curious about this rate is typically curious about this rate for the entire applicant population, because they'd like to use it to assess their own chances. any poll on sdn that tries to make this generalization is using a biased sample that should produce biased results. my problem isn't with a biased poll that can give us fun numbers to look at, it's with the language some people use to describe the results. i'd love to hand those poll starters my chill pill.

😴
statistics are nothing to be afraid of. Indeed, people need to interpret results correctly and not give them too much weight, but sometimes it's the best we can do. But honestly, has there been a poll you haven't denigrated on SDN? and I'm still waiting on the results from your properly attained SRS...😉
 
the sdn population that is curious about this rate is typically curious about this rate for the entire applicant population, because they'd like to use it to assess their own chances.

Actually, that just means this poll is useful.

If SDNers use it to assess their own chances, then this poll is perfect for that, since anybody using it will be an SDN member, and thus part of the biased sampling pool.

On the other hand, if the OP is planning on publishing this to non-SDNers (which he explicitly stated he is not) then there would be problems.

Otherwise, I think this is a perfectly sound poll with one small exception.

The 0%'s and 100%'s may be people with zero or only one interview, which somewhat invalidates their responses.

Then again, it's an internet poll, which means people may not even be voting accurately/honestly, so let's just have some fun, k?🙂
 
😴
statistics are nothing to be afraid of. Indeed, people need to interpret results correctly and not give them too much weight, but sometimes it's the best we can do. But honestly, has there been a poll you haven't denigrated on SDN? and I'm still waiting on the results from your properly attained SRS...😉

i agree, statistics can be misleading, however in more likely cases, they are more useful than not
 
I like the idea Humble

My experience, fwiw:

I was 50% post interview (2 acceptances, 2 waitlists)

However, I declined 5 interviews at schools that were less competitive than the 4 I interviewed at. I know nothing is a guarantee, but I am pretty sure my % would be higher had I interviewed at those schools.

Not necessarily. If you indeed considered these schools "lesser" then the "backup school" effect may have taken place, and they may have figured out you're probably not going to go.
 
i am batting 1.000, baby. i have turned down most of the late interviews, though, since those would be for the waitlist only.
 
😴
statistics are nothing to be afraid of. Indeed, people need to interpret results correctly and not give them too much weight, but sometimes it's the best we can do. But honestly, has there been a poll you haven't denigrated on SDN? and I'm still waiting on the results from your properly attained SRS...😉

in this age of cell phones and no-call lists, phone sampling has largely gone out the window, as it's strongly susceptible to bias. but whatever.

the more important point is that if aamc were willing to give me confidential applicant data for 40 applicants, then i would actually do the best thing possible and ask them to give us the *actual* distribution for the *entire population* of applicants! they have the numbers for the entire population, after all. if they were willing to show them in a distribution, then it would be much better, easier, and cheaper than *any* sample estimate. last time i looked, the stats they show don't do this. maybe they should.
 
Nine interviews but only six known results. 3 acceptances and 3 waitlists. So 50 percent at this point.
 
The more fundamental problem I see with possible misinterpretations of the data is that, due to the small number of interviews relative to the 10-bin scale (100% split into bins of 10 percentage points each), there are certain statistical regions that most applicants are unable to access.

Say, you have 2 interviews... you can only access 0-9, 50-59, and 90-100.

3 interviews? 0-9, 30-39, 60-69, 90-100

4 interviews? 0-9, 20-29, 50-59, 70-79, 90-100

5 interviews? 0-9, 20-29, 40-49, 60-69, 80-89, 90-100

As you can see, up to the 5-interview stage (not bothering to go on to higher levels), you still have not accessed the 10-19 region at all. Hence, there will be an inherent dip in the distribution in the 10-19% zone, since all individuals with 5 or fewer interviews cannot enter it.

Even worse, the regions that will be most over-represented are the ones accessible to all users, such as 0-9% and 90-100%. This will inherently create a bimodal distribution that distorts the "real" results. Even 50% is selected against, since anybody with an odd number of interviews can never enter that zone (until they reach something like 7 interviews, in which case 4/7 lands in the upper 50s).

Hence, the natural distribution for people with 5 or fewer interviews (assuming no real correlation) will be biased to look like:

||||| 0-9
10-19
|| 20-29
| 30-39
| 40-49
|| 50-59
|| 60-69
| 70-79
| 80-89
||||| 90-100

Quantum mechanics anyone?

*Sorry for my excessive ranting, it's just that there are a relatively large number of published papers in clinical research that make inappropriate conclusions on biased data such as this set, not due to small sample sizes, but due to the number of possible responses available from the participants. As a result, there are decisions being made right now in the medical field that are based on completely inaccurate statistics, and are probably resulting in unnecessary patient deaths.
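For anyone who wants to check the reachable-bins argument above, here's a quick Python sketch (the function name and the bin encoding are mine, purely for illustration; the lower edge labels each bin, with 90 standing for the poll's 90-100% category):

```python
from fractions import Fraction

def accessible_bins(n_interviews, bin_width=10):
    """Return the sorted list of percentage bins reachable with
    n_interviews interviews. A bin is labeled by its lower edge,
    e.g. 50 means the 50-59% bin; 100% is folded into the top
    bin (90-100%), matching the poll's categories."""
    bins = set()
    for accepted in range(n_interviews + 1):
        pct = 100 * Fraction(accepted, n_interviews)  # exact rate
        lower = min(int(pct // bin_width) * bin_width, 100 - bin_width)
        bins.add(lower)
    return sorted(bins)

for n in range(2, 6):
    print(n, accessible_bins(n))
```

Running it reproduces the lists above: with 2 interviews only 0-9, 50-59, and 90-100 are reachable, and no one with 5 or fewer interviews can land in 10-19.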
 
Not necessarily. If you indeed considered these schools "lesser" then the "backup school" effect may have taken place, and they may have figured out you're probably not going to go.

Haha, what? Can you explain this so called "backup school effect"?
 
The more fundamental problem I see with possible misinterpretations of the data is that, due to the small number of interviews relative to the 10-bin scale, there are certain statistical regions that most applicants are unable to access. [...]

Quantum mechanics anyone?

You have a lot of time on your hands 😉

My numbers are really distorted. I have been accepted to 5/7 (waitlisted at 2), but the schools I have yet to hear from are way out of my league and therefore almost certainly rejections.
 
The more fundamental problem I see with possible misinterpretations of the data is that, due to the small number of interviews relative to the 10-bin scale, there are certain statistical regions that most applicants are unable to access. [...] Hence, there will be an inherent dip in the distribution in the 10-19% zone, since all individuals with 5 or fewer interviews cannot enter it. [...]

i think an earlier poll, from about a month or two ago, used 5% increments for each category. op, i say we do a poll after may 15 using 5% increments.
 
Actually I have a midterm in 3 hours😛 The fact that I'm on here making posts like these makes me sad🙁

You're going to Mayo...you're not allowed to be sad anymore 😛
 
Haha, what? Can you explain this so called "backup school effect"?

It's probably not the same with everybody, but if I went into a school that I thought I had a chance getting into or, just really wanted to go to, I would enter my interview with a vibrant smile on my face that said so.

If I were at a backup school, after seeing shoddy facilities, finding out some of the students were on their third try at the USMLEs, and seeing the vacant expressions of disregard on the faces of the interview day coordinators, I might enter my interview with a look of worry that I might end up there😛

Not that I interviewed at any of the latter places (or even that such places exist in US schools), but I can imagine it happening😛 Then again, I'm not very good at faking anything😛
 
i think an earlier poll, from about a month or two ago, used 5% increments for each category. op, i say we do a poll after may 15 using 5% increments.

Er, well technically that would increase the number of inaccessible regions on the ~histogram and make the results even more erratically skewed...
 
Er, well technically that would increase the number of inaccessible regions on the ~histogram and make the results even more erratically skewed...

i don't understand how this matters. a person can only occupy one slot in the histogram, anyway, and this is a function of both #interviews and #acceptances.

but i see that what i wrote didn't address your original concern. my concern was actually that smaller increments give a smoother histogram with less rounding noise.
 
Not that I interviewed at any of the latter places (or even that such places exist in US schools), but I can imagine it happening😛 Then again, I'm not very good at faking anything😛

I see what you are saying, but I am really good at faking things 😉
 
Haha, well, sad for my midterm that I'm about to take, not about life in general😛

You are not allowed to be sad at Mayo? What about this school makes everyone so happy? Also, I've heard that Mayo gives out a lot of financial aid... can someone tell me what kind of school Mayo is?
 
3/3 - 100%

But I turned down interviews after I had my first acceptance in October, so there were 2 more interviews I could have gotten rejected from in November, and one in February.
 
3/3 - 100%

But I turned down interviews after I had my first acceptance in October, so there were 2 more interviews I could have gotten rejected from in November, and one in February.
I have a better acceptance rate.

8/8= 100% which is better than 3/3.

I also declined 3 other interviews......or should I say 3 other acceptances.
 
I have a better acceptance rate.

8/8= 100% which is better than 3/3.

I also declined 3 other interviews......or should I say 3 other acceptances.

Go sulk over your A-'s some more.
 
Go sulk over your A-'s some more.

Ignore him, he's a troll. He also claims to have been accepted to Yale, which is quite a trick since no one has been accepted there yet.
 
Ignore him, he's a troll. He also claims to have been accepted to Yale, which is quite a trick since no one has been accepted there yet.
Right........because I am sure the adcoms consult with you, personally, before they send any acceptances out.🙄
 
*Sorry for my excessive ranting, it's just that there are a relatively large number of published papers in clinical research that make inappropriate conclusions on biased data such as this set, not due to small sample sizes, but due to the number of possible responses available from the participants. As a result, there are decisions being made right now in the medical field that are based on completely inaccurate statistics, and are probably resulting in unnecessary patient deaths.

Oh brother. It's a flipping post on an internet forum. You people are why the majority of Americans dislike math and don't even attempt to understand the most basic concepts of statistics and probability. Even with the selection bias, we can still see some interesting trends, such as the fact that there indeed is a heavy number of people with near-perfect acceptance rates.

Also, I was hoping people would just reply, and maybe have comments, but not mention their rate in a post. I don't want this to turn into a pissing match/brag-fest. Brag-fests make me want to :barf:
 
so, will you tell us your findings afterwards then? I'm itching to know.
 
You people are why the majority of Americans dislike math and don't even attempt to understand the most basic concepts of statistics and probability. Even with the selection bias, we can still see some interesting trends, such as the fact that there indeed is a heavy number of people with near-perfect acceptance rates.

hey, wait a second. i thought that i was what's *wrong* with america because i questioned stuff. remember? now i'm responsible for americans not knowing math, also? jeez, this is a lot to carry.

and the whole point of biased results is that the results aren't accurate, i.e. shouldn't be interpreted.
 
hey, wait a second. i thought that i was what's *wrong* with america because i questioned stuff. remember? now i'm responsible for americans not knowing math, also? jeez, this is a lot to carry.

and the whole point of biased results is that the results aren't accurate, i.e. shouldn't be interpreted.

Almost all statistically sampled results are biased (I have yet to see a faultless, perfect social survey). Real-world statistics and the ideal ones from a stats textbook are very different. One just needs to be aware of the biases present and interpret results appropriately within that known context.
 
i think an earlier poll, from about a month or two ago, used 5% increments for each category. op, i say we do a poll after may 15 using 5% increments.

Using 5% increments and maybe extra slots to differentiate between the 0s, i.e. 0 of 1, 0 of 2, etc.
 
Oh brother. It's a flipping post on an internet forum. You people are why the majority of Americans dislike math and don't even attempt to understand the most basic concepts of statistics and probability.

Correction. The people who dislike math and don't even attempt to understand the most basic concepts of statistics and probability are the reason why "us people" must exist.
 
i don't understand how this matters. a person can only occupy one slot in the histogram, anyway, and this is a function of both #interviews and #acceptances.

That's exactly why it matters😛

If you still don't understand the argument, just do a test run based on fake randomly sampled data, and you'll see the enormous selection bias arise.

Or you can just observe the selection bias appearing already from the data collected on the forum.

[attached image: selectionbias.gif]


This doesn't mean that the poll itself is bad, it's a great poll in terms of collecting data! (percentage of success is much more relevant than only invites or only acceptances in many cases)

It's just that the data needs to be normalized prior to assessment. At first glance, one might believe that the distribution is tri-modal, when actually, post normalization, it is approximately a bell curve.
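As a concrete version of the "test run based on fake randomly sampled data" being suggested, here's a small Monte Carlo sketch. The parameters (a flat 50% true acceptance rate and 1-5 interviews per voter) are my own assumptions for illustration, not numbers from the thread:

```python
import random
from collections import Counter

def poll_histogram(n_voters=10_000, max_interviews=5, true_rate=0.5, seed=0):
    """Simulate voters who all share the same true per-interview
    acceptance probability, then bin their observed rates into the
    poll's ten categories (keyed by lower edge; 90 = the 90-100% bin)."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_voters):
        n = rng.randint(1, max_interviews)               # interviews attended
        accepted = sum(rng.random() < true_rate for _ in range(n))
        pct = 100 * accepted / n
        counts[min(int(pct // 10) * 10, 90)] += 1
    return counts

hist = poll_histogram()
for lower in range(0, 100, 10):
    label = f"{lower}-{lower + 9}" if lower < 90 else "90-100"
    print(f"{label:>7}: {'|' * (hist[lower] // 250)}")
```

Even though every simulated voter has the identical 50% true rate, the histogram shows peaks around 0-9%, 50-59%, and 90-100% and an empty 10-19% bin: the apparent multi-modality is partly a binning artifact, exactly as described above.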
 
It's just that the data needs to be normalized prior to assessment. At first glance, one might believe that the distribution is tri-modal, when actually, post normalization, it is approximately a bell curve.

I'm just curious: how would you normalize this? I can follow your logic and I think it's very sound. However, I'm not sure how one would set about normalizing it since my stats reeeeeeeeeeeeeeeeally suck👎
 
I'm just curious: how would you normalize this? I can follow your logic and I think it's very sound. However, I'm not sure how one would set about normalizing it since my stats reeeeeeeeeeeeeeeeally suck👎

Well, normalization in this case would require more data.

If you do the whole "sampling bias" thing I was doing earlier, going up the ranks of people with 0, 1, 2, 3, 4, 5 interviews and determining how many categories they fall under, as you approach infinity you will eventually get an even distribution.

Problematically, on the way to infinity the sampling bias will be constantly changing (even as you exceed 10 interviews, you get into regions where you have multiple hits in a single histogram bin), so you would need to find out what the range of interview counts is (minimum = 0, maximum = the most interviews any individual taking the poll has). You could then approximate the sampling bias by doing what I did in the earlier post for 0 up to that maximum (there are actually some guidelines for this, but I've long since forgotten them). In this case, I would probably cap the interview maximum at around 15. Although there are certainly plenty of people who received or possibly attended more than 15 interviews, they are a relatively small population that can be ignored for something like this.

Then divide each category by its relative weighting.

This, of course, assumes a completely random sampling of individuals, which is never attainable, but statistics is all about "good enough" measures.
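That recipe could be sketched roughly like this in Python (the uniform weighting over interview counts is my assumption; the post doesn't pin the exact scheme down):

```python
def bin_weights(max_interviews=15, bin_width=10):
    """Count how many (interviews, acceptances) outcomes land in each
    percentage bin, treating every interview count from 1 up to
    max_interviews as equally represented. Bins are keyed by their
    lower edge; 100% is folded into the top bin, as in the poll."""
    weights = {lower: 0 for lower in range(0, 100, bin_width)}
    for n in range(1, max_interviews + 1):
        for accepted in range(n + 1):
            pct = 100 * accepted / n
            lower = min(int(pct // bin_width) * bin_width, 100 - bin_width)
            weights[lower] += 1
    return weights

def normalize(poll_counts, max_interviews=15):
    """Divide each bin's raw vote count by its accessibility weight."""
    w = bin_weights(max_interviews)
    return {b: poll_counts.get(b, 0) / w[b] if w[b] else 0.0 for b in w}
```

With max_interviews=5, bin_weights reproduces the tally from the earlier ASCII histogram exactly (5, 0, 2, 1, 1, 2, 2, 1, 1, 5 across the ten bins); feeding normalize the thread's current tallies (25, 3, 13, 16, 16, 35, 26, 18, 12, 61) would then give the adjusted shape.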
 
Wouldn't it be nice if someone could just put all this stuff in a book? Oh wait, they did. But I'm sure an SDN poll is much more reliable.
 
Wouldn't it be nice if someone could just put all this stuff in a book? Oh wait, they did. But I'm sure an SDN poll is much more reliable.

Where can you find published data on individuals' success ratios in interviews? I'd love to see it. Pyrois, I understand your categorical sampling bias (to explain the low responses in 10-49 and 70-79), but I'm trying to understand why the graph would approach a completely normal distribution, especially with the high number of 90-100% responses, which are most probably all 100% responses?
 
I hope to vote in this poll in the upcoming weeks. Hopefully more people will chime in to update the statistics on interview:acceptance rate. 🙂
 
i interviewed at a school that said that they accepted exactly 50% of interviewed applicants.
 
Well, normalization in this case would require more data. [...] Then divide each category by its relative weighting.

This, of course, assumes a completely random sampling of individuals, which is never attainable, but statistics is all about "good enough" measures.

That was awesome. Another approach (besides trying to normalize the results) would be to change the bins. It reduces our specificity but increases our accuracy, and it's arguably more realistic than anyone on SDN going through the work of normalization... maybe Sector9 would do it, I don't know.
 
i interviewed at a school that said that they accepted exactly 50% of interviewed applicants.

Many schools are this way. The question is whether or not there is a tendency for the same 50% of interviewees to get the offers most of the time. I would suspect that there is at least SOME tendency in this direction, because there are a handful of applicants who look great on paper but bomb social interactions, such as those at interview days (e.g., your stereotypical 3.8+/40+ applicant with average ECs and negligible social skills or negligible sense of humility). As a result, you are likely to have, perhaps, 410 applicants each year (on average) who, despite looking stellar on paper with their 3.8+/33+ stats, get rejected at every single school to which they apply. One would assume these students generally get interviewed at quite a few schools on average, but apparently they get rejected. They make up a small, but still significant, fraction of the total applicant pool: about 1% of the total pool and 12% of the students with their elite numbers.
 
Also, this poll is nearly 5 years old at this point.
 