Are there any studies demonstrating the efficacy of admissions committees?


EarpBars

I've done a cursory search for any systematic and cited studies showing whether admissions committees at US allopathic med schools are indeed effective but wasn't able to find any compelling research that indicates that the admissions committees themselves are an influential or effective part of medical education itself.

There are, of course, countless studies suggesting the effectiveness of certain principles employed by med school adcoms (e.g., class diversity, MCAT and GPA scores, etc.), but there's a paucity of evidence to suggest that the role of adcoms is really very effective beyond having a random admissions process based on these aforementioned factors.

What I'm basically asking is, what is the value of an admissions committee when MCAT, GPA, ECs, and MMI interviews (not traditional interviews, as their value is ambiguous) are supposed to predict the success of students in a medical curriculum? How do we know that randomly picking students with a certain profile will yield identical or superior results compared to those "thoughtfully" selected by adcoms? Any enlightenment would be tremendously appreciated.
 
Wouldn't you need people to weigh the significance of those factors? For example, you could not design a program that weighs the quality of ECs the way you could with MCAT and GPA. Personal statements and in-person character are other things a computer couldn't evaluate. An admissions committee is basically a group that weighs these factors.

Note: I say computer because from your post it sounds like you want a more objective system of ranking/admitting applicants, when a good part of the evaluation, based on what I've seen and read, is subjective.
 
Let me propose something radical:

Lottery tickets. You get a supplemental application if you meet minimum criteria for admission and that gets you one ticket. If you have a GPA above the school's mean you get a second ticket. If you have an MCAT above the school's mean you get a second ticket. If you have whatever other characteristics the school highly values (a gap year of full-time research experience, military veteran, Peace Corps volunteer, etc) you get another ticket. Tickets are pulled until the class fills. Now that would be random!
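
Just to make the mechanics concrete, here's a rough sketch (in Python) of how that ticket lottery could be run. The field names, inputs, and structure are hypothetical illustrations of the rules described above, not drawn from any real school's process:

```python
import random

def lottery_admit(applicants, school_gpa_mean, school_mcat_mean, class_size, seed=None):
    """Fill a class by the ticket lottery described above.

    Each applicant is a dict with hypothetical keys: 'name', 'gpa', 'mcat',
    and 'valued_traits' (a list of characteristics the school highly values,
    e.g. a research gap year, military service, Peace Corps).
    Everyone passed in is assumed to already meet the minimum criteria.
    """
    rng = random.Random(seed)

    # Build the ticket pool: one ticket for meeting the minimum criteria,
    # plus an extra ticket for each above-mean stat and each valued trait.
    tickets = []
    for a in applicants:
        n = 1
        if a["gpa"] > school_gpa_mean:
            n += 1
        if a["mcat"] > school_mcat_mean:
            n += 1
        n += len(a.get("valued_traits", []))
        tickets.extend([a["name"]] * n)

    # Pull tickets until the class fills; a person can only be admitted once.
    admitted = []
    while tickets and len(admitted) < class_size:
        pick = rng.choice(tickets)
        admitted.append(pick)
        tickets = [t for t in tickets if t != pick]
    return admitted
```

More tickets means better odds, but anyone who clears the floor has a nonzero chance of admission, which is what would make it genuinely random.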
 
PubMed is your friend

http://www.ncbi.nlm.nih.gov/pubmed/22891906
http://www.ncbi.nlm.nih.gov/pubmed/26313700
http://www.ncbi.nlm.nih.gov/pubmed/17651477
http://www.ncbi.nlm.nih.gov/pubmed/26330210
http://www.ncbi.nlm.nih.gov/pubmed/26488572
http://www.ncbi.nlm.nih.gov/pubmed/23574402
http://www.ncbi.nlm.nih.gov/pubmed/12634215

While it may not be published, Admissions and Clinical deans know how well their students do, and adjust admissions criteria accordingly.

And of course, how are you defining "effective"?

Get a class that can pass all their coursework/Step I/Step II?

Gain residency?

Gain competitive residencies?

Graduate on time (i.e., in four years)?

Have a low attrition rate?

Go into specialties rather than Primary Care?

The whole point of Admissions is not just to get people who will be good medical students, but people who will be good doctors. 4.0 automatons are a dime a dozen.



I've done a cursory search for any systematic and cited studies showing whether admissions committees at US allopathic med schools are indeed effective but wasn't able to find any compelling research that indicates that the admissions committees themselves are an influential or effective part of medical education itself.

There are, of course, countless studies suggesting the effectiveness of certain principles employed by med school adcoms (e.g., class diversity, MCAT and GPA scores, etc.), but there's a paucity of evidence to suggest that the role of adcoms is really very effective beyond having a random admissions process based on these aforementioned factors.

What I'm basically asking is, what is the value of an admissions committee when MCAT, GPA, ECs, and MMI interviews (not traditional interviews, as their value is ambiguous) are supposed to predict the success of students in a medical curriculum? How do we know that randomly picking students with a certain profile will yield identical or superior results compared to those "thoughtfully" selected by adcoms? Any enlightenment would be tremendously appreciated.
 
^agree with @TehTeddy
@LizzyM You...you mean...that's not already how it goes?

Also, efficacy and effectiveness are different! (Not that you used them incorrectly, per se.) 🙂
 
Let me propose something radical:

Lottery tickets. You get a supplemental application if you meet minimum criteria for admission and that gets you one ticket. If you have a GPA above the school's mean you get a second ticket. If you have an MCAT above the school's mean you get a second ticket. If you have whatever other characteristics the school highly values (a gap year of full-time research experience, military veteran, Peace Corps volunteer, etc) you get another ticket. Tickets are pulled until the class fills. Now that would be random!

The Dutch used this system until 1999, when popular (populist?) uproar forced the government to enact changes.
IIRC a study evaluating performance found no notable differences between hand-picked students and lottery-selected students.
 
Let me propose something radical:

Lottery tickets. You get a supplemental application if you meet minimum criteria for admission and that gets you one ticket. If you have a GPA above the school's mean you get a second ticket. If you have an MCAT above the school's mean you get a second ticket. If you have whatever other characteristics the school highly values (a gap year of full-time research experience, military veteran, Peace Corps volunteer, etc) you get another ticket. Tickets are pulled until the class fills. Now that would be random!

I am in favor of this method.
 
Thank you, @Goro, for those links; some of them are helpful.

Let me expand upon my initial inquiry. I do not doubt the significance of the soft skills and personality traits that are important factors in the success of a future physician. In fact, from my naive understanding, I strongly support the MMI process over the traditional interview for these reasons. I honestly think it's only a matter of time until most other schools adopt the MMI format.

My question is: can factors such as GPA, MCAT, ECs, ethnicity, and MMI performance be used to select medical students without having to consult an admissions committee at all? By quantifying all of these measures (even subjective characteristics), we could generate some score that determines whether a student is admitted or not. Additionally, assuming the MMI format proves superior, is it possible that this aspect of the admissions process will also become standardized? For example, instead of candidates having to travel all across the country to interview at a school, they could go to a local "testing center" and do their MMI interview there. Of course, interviewer bias and other factors would eventually have to be controlled for. Theoretically, this would be much more cost-effective than having candidates travel to schools before knowing whether they would be admitted. They would only really travel to a school where they have already been admitted and are there to learn more about the school itself. I think for most applicants, this would cut the traveling roughly in half.
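
To illustrate what I mean by "generate some score," here's a toy sketch; the weights, the factor names, and the 0-to-1 scaling are completely made up and just stand in for whatever a school might decide to use:

```python
# Toy composite-score selector. The weights are invented for illustration;
# each factor is assumed to be pre-scaled to a 0-1 value (including any
# attempt to quantify subjective measures like EC quality or MMI ratings).
WEIGHTS = {"gpa": 0.25, "mcat": 0.30, "ecs": 0.20, "mmi": 0.25}

def composite_score(applicant):
    """applicant: dict mapping each factor name to its 0-1 scaled value."""
    return sum(weight * applicant[factor] for factor, weight in WEIGHTS.items())

def admit_by_score(applicants, class_size):
    """Rank all applicants by composite score and admit from the top down."""
    ranked = sorted(applicants, key=composite_score, reverse=True)
    return ranked[:class_size]
```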

I see the CASPer test as evidence that these sorts of factors are starting to be incorporated earlier in the selection process (before the candidate has to travel around on the interview trail). IMO, there's a lot of incentive here; after all, there could be a lot of money involved in standardizing MMIs.
 
Let me propose something radical:

Lottery tickets. You get a supplemental application if you meet minimum criteria for admission and that gets you one ticket. If you have a GPA above the school's mean you get a second ticket. If you have an MCAT above the school's mean you get a second ticket. If you have whatever other characteristics the school highly values (a gap year of full-time research experience, military veteran, Peace Corps volunteer, etc) you get another ticket. Tickets are pulled until the class fills. Now that would be random!


But honestly, I would be extremely interested in the results if one school did this for one year. It wouldn't need to be a medical school. Maybe a dental or pharmacy or veterinary or some podiatry school that has had major budget cuts and can't afford a whole slew of admissions staff and interviewers. Have one software engineer write a program and one secretary field phone calls. That is your admissions office for a year. See what happens.

I would predict the student body in general would be indistinguishable from one chosen "normally", plus a small percentage of students who never would have been accepted by a "normal" admissions staff. You'd probably have a couple more students drop out than last year, but otherwise any differences would be masked by the number of students.



Now, who'll volunteer to go through with this?

In the name of science, of course.
 
Is the mission of an adcom to pick applicants who will successfully complete a medical curriculum, or is it to produce physicians who will serve in a variety of communities? How do we classify and define these varieties of communities, populations, and other stakeholders? Does every physician share a standard set of values and characteristics, or is there wide variety in experiences, education, attitude, etc.? How would we try to break down and analyze all the characteristics of applicants and of societal positions for physicians into a set of metrics to measure and compare?

I think the efficacy of an adcom should be judged by whether it chooses students who will meet the needs of the community (local, national, or international, depending on the mission of the institution). Certain qualities, such as the propensity to serve in underserved areas, would be selected for based on historical trends (i.e., data dependent). Essentially, the adcom could be reduced to an algorithm that systematically draws from historical data and information.

Although not totally related, from what I've read, huge amounts of data in healthcare are being transformed from "useless data" into "useful knowledge" used to make decisions, much of it via algorithms and analysis. I don't see it being much of a stretch to incorporate these principles into medical school admissions.
 
No amount of numbers on paper can tell you if a person is immature, a babbling idiot, a sociopath, a person who panics in difficult situations, a dissembler, anti-social, an outright liar, a fool, or is lacking in the humanistic domains we require.

Hence the need for the Adcom to provide a subjective human element in a very humanistic process and career.

I have to teach students. And so, I don't want some "testing center" to do my interviewing for me. I trust the judgment of my fellow faculty colleagues and students to do it, if I can't do it.

Thank you, @Goro, for those links; some of them are helpful.

Let me expand upon my initial inquiry. I do not doubt the significance of the soft skills and personality traits that are important factors in the success of a future physician. In fact, from my naive understanding, I strongly support the MMI process over the traditional interview for these reasons. I honestly think it's only a matter of time until most other schools adopt the MMI format.

My question is: can factors such as GPA, MCAT, ECs, ethnicity, and MMI performance be used to select medical students without having to consult an admissions committee at all? By quantifying all of these measures (even subjective characteristics), we could generate some score that determines whether a student is admitted or not. Additionally, assuming the MMI format proves superior, is it possible that this aspect of the admissions process will also become standardized? For example, instead of candidates having to travel all across the country to interview at a school, they could go to a local "testing center" and do their MMI interview there. Of course, interviewer bias and other factors would eventually have to be controlled for. Theoretically, this would be much more cost-effective than having candidates travel to schools before knowing whether they would be admitted. They would only really travel to a school where they have already been admitted and are there to learn more about the school itself. I think for most applicants, this would cut the traveling roughly in half.

I see the CASPer test as evidence that these sorts of factors are starting to be incorporated earlier in the selection process (before the candidate has to travel around on the interview trail). IMO, there's a lot of incentive here; after all, there could be a lot of money involved in standardizing MMIs.
 
No amount of numbers on paper can tell you if a person is immature, a babbling idiot, a sociopath, a person who panics in difficult situations, a dissembler, anti-social, an outright liar, a fool, or is lacking in the humanistic domains we require.

Hence the need for the Adcom to provide a subjective human element in a very humanistic process and career.

I have to teach students. And so, I don't want some "testing center" to do my interviewing for me. I trust the judgment of my fellow faculty colleagues and students to do it, if I can't do it.

I think I get what you're saying and where you're coming from. That said, I don't think it should preclude the possibility of consulting experts (e.g., psychologists) who probably have a lot more experience with these facets of human behavior and psychology. I don't doubt the good intentions of adcoms and the volunteer students and faculty who spend their time conducting interviews; I'm just saying that there could be improvement. Whether such an alternative system is financially feasible is another debate, but I think there could be mechanisms in place that better protect the candidate as well as the med school community. There is always an opportunity to improve, and I'm not claiming that what I proposed is necessarily the right change.

But the way I interpret the implementation of the CASPer test is that it's one more way of assessing softer skills before the candidate makes the financial commitment to travel to the school for an interview. There are a lot of applicants who are severely disadvantaged by the current application process, and this is one way to help even a very unbalanced playing field. Unfortunately, due to the principles of supply and demand, applicants are at the mercy of the status quo of med school admissions.
 
How to define, classify, and measure all aspects of this would rely on the experience and opinions of adcom members. For example, how would one measure and compare the propensity to serve in an underserved area given the vast array of experiences that applicants bring to the process? At some point each experience has to be examined and weighed by someone just to arrive at a crude value of low, medium, or high propensity to serve in an underserved area. Now combine that with the 10, 20, or 30 traits we may want in a doctor. Are we to score every EC for which of these traits it shows and how strongly the applicant presents it? How do we judge which experience is worth more or less? Do we want prospective physicians chosen based on an algorithm that is a mathematical approximation of a psycho-social reality? Except on a very crude scale, comparing across the thousands of possible experiences this way would be neither feasible nor useful. That is why judgment is needed, in the collective form of an adcom, to compare across this wide array.

While I admire the intentions of adcoms, the question remains whether this process actually has any effect on producing desirable physicians for society. Although in theory it may seem that such a thoughtful process should produce excellent physicians, there's a lack of studies to show that the selection process is any better than randomly selecting candidates with certain quantifiable traits (MCAT, GPA, MMI evaluations). Rating the value of ECs can be a little more complex but such a task can probably also be evaluated in some quasi-systematic way.

Basically, my concern is that the system followed by adcoms has not been proven more effective than randomly selecting candidates who exhibit some minimum cutoff in the areas traditionally evaluated in the admissions process.

In other words, if this were a scientific study, we should design an experiment that demonstrates whether the decision-making process of adcoms is more or less effective than some negative control (the negative control being, for example, random selection of candidates who meet some minimum criteria).
 
A lot of the things you're discussing (and apparently won't let go of) come from residency, something medical education can help prepare you for, but is largely powerless to influence once you're there. As several people have stated here, and they ought to know: "Residency is where you hone your craft".



While I admire the intentions of adcoms, the question remains whether this process actually has any effect on producing desirable physicians for society. Although in theory it may seem that such a thoughtful process should produce excellent physicians, there's a lack of studies to show that the selection process is any better than randomly selecting candidates with certain quantifiable traits (MCAT, GPA, MMI evaluations). Rating the value of ECs can be a little more complex but such a task can probably also be evaluated in some quasi-systematic way.

Basically, my concern is that the system followed by adcoms has not been proven more effective than randomly selecting candidates who exhibit some minimum cutoff in the areas traditionally evaluated in the admissions process.

In other words, if this were a scientific study, we should design an experiment that demonstrates whether the decision-making process of adcoms is more or less effective than some negative control (the negative control being, for example, random selection of candidates who meet some minimum criteria).
 
While I admire the intentions of adcoms, the question remains whether this process actually has any effect on producing desirable physicians for society. Although in theory it may seem that such a thoughtful process should produce excellent physicians, there's a lack of studies to show that the selection process is any better than randomly selecting candidates with certain quantifiable traits (MCAT, GPA, MMI evaluations). Rating the value of ECs can be a little more complex but such a task can probably also be evaluated in some quasi-systematic way.

Basically, my concern is that the system followed by adcoms has not been proven more effective than randomly selecting candidates who exhibit some minimum cutoff in the areas traditionally evaluated in the admissions process.

In other words, if this were a scientific study, we should design an experiment that demonstrates whether the decision-making process of adcoms is more or less effective than some negative control (the negative control being, for example, random selection of candidates who meet some minimum criteria).

Prior to the 1980s, there were actually some medical schools that admitted students based on algorithms. Interviews were not even required, if you can believe it. My understanding is that the push for metrics above all else subsided a bit when it became apparent that a not-so-insignificant proportion of students admitted under that paradigm were people who, for instance, could memorize the Merck Manual but could not tie their own shoes or speak to members of the opposite sex. Or proclaimed, without any sense of irony, a deep desire to pursue pediatric gynecology. In some ways I believe the current emphasis on communication skills is an echo of that era being rebuked.

While I would agree that scientific study of the admissions process would be desirable, one has to approach a well-entrenched status quo from a slightly different angle. Instead of asking if there are studies to support the effectiveness of the admissions process, you should ask if there are studies that suggest the ineffectiveness of the admissions process.

Are there?
 
US MD schools have very low attrition and very high residency placement. Seems like they are effective.

I would be wary of attributing these outcomes to adcoms only. It could very well be largely due to another factor, such as the applicant pool or med school curriculum, etc.

Sounds like Farva is back from vacation.

I can assure you I'm not him :laugh: although I know he likes to stir the pot. I don't expect this discussion to change anything about the admissions process but it's somewhat interesting to question the process, because, after all, that's an important part of advancing the system.

I personally don't believe there is zero positive effect of the adcom decision-making process on producing quality physicians. However, I think it's worthwhile to consider how it could be improved for the benefit of all.
 
I would be wary of attributing these outcomes to adcoms only. It could very well be largely due to another factor, such as the applicant pool or med school curriculum, etc.

Ultimately, they are selecting students that are succeeding in med school and becoming doctors. That is their purpose.
 
To put this question more succinctly, I'll write out my statement like an equation:

Adcom decision making + MCAT + GPA + ECs + MMI performance + residency training + ... --> x1
No Adcom decision making + MCAT + GPA + ECs + MMI performance + residency training + ... --> x2

Where x1 and x2 are measures of the quality of physicians.

The question is, which is greater? x1 or x2?

Obviously life is not as simple as the equations above, and for argument's sake, we can assume terms like "quality of physician", refers to how effective physicians are in improving the quality of healthcare in the community.

So, the question is whether or not adcom decision making is actually effecting some positive influence on the quality of physicians in their training or if there's actually no difference at all.
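
If it helps, the comparison could be sketched as a simulation. Everything below (the outcome model, the minimum-criteria flag, the adcom_select function) is invented purely to show the shape of the experiment I'm imagining, not a claim about how any of this actually behaves:

```python
import random

def simulate_outcome(student, rng):
    # Placeholder "physician quality" model: baseline ability plus noise.
    # In a real study this would be a measured downstream outcome
    # (Step scores, attrition, residency placement, patient outcomes).
    return student["baseline_ability"] + rng.gauss(0, 1)

def compare_policies(applicant_pool, class_size, adcom_select, seed=0):
    """Compare adcom selection (x1) against random selection above a floor (x2)."""
    rng = random.Random(seed)
    eligible = [a for a in applicant_pool if a["meets_minimum"]]

    adcom_class = adcom_select(eligible, class_size)   # x1 cohort
    random_class = rng.sample(eligible, class_size)    # x2 cohort (negative control)

    x1 = sum(simulate_outcome(s, rng) for s in adcom_class) / class_size
    x2 = sum(simulate_outcome(s, rng) for s in random_class) / class_size
    return x1, x2
```

The real experiment would obviously measure outcomes years later rather than simulating them, but the structure is the same: one class picked by the committee, one picked at random above the floor, then compare.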

I don't envy the responsibilities of the adcom... they're basically being asked to predict the future based on past performance, which is not a trivial feat.
 
Honestly, you have to figure... who's gonna fund this research? Highly doubtful any med school would fund something like this, since they probably only have something to lose.

Unfortunately, politics is pervasive in the research arena and so you have to consider this aspect even when reading studies that promote certain findings.
 
One of the glaring issues with the admissions process is that adcoms are likely unaware of the relative merit of specific extracurricular activities. For example, the significance and difficulty of certain research contributions can only really be understood and appreciated by experts in that field. It's really unlikely that a member of the adcom would have any real understanding of how much work, creativity, etc. a particular effort required of the applicant. Even reducing a publication to its "impact factor" leaves out a lot of context, which more or less does a disservice to both the med school and the candidate.

I suppose, at the end of the day, the only factor that is truly standardized is the MCAT. Although not perfect, it is, to my knowledge, the fairest and most objective measure used to compare all applicants...
 
Do you think we're idiots??? I don't have to be an expert in physiology to understand that if an applicant has published in a physiology journal that this is a significant accomplishment. And the desire for applicants who have research experience is to show us that they understand something about the scientific method.

You're going to have to do better in claiming that we don't understand the significance of particular extracurriculars

One of the glaring issues with the admissions process is that adcoms are likely unaware of the relative merit of specific extracurricular activities. For example, the significance and difficulty of certain research contributions can only really be understood and appreciated by experts in that field. It's really unlikely that a member of the adcom would have any real understanding of how much work, creativity, etc. a particular effort required of the applicant. Even reducing a publication to its "impact factor" leaves out a lot of context, which more or less does a disservice to both the med school and the candidate.

I suppose, at the end of the day, the only factor that is truly standardized is the MCAT. Although not perfect, it is, to my knowledge, the fairest and most objective measure used to compare all applicants...
 
Do you think we're idiots???

This is preallo's view of medical school admissions, to which @mimelim replies:

mimelim said:
You should start by learning a little bit about the admissions process before you start calling parts of it absurd. After that, then one can have a reasonable discussion.
mimelim said:
You highlight a major issue with pre-med advising. If it doesn't smell right, question it. Almost everything in the admissions process is done for a reason, and it is actually a pretty logical, albeit somewhat subjective, process. If someone tells you something like "it is random" or "do this, even though it doesn't make sense," you should be wary.
 
This is a weird topic. It would be warranted if US MD schools had horrible attrition rates, which they do not. Most people admitted to med school succeed. I guess you could argue there is a portion of students who don't get in and would have succeeded if they had, but who cares? It is not absurd for schools to select the students with the highest stats, most research, most volunteering, and most 'other' accomplishments.
 
Do you think we're idiots??? I don't have to be an expert in physiology to understand that if an applicant has published in a physiology journal that this is a significant accomplishment. And the desire for applicants who have research experience is to show us that they understand something about the scientific method.

You're going to have to do better in claiming that we don't understand the significance of particular extracurriculars

My question is how the merits of one's extracurricular activities and achievements are compared. I agree that co-authorship on a publication is impressive but one co-authorship is not equal to another, and understanding the nuances between different publications (and more broadly, extracurricular achievements in general) would require insight that is probably beyond most admissions committees.

Furthermore, there's an issue when ECs on the AMCAS application can be very easily embellished. I would argue that there are few aspects of the med school application that can be taken at face value, that's all.
 
My question is how the merits of one's extracurricular activities and achievements are compared. I agree that co-authorship on a publication is impressive but one co-authorship is not equal to another, and understanding the nuances between different publications (and more broadly, extracurricular achievements in general) would require insight that is probably beyond most admissions committees.

Furthermore, there's an issue when ECs on the AMCAS application can be very easily embellished. I would argue that there are few aspects of the med school application that can be taken at face value, that's all.

Point 1: That is what an interview is for.

Point 2: That is also what an interview is for, and the AAMC and schools do spot check questionable sounding activities (ask METTA WORLD PEACE).
 
My question is how the merits of one's extracurricular activities and achievements are compared. I agree that co-authorship on a publication is impressive but one co-authorship is not equal to another, and understanding the nuances between different publications (and more broadly, extracurricular achievements in general) would require insight that is probably beyond most admissions committees.

Furthermore, there's an issue when ECs on the AMCAS application can be very easily embellished. I would argue that there are few aspects of the med school application that can be taken at face value, that's all.

There is no universal standard of what makes a good med school applicant. Each school, every admissions meeting even, has its own criteria and weighting of criteria they use to make a decision, so in that sense it is a little random. There is no precise, objective measure of applicant quality, so for people in the middle the sorting process of where you get accepted is somewhat random, but there is a logic to each decision.
 
There is no universal standard of what makes a good med school applicant. Each school, every admissions meeting even, has its own criteria and weighting of criteria they use to make a decision, so in that sense it is a little random. There is no precise, objective measure of applicant quality, so for people in the middle the sorting process of where you get accepted is somewhat random, but there is a logic to each decision.

I agree that a narrative/rationale for certain decisions can be made, although I would argue that being able to formulate a logical argument doesn't necessarily yield the best results. In other words, a hypothesis can be sound and logical, but that doesn't make it true.

This is a weird topic. It would be warranted if US MD schools had horrible attrition rates, which they do not. Most people admitted to med school succeed. I guess you could argue there is a portion of students who don't get in and would have succeeded if they had, but who cares? It is not absurd for schools to select the students with the highest stats, most research, most volunteering, and most 'other' accomplishments.

I completely see what you're saying here, but I would argue that correlation does not equal causation. Like @LizzyM alluded to earlier in this thread, one way to understand the effectiveness of adcom decision making would be to compare it to randomized selection from a group of candidates who have passed some minimum threshold.
 
I agree that a narrative/rationale for certain decisions can be made, although I would argue that being able to formulate a logical argument doesn't necessarily yield the best results. In other words, a hypothesis can be sound and logical, but that doesn't make it true.
I would agree with this; it is perhaps not the best way to go about the decisions, but it is just the way it is.
 
I completely see what you're saying here, but I would argue that correlation does not equal causation. Like @LizzyM alluded to earlier in this thread, one way to understand the effectiveness of adcom decision making would be to compare it to randomized selection from a group of candidates who have passed some minimum threshold.

What would the point of that be? There is nothing that needs to be improved.
 
Any publication is a valuable accomplishment, whether it's in Cell or the Kansas J of Botany, as long as it's on PubMed. People who have pubs are rare; we just want students who understand what science is like.

As for embellishments, we're pretty good at sniffing out BS artists. Admissions deans are starting to do due diligence and verify hours as well, I hear.

My question is how the merits of one's extracurricular activities and achievements are compared. I agree that co-authorship on a publication is impressive but one co-authorship is not equal to another, and understanding the nuances between different publications (and more broadly, extracurricular achievements in general) would require insight that is probably beyond most admissions committees.

Furthermore, there's an issue when ECs on the AMCAS application can be very easily embellished. I would argue that there are few aspects of the med school application that can be taken at face value, that's all.
 
I agree that a narrative/rationale for certain decisions can be made, although I would argue that being able to formulate a logical argument doesn't necessarily yield the best results. In other words, a hypothesis can be sound and logical, but that doesn't make it true.



I completely see what you're saying here, but I would argue that correlation does not equal causation. Like @LizzyM alluded to earlier in this thread, one way to understand the effectiveness of adcom decision making would be to compare it to randomized selection from a group of candidates who have passed some minimum threshold.
I think one nuance that is somewhat missing from this discussion is that the holistic, subjective admissions process is not JUST for the purposes of selecting individuals that will succeed in medical school and as physicians. Adcoms are also putting together a learning environment and community that will in turn impact the individual members.

A group of students with diverse interests and experiences teaches each other and cultivates the whole lot into better doctors who are cognizant of difference and enriched by it. This would be very hard to prescribe in a formula.

The value of a well-selected class should be greater than the sum of its parts.
 
@EarpBars The fatal flaw is that once you try to quantify success markers, you go down the rabbit hole of over-analysis. For instance, you make no mention of age. Is that a positive or a negative? Supposedly it marks a sense of real-world experience and maturity, right? Or maybe it just means that they didn't do well as an undergrad, dicked around for a few years, did some grade replacement, and are now 26 but still below the curve. Sometimes subjective markers and qualitative analysis are needed. As someone who has spent the last three years hiring employees, I know the importance of the face-to-face. No one gets a golden ticket for having strong grades or the right reference.

It sounds like your big issue is one of scarcity: are adcoms effectively choosing the right applicants, or are they missing out on "better" applicants? The answer is "yes." But that's because not all med schools are driven by the same mission. @Curioso06 brings up research publications, but some schools aren't research focused and don't prioritize those who want to pursue academic medicine.

Kansas J of Botany
I see you've finally gotten around to reviewing my application.
 