Do donations influence med school decisions?

Doggo

Just read an article on ProPublica about donations and Harvard undergrad admissions (with a focus on Jared Kushner). I haven't thought about this sort of thing in a long time, but I was wondering if anyone knew of any good recent books/articles about med schools and donations?

 
Lol

Edit: sorry I didn't actually intend to press the post button. I've no knowledge on this so just ignore me
 
Yeah, probably. I just realized Mt. Sinai's med school is named after that billionaire investor Carl Icahn (have seen him a lot on the news lately). If his future progeny ever wanted to go to medical school, they'd all at least be a shoo-in there...probs.
 
Definitely. Not that they would necessarily accept a crap student because their parents donated, but I know a person whose parents donated $250,000 and they happened to be in the range of average matriculants, and coincidentally got accepted. Of course they say the donation had nothing to do with it.
 
If you somehow happen to have a different last name from your parents who are donors, are you obligated to let the school know who you are?
We can see the names of your parents in the primary.
 
I have a buddy who applied to med school this cycle with pretty low stats (Caribbean-type stats). He got rejected from almost every school he applied to, got nervous, called his orthopedic-surgeon brother who was best friends with a person whose family owned a medical school, and was told "he's in." Not a donation, but nepotism for sure.
 
I have a buddy who applied to med school this cycle with pretty low stats (Caribbean-type stats). He got rejected from almost every school he applied to, got nervous, called his orthopedic-surgeon brother who was best friends with a person whose family owned a medical school, and was told "he's in." Not a donation, but nepotism for sure.

His family owned a medical school..?
 
Definitely. It had better have 7+ figures tho. Or enough to build a new wing/building.

Lesson here, kids: often who you know is more important than what you know.
 
Definitely. It had better have 7+ figures tho. Or enough to build a new wing/building.

Lesson here, kids: often who you know is more important than what you know.
The lesson I took away was:
1. Be born rich.
2. Don't be poor.
3. Always follow rules 1 and 2.
 
Whether it's medicine or any other position, networking and who you know take you a long way. If you ask around, you'll find that many people got their current jobs because they knew someone who recommended them. Few people nowadays get a job by cold-applying to everything. It's definitely there in medical admissions - it would be foolish to think that if your parents donate a lot of money to a medical school (a lot), you wouldn't have at least a little leg up - but I think it's less pronounced because it would reflect poorly on the school if they let in somebody who obviously has no potential for medicine and that person ends up killing a patient down the line.
 
Whether it's medicine or any other position, networking and who you know take you a long way. If you ask around, you'll find that many people got their current jobs because they knew someone who recommended them. Few people nowadays get a job by cold-applying to everything. It's definitely there in medical admissions - it would be foolish to think that if your parents donate a lot of money to a medical school (a lot), you wouldn't have at least a little leg up - but I think it's less pronounced because it would reflect poorly on the school if they let in somebody who obviously has no potential for medicine and that person ends up killing a patient down the line.
That's why they have Step 1 and residency. If someone has a 25 on the MCAT and can't get in competitively, but gets legacied or bought in, that person will probably pass Step 1 and be OK in residency, provided they put in the work and are not complete douche nozzles. If you have less than a 25 on the MCAT, that is a riskier gamble for the medical school.
 
That's why they have Step 1 and residency. If someone has a 25 on the MCAT and can't get in competitively, but gets legacied or bought in, that person will probably pass Step 1 and be OK in residency, provided they put in the work and are not complete douche nozzles. If you have less than a 25 on the MCAT, that is a riskier gamble for the medical school.

There's no arbitrary cut-off on the MCAT that makes you able to pass Step 1 and do well in residency. There is only a weak correlation between MCAT and Step 1 scores (see: "The Predictive Validity of the MCAT for Medical School Performance and Medical Board Licensing Examinations: A Meta-Analysis of the Published Research"). Pass rate on Step 1 is an important metric, and the weaker the applicant academically, the higher the risk for the school of the applicant not passing the Steps.
 
There's no arbitrary cut-off on the MCAT that makes you able to pass Step 1 and do well in residency. There is only a weak correlation between MCAT and Step 1 scores (see: "The Predictive Validity of the MCAT for Medical School Performance and Medical Board Licensing Examinations: A Meta-Analysis of the Published Research"). Pass rate on Step 1 is an important metric, and the weaker the applicant academically, the higher the risk for the school of the applicant not passing the Steps.
That is not what the AAMC is pimping. They say 500 and greater is good to go. Also, you forget to mention that the moderate correlation is the best predictor we have, so that is the only one that can be used. Why do you think medical schools continue to focus on the test?

from the paper you linked:
"MCAT as a whole shows relatively consistent and good predictive validity findings for performance in both medical school and on licensing examinations"
 
That is not what the AAMC is pimping. They say 500 and greater is good to go. Also, you forget to mention that the moderate correlation is the best predictor we have, so that is the only one that can be used. Why do you think medical schools continue to focus on the test?

from the paper you linked:
"MCAT as a whole shows relatively consistent and good predictive validity findings for performance in both medical school and on licensing examinations"

Can you provide a link to this study, please? I am not calling you out; I am just genuinely curious to see this data.
 
That is not what the AAMC is pimping. They say 500 and greater is good to go. Also, you forget to mention that the moderate correlation is the best predictor we have, so that is the only one that can be used. Why do you think medical schools continue to focus on the test?

They would like 500 and greater to be good for medical school, but the average admitted student's MCAT is still above 500. The correlation is not a very good predictor. The study uses a classic trick - if you want the correlation to appear bigger, you report the r value. But what you should be interested in is the coefficient of determination, or r^2. The r they report is 0.6, making r^2 = 0.36. One can make up arbitrary criteria for what counts as "small" or "medium" or "moderate," but what the numbers mean is that the majority of what determines Step 1 score is accounted for by factors other than the MCAT (as an aggregate, they claim that the MCAT accounts for 44% of the variance on Step 1).
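
To put numbers on the r versus r^2 point, here is a minimal sketch (plain Python; the 0.6 is simply the r quoted above, nothing else is taken from the paper):

```python
# Illustrative only: turning a reported correlation (r) into the
# coefficient of determination (r^2), i.e., the share of variance explained.
r = 0.6                        # the r quoted above
r_squared = r ** 2             # variance in Step 1 scores explained by the MCAT
unexplained = 1.0 - r_squared  # variance left to everything else

print(f"r = {r}, r^2 = {r_squared:.2f}")            # r = 0.6, r^2 = 0.36
print(f"unexplained variance = {unexplained:.2f}")  # 0.64
```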

Just because it's the only predictor doesn't mean it's a good one. Your argument is tantamount to saying, "Oh, I have a test that can predict whether you're going to die with 40% efficacy. This is the only test available. This test says you're going to die. Therefore, you're going to die." There's a logical leap from the test to the conclusion you're making that is not supported by the data - it's simply not powerful enough.

What that means when making an argument about whether someone with a 25 MCAT is going to pass Step 1 is that one should look elsewhere, to factors that have not been measured (and likely are unable to be measured) such as medical school curriculum, personal motivation for pursuing medicine, etc. Just because they got a low MCAT doesn't mean that they're not going to pass the Step 1. In fact, it doesn't even mean that one can say with >50% certainty that the individual is not going to pass Step 1. All it means is that one has to look elsewhere to determine whether that individual will pass, while acknowledging that the low MCAT could be a contributing factor.

from the paper you linked:
"MCAT as a whole shows relatively consistent and good predictive validity findings for performance in both medical school and on licensing examinations"

Further, when quoting from a work, it is deliberately deceptive to quote only fragments of sentences. Here's the context:

"Although the MCAT as a whole shows relatively consistent and good predictive validity findings for performance in both medical school and on licensing examinations, there was considerable variation on the four different subtests. In particular, the biological sciences subtest had the only adjusted medium effect-size value on measures of medical school performance. In predicting performance on the medical board licensing examination measures, only the biological sciences and verbal reasoning subtests maintained adjusted medium effect-size values across the first two and all three Step examination respectively."

In other words, even when conceding the low coefficient of determination, total MCAT score doesn't even matter so much as the biological and verbal subtest scores.
 
Adcoms nationwide are ignoring this

That is not what the AAMC is pimping. They say 500 and greater is good to go. Also, you forget to mention that the moderate correlation is the best predictor we have, so that is the only one that can be used. Why do you think medical schools continue to focus on the test?

from the paper you linked:
"MCAT as a whole shows relatively consistent and good predictive validity findings for performance in both medical school and on licensing examinations"
 
Just read an article on ProPublica about donations and Harvard undergrad admissions (with a focus on Jared Kushner). I haven't thought about this sort of thing in a long time, but I was wondering if anyone knew of any good recent books/articles about med schools and donations?

Wow, I just read up more on this, and Jared Kushner's dad didn't even attend Harvard! Must be nice to be super rich...
 
1. They would like 500 and greater to be good for medical school, but the average admitted student's MCAT is still above 500. The correlation is not a very good predictor. The study uses a classic trick - if you want the correlation to appear bigger, you report the r value. But what you should be interested in is the coefficient of determination, or r^2. The r they report is 0.6, making r^2 = 0.36. One can make up arbitrary criteria for what counts as "small" or "medium" or "moderate," but what the numbers mean is that the majority of what determines Step 1 score is accounted for by factors other than the MCAT (as an aggregate, they claim that the MCAT accounts for 44% of the variance on Step 1).

1A. Just because schools are not adhering to the recommendation doesn't mean the AAMC didn't make that recommendation with passing boards in mind. Schools still sticking to the highest MCAT matriculants they can wrangle is just a testament to their belief in the test's ability to predict higher board scores.

1B. They are using Cohen, as cited in the paper, for the buckets of weak, moderate, and strong. You claim it is weak; I am sure the Fields Medal folks are going to be knocking on your door. I am sure adcoms would rather take @aldol16's interpretation over a peer-reviewed paper's.

2. Just because it's the only predictor doesn't mean it's a good one. Your argument is tantamount to saying, "Oh, I have a test that can predict whether you're going to die with 40% efficacy. This is the only test available. This test says you're going to die. Therefore, you're going to die." There's a logical leap from the test to the conclusion you're making that is not supported by the data - it's simply not powerful enough.

If you had a 44% chance of getting hit by a bus if you bought an iPhone today, would you buy one? Would you rather they use tea leaves? If they truly thought it wasn't predictive enough, we would see a wider variation of matriculant MCAT scores, including lower deciles. This is simply not the case.


3. What that means when making an argument about whether someone with a 25 MCAT is going to pass Step 1 is that one should look elsewhere, to factors that have not been measured (and likely are unable to be measured) such as medical school curriculum, personal motivation for pursuing medicine, etc. Just because they got a low MCAT doesn't mean that they're not going to pass the Step 1. In fact, it doesn't even mean that one can say with >50% certainty that the individual is not going to pass Step 1. All it means is that one has to look elsewhere to determine whether that individual will pass, while acknowledging that the low MCAT could be a contributing factor.

It doesn't take a rocket surgeon to figure out that people who perform well on a high-stakes, career-defining, multiple-choice test once will probably do that again. See reasons above.

Further, when quoting from a work, it is deliberately deceptive to quote only fragments of sentences. Here's the context:

"Although the MCAT as a whole shows relatively consistent and good predictive validity findings for performance in both medical school and on licensing examinations, there was considerable variation on the four different subtests. In particular, the biological sciences subtest had the only adjusted medium effect-size value on measures of medical school performance. In predicting performance on the medical board licensing examination measures, only the biological sciences and verbal reasoning subtests maintained adjusted medium effect-size values across the first two and all three Step examination respectively."

In other words, even when conceding the low coefficient of determination, total MCAT score doesn't even matter so much as the biological and verbal subtest scores.

It is not deceptive when it literally means the same thing I was saying.

I am unsure how you did on your verbal section, but that is not what it is saying. It says the predictive validity is good and relatively consistent; however, the subtests are a different story.
Since you are having trouble, here is another instance in the paper:

"The MCAT total has a large predictive validity coefficient (r = 0.66; 43.6% of the variance) effect size for USMLE Step 1, and medium validity coefficients for USMLE Step 2 (r = 0.43; 18.5% of the variance) and USMLE Step 3 (r = 0.48; 23.0% of the variance)."
 
1A. Just because schools are not adhering to the recommendation doesn't mean the AAMC didn't make that recommendation with passing boards in mind. Schools still sticking to the highest MCAT matriculants they can wrangle is just a testament to their belief in the test's ability to predict higher board scores.

1B. They are using Cohen, as cited in the paper, for the buckets of weak, moderate, and strong. You claim it is weak; I am sure the Fields Medal folks are going to be knocking on your door. I am sure adcoms would rather take @aldol16's interpretation over a peer-reviewed paper's.

No, schools don't admit people based on MCAT score. The MCAT is one part of the calculus, yes, but there are many other factors that are just as important for predicting success in medical school and on the Steps. That's the point.

Again, anybody can arbitrarily define what is "weak," "moderate," or "strong." I do not claim it is weak. In fact, I don't recall using the word "weak" anywhere in my post. I am not going to argue what should be considered "strong" or "moderate" or "weak." I'm only interested in what the numbers say. The numbers very clearly show that MCAT score only accounts for ~40% of the variance on the Step 1, meaning that >50% of Step 1 score is explained by other factors.

If you had a 44% chance of getting hit by a bus if you bought an iPhone today, would you buy one? Would you rather they use tea leaves? If they truly thought it wasn't predictive enough, we would see a wider variation of matriculant MCAT scores, including lower deciles. This is simply not the case.

You keep resorting to the narrow-minded "we are invariably bound to using this measure since there is no other measure." No. Find another measure. Use qualitative characteristics that may account for the other 56% of variance. Innovate. That's why med schools don't make decisions based on MCAT scores.

The problem most people don't seem to get about statistics is that statistics applies on a macro-level analysis of a sample or population. It does not apply to individual people. Even if you say that the MCAT accounts for 40% of a Step 1 score, that doesn't mean that for Student A who gets a 25 on his/her MCAT score, that will determine 40% of his/her Step 1 score.

We do see a wide array of matriculant MCAT scores. Have you not seen the ranges of matriculant scores at schools - including top ones?

It doesn't take a rocket surgeon to figure out that people who perform well on a high-stakes, career-defining, multiple-choice test once will probably do that again. See reasons above.

On the sample/population level, this might be true but statistics don't apply to individuals.

I am unsure how you did on your verbal section, but that is not what it is saying. It says the predictive validity is good and relatively consistent; however, the subtests are a different story.
Since you are having trouble, here is another instance in the paper:

"The MCAT total has a large predictive validity coefficient (r = 0.66; 43.6% of the variance) effect size for USMLE Step 1, and medium validity coefficients for USMLE Step 2 (r = 0.43; 18.5% of the variance) and USMLE Step 3 (r = 0.48; 23.0% of the variance)."

Suggesting that I'm dumb because I obviously did poorly on my verbal? Oh, yeah, I absolutely did terrible on my verbal. It's out there!

Anybody can define what is "large." I would encourage you to look at the numbers instead of generalizing based on how other people interpret the numbers. r = 0.66, r^2 = 0.44 - a medium correlation at best.
 
There's no arbitrary cut-off on the MCAT that makes you able to pass Step 1 and do well in residency. There is only a weak correlation between MCAT and Step 1 scores (see: "The Predictive Validity of the MCAT for Medical School Performance and Medical Board Licensing Examinations: A Meta-Analysis of the Published Research"). Pass rate on Step 1 is an important metric, and the weaker the applicant academically, the higher the risk for the school of the applicant not passing the Steps.

No, schools don't admit people based on MCAT score. The MCAT is one part of the calculus, yes, but there are many other factors that are just as important for predicting success in medical school and on the Steps. That's the point.

Again, anybody can arbitrarily define what is "weak," "moderate," or "strong." I do not claim it is weak. In fact, I don't recall using the word "weak" anywhere in my post. I am not going to argue what should be considered "strong" or "moderate" or "weak." I'm only interested in what the numbers say. The numbers very clearly show that MCAT score only accounts for ~40% of the variance on the Step 1, meaning that >50% of Step 1 score is explained by other factors.

Please see your own quote above.


You keep resorting to the narrow-minded "we are invariably bound to using this measure since there is no other measure." No. Find another measure. Use qualitative characteristics that may account for the other 56% of variance. Innovate. That's why med schools don't make decisions based on MCAT scores.

The problem most people don't seem to get about statistics is that statistics applies on a macro-level analysis of a sample or population. It does not apply to individual people. Even if you say that the MCAT accounts for 40% of a Step 1 score, that doesn't mean that for Student A who gets a 25 on his/her MCAT score, that will determine 40% of his/her Step 1 score.
That is not the point; the point is that we make decisions based on the best data available. Exceptions will be made; sure, there is a possibility that a 20 may end up getting a 260 on their Step 1 or acing med school, but the following data says otherwise. Using the tools currently available and always trying to find better tools are not mutually exclusive. Outliers absolutely exist and variation absolutely occurs naturally, but reliably telling who the outliers will be is a task that may be outside the capability of most people. Even if you take away the scoring relationship between Step 1 and the MCAT, the sheer ability to pass and survive is also predicted well by the MCAT.

https://www.aamc.org/download/410078/data/mcatacademicmedicinearticles.pdf


[attached: chart from the AAMC data linked above]

We do see a wide array of matriculant MCAT scores. Have you not seen the ranges of matriculant scores at schools - including top ones?
On the sample/population level, this might be true but statistics don't apply to individuals.
The following population-level data really flies in the face of what you are saying. Yes, people get admitted with lower-decile scores; however, the likelihood gets smaller and smaller, and this is with a perfect GPA.

[attached: acceptance-rate data by MCAT score bin]

Suggesting that I'm dumb because I obviously did poorly on my verbal? Oh, yeah, I absolutely did terrible on my verbal. It's out there!

Anybody can define what is "large." I would encourage you to look at the numbers instead of generalizing based on how other people interpret the numbers. r = 0.66, r^2 = 0.44 - a medium correlation at best.
I don't think you are dumb; I may have lost my composure after you said I was being deceptive. I apologize. The paper really does claim a moderate-to-large relationship, which is the opposite of your claim of weak.

The continued reliance on the MCAT, especially higher-decile scores, for admission only underscores my point that people give this data credence even if it is not perfect. I suppose this entire argument is moot: if the adcom is taking money in exchange for an admission, relying on the available evidence may not be at the top of its agenda.
 
@aldol16 and @libertyyne

whoa whoa whoa, fellas/ladies, we're on the same side here. What are we, a legislative body? This is a decent forum for decent people. Now, shake hands and make up.
 
Please see your own quote above.

I apologize - I thought you were referring to my post in response to your analysis of the paper. I didn't know you were referring to my original comment. Yes, I should modify that to say "medium at best." But again, what's medium is arbitrarily defined - even by statisticians. Whether there's a practical correlation is what's important here.

That is not the point; the point is that we make decisions based on the best data available. Exceptions will be made; sure, there is a possibility that a 20 may end up getting a 260 on their Step 1 or acing med school, but the following data says otherwise. Using the tools currently available and always trying to find better tools are not mutually exclusive. Outliers absolutely exist and variation absolutely occurs naturally, but reliably telling who the outliers will be is a task that may be outside the capability of most people. Even if you take away the scoring relationship between Step 1 and the MCAT, the sheer ability to pass and survive is also predicted well by the MCAT.

My point is not about predicting outliers. You misunderstand. My point is that statistics cannot be used to argue that a particular individual who scores a 25 on his or her MCAT has any particular odds of scoring above a certain score on Step 1. I believe @gonnif made this statistical point clear in another post (sorry if it wasn't you - I remember the post distinctly, but if it wasn't you, perhaps you remember it too and can link it). Statistics describe a sample or a population as a whole well but do not apply to specific people who earn a certain score on their MCAT. The analogy I love is from physics - Schrödinger's cat. The cat is in the box and it is in a certain state practically, but if you only know that 50% of the cats are alive and 50% of the cats are dead, then you can only say that the cat is both alive and dead (sorry for butchering the physics, physicists) until you open the box. The statistics can only tell you that one variable might predict another with 40% certainty in the sample as a whole, but that means nothing for the individual. For the individual, statistics do not apply. A common fallacy is applying statistics to an individual as something with predictive value - statistics have no predictive value for a specific person, only for the population as a whole.
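
To illustrate the population-versus-individual point, here is a minimal simulation sketch (made-up standardized scores correlated at r = 0.6, not real MCAT or Step 1 data):

```python
# Simulate pairs of test scores correlated at r = 0.6, then look at how much
# the second score varies among people with the *same* first score.
import random

random.seed(0)
r = 0.6
pairs = []
for _ in range(100_000):
    x = random.gauss(0, 1)                              # "test 1" z-score
    y = r * x + (1 - r**2) ** 0.5 * random.gauss(0, 1)  # "test 2" z-score
    pairs.append((x, y))

# Everyone who scored about one SD below the mean on the first test:
low = [y for x, y in pairs if -1.1 < x < -0.9]
beat_mean = sum(1 for y in low if y > 0) / len(low)
print(f"low scorers on test 1 who still beat the mean on test 2: {beat_mean:.0%}")
# Roughly 23%: the correlation describes the cohort, not any one person's fate.
```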

The following population-level data really flies in the face of what you are saying. Yes, people get admitted with lower-decile scores; however, the likelihood gets smaller and smaller, and this is with a perfect GPA.

Uhhh, your data literally shows that those in the 24-26 category have an acceptance rate of 37%. That's a higher rate than undergraduate acceptance at the Ivies. Those in the 21-23 bin also have a ~20% acceptance rate. Higher than most Ivies for undergrad (except maybe Penn but they're really more of a state school (kidding)).
 
I apologize - I thought you were referring to my post in response to your analysis of the paper. I didn't know you were referring to my original comment. Yes, I should modify that to say "medium at best." But again, what's medium is arbitrarily defined - even by statisticians. Whether there's a practical correlation is what's important here.

If you are going to claim that a peer-reviewed article is making a mistake in assigning buckets, even though they use Cohen et al. for the definitions, then back that claim up with a source. I am not going to take a post from @aldol16 over the interpretation provided by a peer-reviewed article unless there is evidence to back it up.

My point is not about predicting outliers. You misunderstand. My point is that statistics cannot be used to argue that a particular individual who scores a 25 on his or her MCAT has any particular odds of scoring above a certain score on Step 1. I believe @gonnif made this statistical point clear in another post (sorry if it wasn't you - I remember the post distinctly, but if it wasn't you, perhaps you remember it too and can link it). Statistics describe a sample or a population as a whole well but do not apply to specific people who earn a certain score on their MCAT. The analogy I love is from physics - Schrödinger's cat. The cat is in the box and it is in a certain state practically, but if you only know that 50% of the cats are alive and 50% of the cats are dead, then you can only say that the cat is both alive and dead (sorry for butchering the physics, physicists) until you open the box. The statistics can only tell you that one variable might predict another with 40% certainty in the sample as a whole, but that means nothing for the individual. For the individual, statistics do not apply. A common fallacy is applying statistics to an individual as something with predictive value - statistics have no predictive value for a specific person, only for the population as a whole.

I believe what you are referring to is an ecological fallacy. The problem is that the analysis is on the individual level, not an aggregate level, so I am unsure whether that criticism applies to as large a degree as you are stating. Hence my argument about variation on the individual level: 90% of a person's USMLE score may be correlated to the MCAT, or -10% may be, but people will have a hard time telling who the outliers in the data will be, and more often than not it will be positive and close to 0.44. The UP data further makes my point.

Uhhh, your data literally shows that those in the 24-26 category have an acceptance rate of 37%. That's a higher rate than undergraduate acceptance at the Ivies. Those in the 21-23 bin also have a ~20% acceptance rate. Higher than most Ivies for undergrad (except maybe Penn but they're really more of a state school (kidding)).
Would you rather I use the <20 bucket? Here is the aggregate data. If they truly didn't care, this effect would not be as prominent. I am really confused about the purpose of the MCAT, then. Shouldn't they just do away with it if it doesn't say something useful about the applicant?
[attached: aggregate acceptance-rate data]
 
This got away from the topic that was initially prompted.

On the note of the initial poster, I am aware of a girl who got into a great school in Michigan with a 16 on the MCAT and an average GPA. Her parents had just donated $X to the school, and here she is...on her way to being a doctor, provided she can pass the Step exams.
 
This got away from the topic that was initially prompted.

On the note of the initial poster, I am aware of a girl who got into a great school in Michigan with a 16 on the MCAT and an average GPA. Her parents had just donated $X to the school, and here she is...on her way to being a doctor, provided she can pass the Step exams.
Is this a newer school?
 
If you are going to claim that a peer-reviewed article is making a mistake in assigning buckets, even though they use Cohen et al. for the definitions, then back that claim up with a source. I am not going to take a post from @aldol16 over the interpretation provided by a peer-reviewed article unless there is evidence to back it up.

Buckets? I'm saying to look at the data for yourself and not to say that it must be true just because Cohen says it is. Look at the numbers. You can spin the words any way you like. The numbers don't change. Is a coefficient of determination of 0.44 a good correlation to you? If it is, then I have no argument with you. You looked at that number and you think it's high, whereas I think it's low. There's no argument there. I simply believe that the MCAT is not a very good predictor of Step 1 scores if it only accounts for ~40% of the variance on those scores. I am not interested in arguing about the semantics of "moderate" or "strong" with you. I am only interested in how you interpret the data point - r^2 = 0.44. If we look at it and interpret it in two different ways, then there's nothing more to discuss.

I believe what you are referring to is an ecological fallacy. The problem is that the analysis is on the individual level, not an aggregate level, so I am unsure whether that criticism applies to as large a degree as you are stating. Hence my argument about variation on the individual level: 90% of a person's USMLE score may be correlated to the MCAT, or -10% may be, but people will have a hard time telling who the outliers in the data will be, and more often than not it will be positive and close to 0.44. The UP data further makes my point.

The point is that it has nothing to do with outliers. Outliers are only outliers when viewed in the context of a data set - a sample or a population, for example. At the individual level, saying that you want to predict who the outlier is (which I agree is impossible) has no meaning because you're dealing with a single data point. The individual either is or he is not. There's no in between. So the individual can take the MCAT and the Step 1, and the MCAT might determine 40% of his Step 1 score, but that number - 40% - means nothing to the individual. Put another way, one might be able to qualitatively say that the MCAT will in part determine any given individual's Step 1 score, but it is impossible to say by how much - the number 40% is only meaningful when describing a sample statistic.

Would you rather I use the <20 bucket? Here is the aggregate data. If they truly didn't care, this effect would not be as prominent. I am really confused about the purpose of the MCAT, then. Shouldn't they just do away with it if it doesn't say something useful about the applicant?

I believe the word you're looking for is "bins." But that's beside the point. You're missing the point. I am not arguing that the MCAT "doesn't say something useful" about the applicant. It does. I'm not saying that adcoms "truely [sic] didnt care." They do. The MCAT is correlated (whatever the strength - we have reached the end of the argument there) with Step 1 score but the correlation cannot be taken as "strong" - even using the words found in the conclusion of that paper (I believe they use "medium" mostly). In any case, I'm not interested in the semantics - again. MCAT explains 40% of the variance of Step 1 scores and therefore is a useful measure. This is where you misunderstand me. I'm not saying the MCAT is useless. The MCAT is one useful metric with its limitations. The limitation being that it does not explain the majority of Step 1 score variance. In other words, it is only one of many factors that determines how well a student does on the Step 1. It accounts for, again, only ~40% of the variance. This is why people don't get admitted solely on the basis of MCAT scores. In fact, there are many important metrics that adcoms use. GPA, community service, and leadership experiences are just as important as the MCAT because those metrics can also determine how successful a student will be in medical school.

Now, why don't med schools do away with the MCAT? That's an excellent question. One of the main reasons must be that the AAMC has a monopoly on it and they don't want to see it go. They'll lose revenue - fees for registering for the test, fees for official study materials, etc. Another reason is that at the end of the day, adcoms need a measure to compare how academically prepared students are who come from very different backgrounds. Notice how I didn't say "adcoms need a measure to predict how well a student will do on Step 1." An academically-prepared student will, of course, tend to do better on the Step 1 but many other factors go into that. In fact, >50% of Step 1 score will be determined by those other factors. Finally, another factor might be that certain rankings take into account MCAT scores when ranking medical schools. So it becomes a game of chicken - who's going to be the first med school to eliminate the MCAT and risk dropping in the ranking (until the rest follow suit)?

In fact, I do know of several deans who want to do away with MCAT and even Step 1. Making Step 1 pass/fail is an excellent way of reducing medical student stress. But then if you ask residency directors, they'll want those scores because they can use it to distinguish applicants - even though someone who scores a 240 on the Step 1 won't necessarily be a worse neurosurgeon than someone who scores a 260.

Put another way, Mount Sinai was one of the first schools to eliminate the MCAT for its FlexMed track. If the MCAT is so all-important for predicting Step 1 success as you claim, then why would they get rid of it? Why are they so confident that they can predict medical student success without the MCAT?
 
Buckets? I'm saying to look at the data for yourself and not to say that it must be true just because Cohen says it is. Look at the numbers. You can spin the words any way you like. The numbers don't change. Is a coefficient of determination of 0.44 a good correlation to you? If it is, then I have no argument with you. You looked at that number and you think it's high, whereas I think it's low. There's no argument there. I simply believe that the MCAT is not a very good predictor of Step 1 scores if it only accounts for ~40% of the variance on those scores. I am not interested in arguing about the semantics of "moderate" or "strong" with you. I am only interested in how you interpret the data point - r^2 = 0.44. If we look at it and interpret it in two different ways, then there's nothing more to discuss.

The point is that it has nothing to do with outliers. Outliers are only outliers when viewed in the context of a data set - a sample or a population, for example. At the individual level, saying that you want to predict who the outlier is (which I agree is impossible) has no meaning because you're dealing with a single data point. The individual either is or he is not. There's no in between. So the individual can take the MCAT and the Step 1, and the MCAT might determine 40% of his Step 1 score, but that number - 40% - means nothing to the individual. Put another way, one might be able to qualitatively say that the MCAT will in part determine any given individual's Step 1 score, but it is impossible to say by how much - the number 40% is only meaningful when describing a sample statistic.

I believe the word you're looking for is "bins." But that's beside the point. You're missing the point. I am not arguing that the MCAT "doesn't say something useful" about the applicant. It does. I'm not saying that adcoms "truely [sic] didnt care." They do. The MCAT is correlated (whatever the strength - we have reached the end of the argument there) with Step 1 score but the correlation cannot be taken as "strong" - even using the words found in the conclusion of that paper (I believe they use "medium" mostly). In any case, I'm not interested in the semantics - again. MCAT explains 40% of the variance of Step 1 scores and therefore is a useful measure. This is where you misunderstand me. I'm not saying the MCAT is useless. The MCAT is one useful metric with its limitations. The limitation being that it does not explain the majority of Step 1 score variance. In other words, it is only one of many factors that determines how well a student does on the Step 1. It accounts for, again, only ~40% of the variance. This is why people don't get admitted solely on the basis of MCAT scores. In fact, there are many important metrics that adcoms use. GPA, community service, and leadership experiences are just as important as the MCAT because those metrics can also determine how successful a student will be in medical school.

Now, why don't med schools do away with the MCAT? That's an excellent question. One of the main reasons must be that the AAMC has a monopoly on it and they don't want to see it go. They'll lose revenue - fees for registering for the test, fees for official study materials, etc. Another reason is that at the end of the day, adcoms need a measure to compare how academically prepared students are who come from very different backgrounds. Notice how I didn't say "adcoms need a measure to predict how well a student will do on Step 1." An academically-prepared student will, of course, tend to do better on the Step 1 but many other factors go into that. In fact, >50% of Step 1 score will be determined by those other factors. Finally, another factor might be that certain rankings take into account MCAT scores when ranking medical schools. So it becomes a game of chicken - who's going to be the first med school to eliminate the MCAT and risk dropping in the ranking (until the rest follow suit)?

In fact, I do know of several deans who want to do away with MCAT and even Step 1. Making Step 1 pass/fail is an excellent way of reducing medical student stress. But then if you ask residency directors, they'll want those scores because they can use it to distinguish applicants - even though someone who scores a 240 on the Step 1 won't necessarily be a worse neurosurgeon than someone who scores a 260.

Put another way, Mount Sinai was one of the first schools to eliminate the MCAT for its FlexMed track. If the MCAT is so all-important for predicting Step 1 success as you claim, then why would they get rid of it? Why are they so confident that they can predict medical student success without the MCAT?

1) You say the correlation is weak.
2) Then you say you never said the correlation is weak.
3) You say the author's bins are arbitrary.
4) Then you never respond to why using established definitions via Cohen et al. is problematic.
5) You claim the MCAT is not an important predictor of medical school success.
6) Then you claim the MCAT is not useless.
7) Then you claim you know of several deans who want to get rid of it, and claim FlexMed proves the MCAT is useless.

Pick a side; make it brief.
 
1) You say the correlation is weak.
2) Then you say you never said the correlation is weak.
3) You say the author's bins are arbitrary.
4) Then you never respond to why using established definitions via Cohen et al. is problematic.
5) You claim the MCAT is not an important predictor of medical school success.
6) Then you claim the MCAT is not useless.
7) Then you claim you know of several deans who want to get rid of it, and claim FlexMed proves the MCAT is useless.

Not my fault you live in a binary world.

1 and 2 are my fault.

With respect to (3), I have said time and again that any qualitative description is arbitrary. What separates "weak" from "moderate"? Unless Cohen is Hercules and decides to physically hold the two apart, it is an arbitrary (though likely reasonable) description. Therefore, I have said to look at the numbers for yourself. That's something you should learn to do whenever you're looking at science (don't mean to be paternalistic here - I tell my students this all the time because it helps them when they're past undergrad and actually doing science).

With respect to (4), I did respond. Qualitative descriptions are always ambiguous. "Weak" or "moderate." "Good" or "Excellent." I asked you to bypass the qualitative descriptions you are fixated on to look at the numbers. You refuse to. I have nothing more to say to you regarding this.

With regard to (5) and (6), you should stop living in a binary world. Just because the MCAT isn't a good predictor of Step 1 scores doesn't mean that it's useless. It's one metric that can be used to measure an applicant's ability in the sciences. But that doesn't determine Step 1 success - at least not 56% of it.

With regard to (7), I argue that even if I concede points (5) and (6) to you, your data doesn't show that adcoms admit or reject students because they believe that a poor MCAT score is a good predictor of a bad Step 1 score (or fail). Students who have higher MCATs may tend to be better overall applicants. To my knowledge, no study has isolated the effect of MCAT on adcom decisions. That's why I cite the example of FlexMed. At least one school believes that the MCAT isn't the best predictor of success because otherwise, they would not have eliminated it as a requirement for that track.
 
B/MD programs have produced uneven results. They are often a way for the undergrad institution to matriculate students with promise.
Whether FlexMed turns out to be a flop or a success remains to be seen.
Even CN"U" has a program designed to trap promising undergrads. In my experience, picking physicians in high school has not been a good methodology for what we expect in the US.
 
Whether FlexMed turns out to be a flop or a success remains to be seen.

FlexMed is based on HuMed, for which there is data. Step 1 scores are only slightly lower for HuMed, but the confidence intervals overlap and are, frankly, quite broad for both. HuMed students are not statistically more likely to fail Step 1 than non-HuMed students.

Challenging Traditional Premedical Requirements as Predictors of Success in Medical School: The Mount Sinai School of Medicine Humanities and Medicine Program
 
FlexMed is based on HuMed, for which there is data. Step 1 scores are only slightly lower for HuMed, but the confidence intervals overlap and are, frankly, quite broad for both. HuMed students are not statistically more likely to fail Step 1 than non-HuMed students.

Challenging Traditional Premedical Requirements as Predictors of Success in Medical School: The Mount Sinai School of Medicine Humanities and Medicine Program
The real issue is not USMLE scores for these programs.
Maturity and acquisition of core competencies do not routinely occur, and yet they are already accepted to medical school.
 
The real issue is not USMLE scores for these programs.

Of course - that part is to address any critics above.

Maturity and acquisition of core competencies do not routinely occur, and yet they are already accepted to medical school.

Quite to the contrary, I would argue - at least for FlexMed and HuMed. The point is to select students who demonstrate those competencies because they feel that that is a better indicator of medical student success than scores. http://alphaomegaalpha.org/pharos/PDFs/2012-1-Kase-Muller.pdf

In fact, that's one of my points. Med schools should move towards competency-based admissions and away from raw scores because MCAT is not so great a predictor of success as certain intangible traits - so-called competencies.
 
Of course - that part is to address any critics above.



Quite to the contrary, I would argue - at least for FlexMed and HuMed. The point is to select students who demonstrate those competencies because they feel that that is a better indicator of medical student success than scores. http://alphaomegaalpha.org/pharos/PDFs/2012-1-Kase-Muller.pdf

In fact, that's one of my points. Med schools should move towards competency-based admissions and away from raw scores because MCAT is not so great a predictor of success as certain intangible traits - so-called competencies.
Sadly, these competencies cannot be expected or identified in the age group from which these programs select.
 
FlexMed is based on HuMed, for which there is data. Step 1 scores are only slightly lower for HuMed, but the confidence intervals overlap and are, frankly, quite broad for both. HuMed students are not statistically more likely to fail Step 1 than non-HuMed students.

Challenging Traditional Premedical Requirements as Predictors of Success in Medical School: The Mount Sinai School of Medicine Humanities and Medicine Program
"Admission decisions are based on high school and college transcripts, two personal essays, three letters of recommendation, SAT scores (minimum verbal score of 650, or ACT equivalent), and two interviews at Mount Sinai.

Once accepted, students must maintain a minimum GPA of 3.5. They forego organic chemistry, physics, calculus, and the MCAT, but they must achieve a minimum grade of B in biology and general chemistry (two semesters each).

After completing their junior year, students are required to spend an eight-week summer term at Mount Sinai. The summer experience includes clinical service rotations in all specialties, seminars in medical topics (e.g., bioethics, health policy, palliative care), and an abbreviated course in the “Principles of Organic Chemistry and Physics Related to Medicine” (six credit-hours for organic chemistry, two credit-hours for physics). This course covers basic principles and complies with the requirement that all graduates of medical schools chartered by the University of the State of New York must have passed courses in these subjects before receiving the MD degree. Students complete weekly examinations, which are graded pass/fail.

On completing their undergraduate degree, accepted students are encouraged to take a year off before matriculating.

During the summer before they matriculate, students may attend an optional Summer Enrichment Program (SEP) that attempts to acclimate incoming HuMed students to the medical school curriculum and environment. Approximately 75% of the matriculating HuMed cohort participate each year. The SEP curriculum includes overviews of biochemistry, anatomy, embryology, cell physiology, and histology. Examinations are self-assessments and are reviewed in class. Students do not receive grades."

So let me get this straight: the solution to removing emphasis on one standardized test (the MCAT) is to... rely upon an earlier standardized test (the SAT).

And everything else reads like an intervention to me. Did the medical students selected through the normal process also receive that intervention? Was there any difference in other resources available to the HuMed students vs. those accepted through normal methods? It seems like the HuMed folks had a three-month head start on medical school acclimation and a summer program in their junior years, yet they still managed to have lower Step scores and to leave school for non-scholarly leaves of absence more often.
 
Sadly, these competencies cannot be expected or identified in the age group from which these programs select.

Why wouldn't you expect a 20-year-old to have these competencies but a 22-year-old to have them?
 
So let me get this straight: the solution to removing emphasis on one standardized test (the MCAT) is to... rely upon an earlier standardized test (the SAT).

Do you want to make the argument that the SAT is correlated to Step 1 scores? Because that's the only way your argument survives here. Mount Sinai thinks that it can select students who will become successful doctors (including being successful on Step 1) without using the MCAT. And so far, they have been successful - from the HuMed data, these students did not fail Step 1 at a higher rate than "traditional" students, as far as statistical analysis can distinguish.

And everything else reads like an intervention to me. Did the medical students selected through the normal process also receive that intervention? Was there any difference in other resources available to the HuMed students vs. those accepted through normal methods? It seems like the HuMed folks had a three-month head start on medical school acclimation and a summer program in their junior years, yet they still managed to have lower Step scores and to leave school for non-scholarly leaves of absence more often.

The intervention in this case had to be part of the design because HuMed students hadn't taken those pre-med courses whereas traditional students had. Making traditional students sit through an accelerated 8-week course covering material they had already learned would be pointless.

The Step 1 score confidence intervals overlap, which makes me question how they got the p value of 0.0039 or so. But consider where these students are coming from and what Step 1 tests: it tests basic science in the context of medicine. Wouldn't you expect scientists to do better just because they have been trained in basic science? The fact that only 6 points separate the two cohorts and that there's a 20-point error in the measurement makes me think that there is a large range of scores in both cohorts.
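
Incidentally, overlapping 95% confidence intervals do not by themselves imply p > 0.05 for the difference in means. A minimal sketch with made-up numbers (not the HuMed data):

```python
# Two cohorts whose 95% CIs overlap can still differ significantly.
from statistics import NormalDist

m1, se1 = 221.0, 1.5  # cohort A mean and standard error (hypothetical)
m2, se2 = 226.0, 1.5  # cohort B mean and standard error (hypothetical)

ci1 = (m1 - 1.96 * se1, m1 + 1.96 * se1)    # (218.1, 223.9)
ci2 = (m2 - 1.96 * se2, m2 + 1.96 * se2)    # (223.1, 228.9)
print("95% CIs overlap:", ci1[1] > ci2[0])  # True

# z-test on the difference of the two means:
se_diff = (se1 ** 2 + se2 ** 2) ** 0.5
z = (m2 - m1) / se_diff
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p:.3f}")          # z = 2.36, p = 0.018 < 0.05
```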

In any case, there's no indication that these students tend to struggle more than their peers academically. Sure, they do take non-scholarly leaves of absence at an 8% higher rate, but you have to consider when they take those leaves. If in the pre-clinical years, then perhaps you're right - they struggle. But if in the latter years, then it could be due to a multitude of other reasons, like questioning whether clinical medicine is for them. What makes me think it's not so much that they can't handle it is that they don't present with any more "serious academic difficulty" than their counterparts. One could argue that this is simply because few med schools fail students, and rarely for more than one class, but without more data, we can't tell yet.
 
Do you want to make the argument that the SAT is correlated to Step 1 scores? Because that's the only way your argument survives here. Mount Sinai thinks that it can select students who will become successful doctors (including being successful on Step 1) without using the MCAT. And so far, they have been successful - from the HuMed data, these students did not fail Step 1 at a higher rate than "traditional" students, as far as statistical analysis can distinguish.
You are missing the forest for the trees. They do not want to abandon the MCAT because of its lack of predictive power; they want to abandon it because of the US News and research arms race for higher MCATs.

There is literally only one study that I could find that relates the SAT to the USMLE; however, it is for Step 2. It did find a positive correlation, and it was in the verbal section.

https://www.ncbi.nlm.nih.gov/pubmed/8615936
Here is another pertinent study that talks about the positive correlation between the MCAT and SAT.
https://www.ncbi.nlm.nih.gov/pubmed/8466617

The intervention in this case had to be part of the design because HuMed students hadn't taken those pre-med courses whereas traditional students had. Making traditional students sit through an accelerated 8-week course covering material they had already learned would be pointless.

The Step 1 score confidence intervals overlap, which makes me question how they got the p value of 0.0039 or so. But consider where these students are coming from and what Step 1 tests: it tests basic science in the context of medicine. Wouldn't you expect scientists to do better just because they have been trained in basic science? The fact that only 6 points separate the two cohorts and that there's a 20-point error in the measurement makes me think that there is a large range of scores in both cohorts.

In any case, there's no indication that these students tend to struggle more than their peers academically. Sure, they do take non-scholarly leaves of absence at an 8% higher rate, but you have to consider when they take those leaves. If in the pre-clinical years, then perhaps you're right - they struggle. But if in the latter years, then it could be due to a multitude of other reasons, like questioning whether clinical medicine is for them. What makes me think it's not so much that they can't handle it is that they don't present with any more "serious academic difficulty" than their counterparts. One could argue that this is simply because few med schools fail students, and rarely for more than one class, but without more data, we can't tell yet.

This is an older study from the program but it might shed some light on their cut-offs. http://journals.lww.com/academicmed...inai_Humanities_and_Medicine_Program_.40.aspx

"The undergraduate science/math background of students entering MSSM through the H&M program consists of one year each of biology and chemistry and a short summer course at MSSM, “Physics and Organic Chemistry Relevant to Medicine.” This differs from the premed science/math requirements for all other students matriculating at MSSM, namely one year each of biology, chemistry, organic chemistry, physics, and math. The data in Table 1 show that a significantly higher proportion of H&M students had at least one course failure in the basic science years than did the students with traditional premed science backgrounds, who were either humanities majors or science majors. Over 75% of the course failures of H&M students occurred in the first semester of year one, where there were nine failures in biochemistry, six in embryology, six in cell biology, and five in gross anatomy (data not shown). Among the 20 H&M students who failed one or more courses, nine students failed multiple courses, with the range being up to four courses. In the second basic science year, the proportion of H&M students with at least one course failure decreased, with no single course having a disproportionate number of failures.

[Table 1]

Compared with their classmates, the H&M students had a higher failure rate on the USMLE Step 1 examination (Table 1), although all these students eventually passed it (data not shown). In an attempt to determine whether failure on the Step 1 examination could be predicted from data available at the time of acceptance into the H&M program, we analyzed the correlation of these students' SAT scores with their performances on the Step 1 examination. Neither Verbal SAT (R2 = 0.08) nor Math SAT (R2 = 0.07) scores correlated with the Step 1 examination score. However, all students who failed the Step 1 examination had Verbal SAT scores ≤ 650."

They did not find a correlation; however, they did implement somewhat of a cutoff for SAT verbal scores in their selection criteria for future cohorts. So they are literally doing what this entire conversation started off about: they are picking a number below which they believe the risk of failure on a standardized test is too high. That is in effect the argument I made when saying adcoms probably rely on an arbitrary MCAT cutoff for risk of failure in med school. Why aren't they believing in the inability of the test to predict future performance here, like you would have us believe? They are substituting one standardized test for another in hopes of avoiding the arms race for MCAT scores, for school-rank jockeying, and perhaps a more interesting class. As if US News will just roll over; they will probably start publishing the SAT data, as they undoubtedly do for colleges, leading to the same problems.

All that headache for somewhat lackluster results; I am still scratching my head as to the underlying benefit to society and how it improves patient outcomes or physician performance. The MCAT, on the other hand, is pretty simple: score greater than a 32 and you are in all likelihood going to be able to complete medical school without a problem. You could eliminate prereqs and just rely on the MCAT score. A few schools have actually gone that route.

A more interesting criticism of the system would have been to question the efficacy of the USMLE in predicting MD performance and quality of care. This is the last I will say of this.
 
You are missing the forest for the trees. They do not want to abandon the MCAT because of its lack of predictive power; they want to abandon it because of the US News and research arms race for higher MCATs.

Uhhh, probably not a good move if that's true. US News rankings are based primarily on research - that's why nobody is ever going to beat HMS with its four pre-eminent academic medical centers. The MCAT is weighted 13% in the research rankings and only about 10% in the primary care rankings (http://www.usnews.com/education/best-graduate-schools/articles/medical-schools-methodology).

Compared with their classmates, the H&M students had a higher failure rate on the USMLE Step 1 examination (Table 1), although all these students eventually passed it (data not shown). In an attempt to determine whether failure on the Step 1 examination could be predicted from data available at the time of acceptance into the H&M program, we analyzed the correlation of these students' SAT scores with their performances on the Step 1 examination. Neither Verbal SAT (R2 = 0.08) nor Math SAT (R2 = 0.07) scores correlated with the Step 1 examination score. However, all students who failed the Step 1 examination had Verbal SAT scores ≤ 650."

Why are you citing an almost two-decade-old study when a newer one is clearly available and shows data to the contrary? That is, there is no statistical difference between HuMed students and their traditional peers in USMLE Step 1 failure rate. I cited the newer study above.

They did not find a correlation; however, they did implement somewhat of a cutoff for SAT verbal scores in their selection criteria for future cohorts. So they are literally doing what this entire conversation started off about: they are picking a number below which they believe the risk of failure on a standardized test is too high. That is in effect the argument I made when saying adcoms probably rely on an arbitrary MCAT cutoff for risk of failure in med school. Why aren't they believing in the inability of the test to predict future performance here, like you would have us believe? They are substituting one standardized test for another in hopes of avoiding the arms race for MCAT scores, for school-rank jockeying, and perhaps a more interesting class. As if US News will just roll over; they will probably start publishing the SAT data, as they undoubtedly do for colleges, leading to the same problems.

No, you're ignoring the fact that there are likely many confounding variables among students who have verbal SAT <= 650. In order to show that A has predictive power over B, there has to be a graded dose-response relationship. This is why dose-response curves are so important in medicinal chemistry. It means nothing if your drug works at a 60 mg dose but then drops off completely at everything below that; that just means your drug isn't very predictable, and pharmaceutical executives understandably don't like that. They would say that there's no dose-response relationship here on the SAT. 650 is a high score on the SAT. In fact, it's the 89th percentile. Does that surprise you?

It surprises me. Basically, they're saying that there's no relationship between candidates' SAT scores and USMLE Step 1 score but everybody who failed the Step 1 was below the 89th percentile on the SAT. In a group of 100 people, that covers 88 people of the group. That's like saying, "Everybody who was rejected from med school A was a U.S. citizen." Well, that's a bit obvious since that includes most people who applied to that school in the first place and stacks the deck in your favor. This only emphasizes the point that the SAT is a very poor predictor of Step 1 performance.
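
A quick back-of-the-envelope sketch of how much that stacks the deck (the failure counts here are hypothetical, since the cohort's n isn't quoted above):

```python
# If verbal SAT were completely unrelated to Step 1 failure, how often would
# *every* failing student still fall at or below 650 (roughly the 89th percentile)?
p_below = 0.89        # fraction of test takers at or below 650, per the post
for k in (3, 5, 10):  # hypothetical numbers of Step 1 failures in a cohort
    print(f"{k} failures, all <= 650 by pure chance: {p_below ** k:.0%}")
# 3 -> 70%, 5 -> 56%, 10 -> 31%: quite likely even with zero predictive power.
```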

Also, for the reason I mentioned above, med schools probably aren't as concerned about MCAT average as you think (except maybe mid- to low-tier schools). MCAT score is weighted only ~10% in the USNews ranking. Research-related factors account for the lion's share. That means that schools get a lot more bang for their buck in focusing on getting more NIH funding than raising the average MCAT score of its students.

A more interesting criticism of the system would have been to question the efficacy of the USMLE in predicting MD performance and quality of care. This is the last I will say of this.

That would be interesting and I believe there is no data on this except in specific specialties (OB/GYN comes to mind). It's hard to measure and I suspect there is no effect at all.
 