If you are going to claim that a peer-reviewed article is making a mistake in assigning buckets, even though they use Cohen et al. for the definition, then back that claim up with a source. I am not going to take a post from @aldol16 over the interpretation provided by a peer-reviewed article unless there is evidence to back it up.
Buckets? I'm saying to look at the data for yourself instead of taking "Cohen says so" as the final word. Look at the numbers. You can spin the words any way you like; the numbers don't change. Is an r^2 of 0.36 a good correlation to you? If it is, then I have no argument with you. You looked at that number and you think it's high, whereas I think it's low. There's no argument there. I simply believe that the MCAT is not a very good predictor of Step 1 scores if it accounts for only ~36% of the variance in those scores. I am not interested in arguing about the semantics of "moderate" or "strong" with you. I am only interested in how you interpret the data point - r^2 = 0.36. If we look at it and interpret it in two different ways, then there's nothing more to discuss.
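To put a number on that: r^2 = 0.36 means the correlation itself is r = 0.6. If you want to see the arithmetic play out, here's a quick simulation - the scores are made up (standardized, not real MCAT/Step 1 scales), purely to illustrate what "explains ~36% of the variance" means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized scores with a built-in true correlation of 0.6
n = 10_000
mcat = rng.normal(size=n)
step1 = 0.6 * mcat + np.sqrt(1 - 0.6**2) * rng.normal(size=n)  # unit variance

r = np.corrcoef(mcat, step1)[0, 1]
print(f"r   = {r:.2f}")     # ~0.60
print(f"r^2 = {r**2:.2f}")  # ~0.36, i.e. ~36% of Step 1 variance "explained"
```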
I believe what you are referring to is an ecological fallacy. The problem is that the analysis is at the individual level, not an aggregate level, so I am unsure that this criticism applies to the degree you are stating. Hence my argument about variation at the individual level: 90% of one person's USMLE score may be correlated with the MCAT and -10% of another's, but people will have a hard time telling who the outliers in the data will be, and more often than not the correlation will be positive and close to 0.44. The UP data further makes my point.
The point is that it has nothing to do with outliers. Outliers are only outliers when viewed in the context of a data set - a sample or a population, for example. At the individual level, saying that you want to predict who the outlier is (which, I agree, is impossible) has no meaning because you're dealing with a single data point. The individual either is or he is not. There's no in between. So an individual can take the MCAT and the Step 1, and at the sample level the MCAT might "determine" ~36% of his Step 1 score, but that number means nothing to the individual. Put another way, one might be able to say qualitatively that the MCAT will in part determine any given individual's Step 1 score, but it is impossible to say by how much - the ~36% is only meaningful when describing a sample statistic.
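To make that concrete: even with the sample-level r^2 in hand, the best possible prediction for any one person still leaves most of the spread untouched. Same kind of made-up, standardized data as before:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical setup: standardized scores with r = 0.6
n = 10_000
mcat = rng.normal(size=n)
step1 = 0.6 * mcat + 0.8 * rng.normal(size=n)

# Best linear prediction of Step 1 from MCAT (slope = r for standardized scores)
pred = 0.6 * mcat
print(f"total SD of Step 1:       {np.std(step1):.2f}")        # ~1.00
print(f"SD around the prediction: {np.std(step1 - pred):.2f}") # ~0.80
```

The leftover spread is sqrt(1 - 0.36) = 0.8, so knowing someone's MCAT shrinks the uncertainty about his Step 1 score by only about 20% - the ~36% describes the sample, not the person.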
Would you rather I use the <20 bucket? Here is the aggregate data. If they truly didn't care, this effect would not be as prominent. I am really confused about the purpose of the MCAT then. Shouldn't they just do away with it if it doesn't say something useful about the applicant?
I believe the word you're looking for is "bins." But that's beside the point, and you're missing mine. I am not arguing that the MCAT "doesn't say something useful" about the applicant. It does. I'm not saying that adcoms "didn't care." They do. The MCAT is correlated with Step 1 score (whatever the strength - we have reached the end of the argument there), but the correlation cannot be called "strong" even by the words in that paper's own conclusion (I believe they mostly use "medium"). In any case, I'm not interested in the semantics - again. The MCAT explains ~36% of the variance in Step 1 scores and is therefore a useful measure.

This is where you misunderstand me. I'm not saying the MCAT is useless. The MCAT is one useful metric, with its limitations - the limitation being that it does not explain the majority of Step 1 score variance. In other words, it is only one of many factors that determine how well a student does on the Step 1. It accounts for, again, only ~36% of the variance. This is why people don't get admitted solely on the basis of MCAT scores. In fact, adcoms use many important metrics: GPA, community service, and leadership experience are just as important as the MCAT because they can also indicate how successful a student will be in medical school.
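And since the bucket plots keep coming up: averaging within bins washes out the individual-level noise, so a correlation computed on bin means will typically look far stronger than the person-level one. A sketch with the same made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical individual-level data with r = 0.6
n = 10_000
mcat = rng.normal(size=n)
step1 = 0.6 * mcat + 0.8 * rng.normal(size=n)

# Group people into 10 equal-size MCAT bins and correlate the bin means
edges = np.quantile(mcat, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(mcat, edges) - 1, 0, 9)
mcat_means = np.array([mcat[idx == b].mean() for b in range(10)])
step1_means = np.array([step1[idx == b].mean() for b in range(10)])

print(f"individual-level r: {np.corrcoef(mcat, step1)[0, 1]:.2f}")        # ~0.60
print(f"bin-mean r:         {np.corrcoef(mcat_means, step1_means)[0, 1]:.2f}")  # ~1.00
```

That's the ecological-fallacy trap in miniature - the binned plot looks nearly deterministic even though the underlying individual-level correlation is still the same moderate 0.6.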
Now, why don't med schools do away with the MCAT? That's an excellent question. One of the main reasons must be that the AAMC has a monopoly on it and doesn't want to see it go - they'd lose revenue from registration fees, official study materials, and so on. Another reason is that, at the end of the day, adcoms need a common yardstick to compare the academic preparation of students who come from very different backgrounds. Notice that I didn't say "adcoms need a measure to predict how well a student will do on Step 1." An academically prepared student will, of course, tend to do better on the Step 1, but many other factors go into that - on these numbers, the remaining ~64% of Step 1 variance. Finally, another factor might be that certain rankings take MCAT scores into account when ranking medical schools. So it becomes a game of chicken: who's going to be the first med school to eliminate the MCAT and risk dropping in the rankings (until the rest follow suit)?
In fact, I do know of several deans who want to do away with the MCAT and even Step 1. Making Step 1 pass/fail is an excellent way of reducing medical student stress. But if you ask residency directors, they'll want those scores because they can use them to distinguish applicants - even though someone who scores a 240 on Step 1 won't necessarily be a worse neurosurgeon than someone who scores a 260.
Put another way, Mount Sinai was one of the first schools to eliminate the MCAT for its FlexMed track. If the MCAT is as all-important for predicting Step 1 success as you claim, why would they get rid of it? Why are they so confident that they can predict medical student success without it?