Step 1 exam score variation vs MCAT


How would you compare scores of 210 vs. 220 vs. 230 vs. 240 to, say, MCAT scores? I've heard that an MCAT score of about 8 per section is the average and that the USMLE average is about 215-218. If that's true, I guess a 215-218 on the USMLE would be similar to a 24 on the MCAT. Any insights?
 
How would you compare scores of 210 vs. 220 vs. 230 vs. 240 to, say, MCAT scores? I've heard that an MCAT score of about 8 per section is the average and that the USMLE average is about 210 (true?). If that's true, I guess a 210 on the USMLE would be similar to an 8 on the MCAT. Any insights?
Or a 210 would be similar to 10s (a 30) on the MCAT, since the average matriculant scores 30-32.
 
An 8 is the average for everyone taking it, not the average of those who get in.

There is a correlation between the MCAT and the USMLE, but it's not the end-all, be-all. A good, solid effort could take a person with a mediocre MCAT and get them a good USMLE score.
 
There is word going around that Step 1 scores have a stronger correlation with the writing score on the MCAT.
 
Verbal perhaps. Writing, not a chance. Way too subject to variation from day to day, topic to topic, and grader to grader to have any real predictive power.
 
Verbal perhaps. Writing, not a chance. Way too subject to variation from day to day, topic to topic, and grader to grader to have any real predictive power.

No, not verbal - sorry. BS is probably the strongest. But the writing shows thought processes and analytical ability - much of which is required on step 1 - especially the more recent exams with more clinical info and more thought-oriented questions/answers. If you knew how to do the writing section, the subject matter or topic didn't make a difference. Every prompt could easily be answered using the same algorithm. Grading was as objective as an essay could be - if you followed the prompt, you would have to go out of your way to get lower than an R (assuming everything else was in check - spelling, grammar, handwriting, etc). Writing does correlate strongly with performance in clinical years and residency. Unfortunately, too many med students are admitted based on numerical MCAT scores and that's why wayy too many doctors are socially inept.
 
I have served on an admissions committee and the score they always focus on is verbal, not writing (or bio/physics). Whenever an mcat score is reported, they always say, 35-12-verbal. The writing score is almost entirely ignored, I believe because of the variance with the graders, etc.

I highly doubt writing scores have any correlation to step 1 (regurgitation), clinical or social abilities. They're probably about as accurate as behavioral science questions are in assessing compassion.
 
I would say Bio had the highest correlation for me, but the USMLE is a different test. The MCAT is much more about learning from a passage and answering questions on the spot, while the USMLE is about learning before the test, figuring out what the question is asking, and answering from the knowledge stored in your head.

The USMLE was much easier for me. Plus, I'm not a speed reader, and I like short, concise sentences without all the fluff.
USMLE: 239; MCAT: 24
 
Yeah, I see very little comparing MCAT Verbal or Writing to Step 1... (MCAT of 35 with only a 10 in Verbal and a bad writing score, and a Step 1 of 254...). From what I have heard, Step 1 correlates much better with shelf grades if you were fortunate (or not fortunate, depending on your perspective) enough to take them in the first two years. My school did, and I was typically 1-2 standard deviations above the national average and did the same on Step 1. But a friend of mine who nearly failed all the shelf exams (meaning below the national average on all of them, because my school doesn't understand how to interpret shelf scores) scored a 244 on Step 1, so it's not an end-all, be-all...
 
No, not verbal - sorry. BS is probably the strongest. But the writing shows thought processes and analytical ability - much of which is required on step 1 - especially the more recent exams with more clinical info and more thought-oriented questions/answers. If you knew how to do the writing section, the subject matter or topic didn't make a difference. Every prompt could easily be answered using the same algorithm. Grading was as objective as an essay could be - if you followed the prompt, you would have to go out of your way to get lower than an R (assuming everything else was in check - spelling, grammar, handwriting, etc). Writing does correlate strongly with performance in clinical years and residency. Unfortunately, too many med students are admitted based on numerical MCAT scores and that's why wayy too many doctors are socially inept.

Admissions committees disagree with you. Verbal is thought to have the best correlation with the type of analytical and applied thinking you describe. And I thought there was some research done that showed the best correlation specifically with the verbal score... it's in a thread here, somewhere (well, probably on the pre-allo forum or the MCAT forum). You'd be surprised at how much variation there can be in writing and in grading. There are a lot of people who can memorize a book and apply it to answer just about anything you throw at them, yet can barely spell enough words correctly for you to understand them when they write. I agree that being able to write well is important; I just don't think it has any sort of correlation with Step 1 performance, and I certainly don't believe the grading is as uniform and objective as you think. I know people who are really strong writers who got a P for whatever reason, and I've seen essays written by people who got an S and am just astonished at how poor their grammar and organization are.
 
I was going to post a list of scores just using the avg and SD for each of these tests, but when I started to write them out the numbers came out all bonkers.

For example, using numbers stolen from Wikipedia (MCAT avg = 30, SD = 2.2; USMLE avg = 218, SD = 23) gives things like

+4 SD = MCAT 38.8 = Step 1 310
and
-2 SD = MCAT 25.6 = Step 1 172

Which seems really wacky. Maybe the SD for Step 1 is something smaller, like 12 or 15? Or maybe the scores compress at the edges?
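For anyone who wants to check the arithmetic, here's a minimal sketch of the z-score mapping being done by hand above. The means and SDs are just the Wikipedia-sourced figures quoted in this post (assumptions, not official values), and, as a later reply points out, the 2.2 is probably a per-section SD rather than the SD of the composite score.

```python
# Minimal sketch: map an MCAT score to the Step 1 score that sits the same
# number of standard deviations from the mean. Figures are the ones quoted
# above (assumptions, not official values).
MCAT_MEAN, MCAT_SD = 30, 2.2      # 2.2 is likely a per-section SD (see later post)
STEP1_MEAN, STEP1_SD = 218, 23

def mcat_to_step1(mcat):
    """Return the Step 1 score at the same z-score as the given MCAT score."""
    z = (mcat - MCAT_MEAN) / MCAT_SD
    return STEP1_MEAN + z * STEP1_SD

for n_sd in (-2, 0, 2, 4):
    mcat = MCAT_MEAN + n_sd * MCAT_SD
    print(f"{n_sd:+d} SD: MCAT {mcat:.1f} -> Step 1 {mcat_to_step1(mcat):.0f}")
# Reproduces the "wacky" numbers above: +4 SD -> 38.8 -> 310, -2 SD -> 25.6 -> 172.
```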
 
Standard deviations break down on most standardized tests once you're past 1 or 2 SDs. It's about what the test is designed to measure, and it would be a waste of time to design a test that would reliably separate the 99th percentile from the 99.9th.
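A quick illustration of why, assuming scores are roughly normally distributed (a simplification, not a claim about either test): the further out you go, the fewer test-takers there are for the test to discriminate between.

```python
# Rough sketch (assumes an approximately normal score distribution): how rare
# it is to be more than z standard deviations above the mean.
from scipy.stats import norm

for z in (1, 2, 3, 4):
    frac = norm.sf(z)   # survival function: fraction of test-takers above +z SD
    print(f"above +{z} SD: {frac:.4%} of test-takers (about 1 in {1 / frac:,.0f})")
# above +1 SD: ~15.9%, +2 SD: ~2.3%, +3 SD: ~0.13%, +4 SD: ~0.003% --
# there just aren't enough people (or test items) out there to measure reliably.
```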
 
Every prompt could easily be answered using the same algorithm. Grading was as objective as an essay could be - if you followed the prompt, you would have to go out of your way to get lower than an R (assuming everything else was in check - spelling, grammar, handwriting, etc).

Your logic is ridiculous. If you can just follow a cookie cutter method to make an S on the mcat writing section, then it disproves your theory that the writing section demonstrates that one has masterful analytical and critical thinking abilities.
 
I have served on an admissions committee and the score they always focus on is verbal, not writing (or bio/physics). Whenever an mcat score is reported, they always say, 35-12-verbal. The writing score is almost entirely ignored, I believe because of the variance with the graders, etc.

I highly doubt writing scores have any correlation to step 1 (regurgitation), clinical or social abilities. They're probably about as accurate as behavioral science questions are in assessing compassion.


This is all nice to hear, but unfortunately no one is really talking about med school admissions. Past studies have shown that BS correlated best with step 1 scores, but analyses are showing that WS is becoming more important in predicting step 1 scores as the test evolves. "35-12-verbal" is very cute and all...unfortunately it's just not applicable to this conversation.
 
Seems like older (~15 years ago) studies showed a stronger correlation with the VR scores, while newer studies are showing the strongest correlation with BS scores. Writing section does not seem to correlate well to step 1 grades according to the AAMC research, but does correlate better with clinical grades than any other MCAT section.
 
Anybody have the references for any of the papers these conclusions are coming from?

It seems all well and good to pontificate on what test, or part of a test, would correlate with Step 1.

But shouldn't someone just crunch the numbers and just see what correlates?
 
Your logic is ridiculous. If you can just follow a cookie cutter method to make an S on the mcat writing section, then it disproves your theory that the writing section demonstrates that one has masterful analytical and critical thinking abilities.

First paragraph - describe when the prompt is true
Second paragraph - describe when it's not true
Third paragraph - make a rule

Coincidence that this sounds a lot like everything we do in medicine (differentials, etc)? I don't think so.

There is your cookie cutter. Competent writers and thinkers will have no trouble formulating an argument based on those guidelines. I went up 9 points from my first MCAT to my second...got Ts on both writing sections. It is easy to improve the numerical sections through effort in less than a month - just coughing stupid information back. Not so with the writing section. The logic is not ridiculous, as the ideas are not mutually exclusive. It's those who don't understand the simplicity and the straightforward yet analytical thinking required to succeed on it that become flustered and begin to spew out garbage in its written form (sound familiar?). The WS requires the writer to integrate relevant thoughts and ideas into a coherent argument within a given structural framework - something that is much more applicable to life and medicine in general than any other section of the MCAT. Most med students deny any importance of the WS because it threatens their evidently over-inflated ego.
 
Seems like older (~15 years ago) studies showed a stronger correlation with the VR scores, while newer studies are showing the strongest correlation with BS scores. Writing section does not seem to correlate well to step 1 grades according to the AAMC research, but does correlate better with clinical grades than any other MCAT section.

Bear in mind that the correlation shown in any of these studies exists but is pretty small -- I wouldn't get excited about the Step because you did well on the MCAT or vice versa. There also have been major changes to both the MCAT and Step since all these studies were conducted, not to mention the composition of the student body (lots more women, minorities, nontrads and nonsci majors in the mix) so it's pretty doubtful that such studies can even be extrapolated to today's tests or med school crowd.

A number of schools tell their students that there is no better predictor of how you will do on Step 1 than how you did in your second year of med school.
 
This is all nice to hear, but unfortunately no one is really talking about med school admissions. Past studies have shown that BS correlated best with step 1 scores, but analyses are showing that WS is becoming more important in predicting step 1 scores as the test evolves. "35-12-verbal" is very cute and all...unfortunately it's just not applicable to this conversation.

Here, let me spell it out for you. You have been suggesting that writing scores are the strongest predictor of step1 scores (which correlate with good residency matches). You've also stated with no evidence that writing scores also "correlate strongly with performance in clinical years and residency".

Do you think, maybe, just maybe, if writing scores so strongly correlate with being a fantastic test-taker and doctor, that admissions committees would focus more attention towards accepting students with higher writing scores in order to produce better doctors and have better match results?

I'm not speaking for all school committees, but at my school they specifically focus on the verbal score as the best predictor of ability to handle medical school coursework. Maybe there's a reason for that.

Then again, maybe we're have a bunch of clueless idiots running the show at my school. My school only averages 10 points above the step1 national average... Obviously we aren't informed to your fantastic studies.
 
Then again, maybe we're have a bunch of clueless idiots running the show at my school.

+1

Newsflash: Admissions committees know NOTHING about what it takes to become a physician. PS - studies have shown BS to be a much better predictor of preclinical performance and step 1 performance. WS has been shown to predict success in clinical years and beyond. I can find the studies, but I'm making the (dangerous) assumption that you are capable of finding it on pubmed. It's possible that at my school, as is the case at many other schools apparently, "we're have a bunch" of clueless idiots running the show as well.
 
+1

Newsflash: Admissions committees know NOTHING about what it takes to become a physician. PS - studies have shown BS to be a much better predictor of preclinical performance and step 1 performance. WS has been shown to predict success in clinical years and beyond. I can find the studies, but I'm making the (dangerous) assumption that you are capable of finding it on pubmed. It's possible that at my school, as is the case at many other schools apparently, "we're have a bunch" of clueless idiots running the show as well.

Ahh, the old "you're an idiot because you made a typo" argument. Well played, sir. I bow down to your superior intellect.

Since you are so adamant about it, I will take your word for it that you are correct about the studies. I was wrong, and I was silly for getting cute and regurgitating what was told to us repeatedly by our dean. What a ridiculous assumption on my part, to believe our dean over a random internet poster. After all, what would a committee of physicians know about what it takes to be a physician. Next time before offering an opinion on SDN I will be sure to grammar and pubmed-check all of my sources.

You are the man.
 
No, not verbal - sorry. BS is probably the strongest. But the writing shows thought processes and analytical ability - much of which is required on step 1 - especially the more recent exams with more clinical info and more thought-oriented questions/answers. If you knew how to do the writing section, the subject matter or topic didn't make a difference. Every prompt could easily be answered using the same algorithm. Grading was as objective as an essay could be - if you followed the prompt, you would have to go out of your way to get lower than an R (assuming everything else was in check - spelling, grammar, handwriting, etc). Writing does correlate strongly with performance in clinical years and residency. Unfortunately, too many med students are admitted based on numerical MCAT scores and that's why wayy too many doctors are socially inept.

😆
 
Based on what I have heard from admissions committees, the Biological Science score has the highest correlation to Step 1 scores (which seems logical to me).
 
I think the MCAT and USMLE are quite different tests.

The MCAT was far less knowledge based and more thinking/standardized test taking abilities based. The USMLE (so far, I have not written it yet) seems to require far more knowledge than thinking. Very few questions came down to "oh shoot, I totally didn't think about that correctly" and 95% of them are "well, I have no idea what receptor/2nd messenger/enzyme/precursor/product/translocation/side-effect is the right one here".
 
Here's a link to some studies the AAMC has done on MCAT/Step1 correlation as well as a meta study that came out last year

http://www.aamc.org/students/mcat/research/bibliography/start.htm

http://www.ncbi.nlm.nih.gov/pubmed/...ez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum

If you actually take the time to read the papers, I think the take-home is that the MCAT has a moderate correlation with Step 1, and the BS subtest has the strongest correlation of any of the MCAT subtests. As with all stats, it would be foolish for any one person to bury their head in the sand because of a low MCAT, and equally foolish for a person with a high MCAT to think they are automatically going to ace Step 1. A high MCAT indicates good test-taking skills, the patience and tenacity to integrate a large body of material in a short amount of time, and some native ability. But those results may not be duplicated on Step 1 if the same dedication, skill, and blessings aren't applied in medical school.

I agree that performance in med school courses is probably a stronger predictor of Step 1 success than the MCAT, but in terms of admissions, the most objective tool the committees have to predict Step 1 performance is the MCAT. And of course none of this has anything to do with how good a doctor anyone will be.
 
I think the MCAT and USMLE are quite different tests.

The MCAT was far less knowledge based and more thinking/standardized test taking abilities based. The USMLE (so far, I have not written it yet) seems to require far more knowledge than thinking. Very few questions came down to "oh shoot, I totally didn't think about that correctly" and 95% of them are "well, I have no idea what receptor/2nd messenger/enzyme/precursor/product/translocation/side-effect is the right one here".

I suspect once you've taken the Step 1 you are going to think the opposite. Far more of the USMLE questions are two step -- you know the disease or condition they are describing from the question, but that isn't actually the question, the question is something like, given that disease/condition, what is the most common side effect of the medication you would want to administer, or what other pathology might you expect to see elsewhere in the patient. The MCAT was usually more straightforward, and thus IMHO less thinking intensive, more plug and chug.
 
I agree that performance in med school courses is probably a stronger predictor of Step 1 success than the MCAT, but in terms of admissions, the most objective tool the committees have to predict Step 1 performance is the MCAT. And of course none of this has anything to do with how good a doctor anyone will be.

True, but precisely because of the fairly "moderate correlation" you describe, most med schools don't go too crazy about simply taking the highest MCATs over all other considerations. (Also bear in mind my prior post -- the tests and the student bodies have changed since the dates of these studies so you cannot really extrapolate the conclusions). You will see the dude with the 35+ and amazing ECs beating out the guy with the 40 and little else pretty frequently. Instead, they want folks who score decently, but achieve in multiple aspects of the application. Because at the core, they are trying to select people who will be a good fit for their institution and become good physicians, not just folks who might score well on the licensing exam based on prior standardized test history. Because there is no demonstrated correlation between how you score on the boards and how good a doctor you are. Doctoring is about performing in a service industry -- it is about patient care; sure it's helpful to have a good base of knowledge for this. But you have to be technically proficient in the procedures and physical exams you perform, and inter-personally skilled for clinical work. And the initial Step of the licensing exam tests neither of these things. Nor should it -- it is just a hurdle people need to get past to demonstrate they learned what they were supposed to in the first two years of med school. But the NBME has serious reservations about it being used for more than this (and has plans to revamp it because of a misplaced use by residency programs). And a lot of med schools agree and use the MCAT more as a threshold tool than an absolute "higher means better board scores" tool.
 
I suspect once you've taken the Step 1 you are going to think the opposite. Far more of the USMLE questions are two step -- you know the disease or condition they are describing from the question, but that isn't actually the question, the question is something like, given that disease/condition, what is the most common side effect of the medication you would want to administer, or what other pathology might you expect to see elsewhere in the patient. The MCAT was usually more straightforward, and thus IMHO less thinking intensive, more plug and chug.
I see what you mean but when I get tripped up on those multi step questions, the part that is missing is the receptor/2nd messenger/enzyme/precursor/product/translocation/side-effect. It's really a moot point I guess =)
 
I'm the OP... thanks for your input.
One of the reasons I ask this question is that I have a good feel for MCAT scores, the way everyone here has a feel for the value of a dollar and what it can buy. But since I'm new to the world of the USMLE, I have no idea what a 10-point spread represents, the way you'd be unfamiliar with the value of a foreign currency.

I look at how the mean USMLE score for radiology is 235, while IM is around 222 and family med is around 210 (around a 10-point difference from rads to IM and from IM to FM). Is a 10-point score difference as significant as, say, 1 point on the MCAT, in your opinion?
 
Well, the difference from family med to radiology is far, far greater than the difference between a 30 and a 32 on the MCAT.
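To put rough numbers on that, here's a small back-of-the-envelope sketch in the same spirit as the SD discussion earlier in the thread. The SDs used are approximate assumptions (Step 1 SD around 22, composite MCAT SD around 6.5), not official published figures.

```python
# Back-of-the-envelope comparison of score gaps in SD units.
# The SDs below are rough assumptions, not official values.
STEP1_SD = 22     # assumed Step 1 standard deviation
MCAT_SD = 6.5     # assumed composite MCAT standard deviation

step1_gap = 10 / STEP1_SD      # roughly the rads-to-IM or IM-to-FM gap mentioned above
mcat_gap_1 = 1 / MCAT_SD       # a 1-point MCAT difference
mcat_gap_3 = 3 / MCAT_SD       # a 3-point MCAT difference

print(f"10 Step 1 points ~ {step1_gap:.2f} SD")
print(f"1 MCAT point     ~ {mcat_gap_1:.2f} SD")
print(f"3 MCAT points    ~ {mcat_gap_3:.2f} SD")
# On these assumptions, 10 Step 1 points is roughly 0.45 SD -- closer to a
# 3-point MCAT gap than to a 1-point gap, consistent with the reply above.
```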
 
(Also bear in mind my prior post -- the tests and the student bodies have changed since the dates of these studies so you cannot really extrapolate the conclusions).

Well we've had this argument before...I don't think the student bodies have changed that much in the last few years and while the tests have changed, the AAMC is still publishing those studies on their website so they must think there is some validity to them, even if you don't. I would be interested to see a study of more recent scores though.

I think we are in agreement on everything else.
 
Well we've had this argument before...I don't think the student bodies have changed that much in the last few years and while the tests have changed, the AAMC is still publishing those studies on their website so they must think there is some validity to them, even if you don't. I would be interested to see a study of more recent scores though.

I think we are in agreement on everything else.

If it were really just "the last few years", I might agree with you. But if you look over those studies on the AAMC website (which are what was studied in the 2007 meta analysis -- no new data), you will see that the most recent ones (published in 2005) actually covered folks who started med school in 1992-3, meaning they took the MCAT a year or two prior. That's 15 years, which is an eternity in terms of trying to still hold this data out as valid. That the "AAMC is still publishing those studies on their website" isn't exactly critical reasoning for supporting the validity of these studies. The AAMC is responsible for the MCAT, so of course they have a vested interest in suggesting it is of value. But if you read beyond the journal article titles, you will see that this is more of an historical perspective than something that ought to be extrapolated to the test and student body in existence today. I've got to wonder whether the lack of recent studies (as opposed to meta analysis of old data) is indicative of the fact that the AAMC doubts it will ever improve upon the historical correlation they showed back in the early 90s.
 
I see what you mean but when I get tripped up on those multi step questions, the part that is missing is the receptor/2nd messenger/enzyme/precursor/product/translocation/side-effect. It's really a moot point I guess =)

I have to agree. The way I feel about these tests is that the MCAT is probably a better logic/intelligence test, whereas Step 1 is more of a measure of hard work and dedication. If you're fairly badass, you can rock the MCAT with minimal studying, whereas unless you have a photographic memory, you need to have put in serious work during your preclinical years and serious cramming prior to the exam to rock Step 1. Some people could study for years and never get a 37 on the MCAT. With Step 1, it seems you can compensate for a natural lack of ability through hard work.

I've never really felt the 2 or 3 step jumps on Step1 or the qbanks required much thinking. Most of the time when I get something wrong after a jump it's an "oh, that's something trivial I haven't memorized yet" moment.
 
If it were really just "the last few years", I might agree with you. But if you look over those studies on the AAMC website (which are what was studied in the 2007 meta analysis -- no new data), you will see that the most recent ones (published in 2005) actually covered folks who started med school in 1992-3, meaning they took the MCAT a year or two prior. That's 15 years, which is an eternity in terms of trying to still hold this data out as valid. That the "AAMC is still publishing those studies on their website" isn't exactly critical reasoning for supporting the validity of these studies. The AAMC is responsible for the MCAT, so of course they have a vested interest in suggesting it is of value. But if you read beyond the journal article titles, you will see that this is more of an historical perspective than something that ought to be extrapolated to the test and student body in existence today. I've got to wonder whether the lack of recent studies (as opposed to meta analysis of old data) is indicative of the fact that the AAMC doubts it will ever improve upon the historical correlation they showed back in the early 90s.

We've been over this before:

aamc.org/students/mcat/research/bibliography/start.htm

Basco, W.T., Jr., Way, D.P., Gilbert, G.E., & Hudson, A. (2002). Undergraduate Institutional MCAT Scores as Predictors of USMLE Step 1 Performance. Academic Medicine, 77, S13-S16.
PURPOSE: The purpose of this study was to explore the use of institutional MCAT scores (or MCAT scores averaged across all students from an undergraduate institution) as a measure of selectivity in predicting medical school performance. Using data from two medical schools, this study tested the hypothesis that employing MCAT scores aggregated by undergraduate institution as a measure of selectivity improves the prediction of individual students' performances on the first sitting of the United States Medical Licensing Examination Step 1 (USMLE Step 1).

METHOD: Subjects consisted of the 1996-1998 matriculants of two publicly funded medical schools, one from the Southeast region of the United States and the other from the Midwest. There were 16,954 applicants and 933 matriculants in the combined data set. Independent variables were matriculants' undergraduate science grade-point averages (sciGPAs), and three MCAT scores (Physical Sciences, Biological Sciences, and Verbal Reasoning). The investigational variables were the average MCAT scores attained by all students from a particular undergraduate institution that sat for the exam between April 1996 and August 1999. Demographic variables that included medical school, year of medical school matriculation, gender, and minority status were employed as control variables. The dependent variable was the matriculants' scores from their first sittings for the USMLE Step 1. Multiple regression, multicollinearity, and cross-validation procedures were employed. Correlations with performance on the USMLE Step 1 were adjusted for restriction of range.

RESULTS: Bivariate analyses demonstrated moderate correlations between sciGPA and the individual MCAT scores and between sciGPA and USMLE Step 1 scores. There were substantial correlations between individual MCAT scores and USMLE Step 1 scores. Correlations between individual MCAT scores and the USMLE Step 1 scores were slightly higher than institutional MCAT scores, in part due to adjustment for restriction in range. For the regression model without undergraduate selectivity measures, multicollinearity was observed in MCAT Physical Sciences (MCAT-PS) scores and MCAT Biological Sciences (MCAT-BS) scores. Undergraduate institutional Physical Sciences and undergraduate Biological Sciences also demonstrated multicollinearity in addition to URM status, MCAT-PS scores, and MCAT-BS scores in the model with the selectivity measures. The base multiple regression model containing gender, URM status, and SciGPA accounted for 13.9% of the variation in USMLE Step 1. When applicant MCAT scores were added to the model, the model explained 29.1% of the variation in USMLE Step 1 scores. Finally, when institutional MCAT scores were added to the predictive model, .94% additional percentage of variation in USMLE Step 1 scores was explained.

CONCLUSION: Consistent with findings from previous studies, this study demonstrated that undergraduate science GPAs and MCAT scores are strong predictors of standardized test performances during medical school. In contrast to prior studies, this study employed institutional MCAT averages and demonstrated that their inclusion in regression models, as a measure of selectivity, can produce a small improvement when used in a theoretical model in the prediction of a medical student's performance. Regardless of how the average institutional MCAT scores are interpreted as a measure of selectivity, a measure of academic rigor, or a measure of educational climate, this study shows it to be a useful addition to the traditional prediction model used for admission.



This was published in 2002 using data from 1996-1998 matriculants and was not included in the 2007 meta study. Again, I doubt that medical school student bodies have changed much since then and even if they have, this study indicates that a reasonable MCAT/Step1 correlation exists independent of demographics.

I'm basing my argument on evidence that has been published, which, like any study in science or medicine, isn't perfect. You can make conjecture about any study you want. While it's possible that there hasn't been a more recent study because the AAMC is incapable of critiquing its own exam, I think it is more likely that the changes in both MCAT and Step 1 in the past few years have prevented more recent studies from being accomplished. I also suspect that the changes in the MCAT would likely make it a better, not a worse predictor of Step 1 because the AAMC does have a vested interest in selecting students who will perform well in their medical schools. But that is all just conjecture. The only evidence anyone can reasonably go on is what has been published and the evidence consistently shows a moderate correlation between MCAT/Step 1.
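For readers unfamiliar with the incremental R-squared figures in the abstract above (13.9% for the base model, 29.1% once MCAT scores are added), here is a minimal illustrative sketch of that kind of nested-regression comparison. The data are simulated stand-ins with made-up coefficients, not the Basco study's data or model.

```python
# Illustrative sketch of a nested-regression (incremental R-squared) comparison,
# using simulated data -- NOT the Basco et al. data or their actual model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 933                                   # matriculant count mentioned in the abstract
sci_gpa = rng.normal(3.6, 0.3, n)         # simulated undergrad science GPA
mcat_bs = rng.normal(10.0, 2.0, n)        # simulated MCAT Biological Sciences score
# Made-up "true" relationship, just so the example has some signal to find.
step1 = 120 + 10 * sci_gpa + 6 * mcat_bs + rng.normal(0, 18, n)

base = sm.OLS(step1, sm.add_constant(sci_gpa)).fit()
full = sm.OLS(step1, sm.add_constant(np.column_stack([sci_gpa, mcat_bs]))).fit()

print(f"R^2, base model (GPA only): {base.rsquared:.3f}")
print(f"R^2, GPA + MCAT:            {full.rsquared:.3f}")
print(f"incremental R^2 from MCAT:  {full.rsquared - base.rsquared:.3f}")
```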
 
We've been over this before:

This was published in 2002 using data from 1996-1998 matriculants and was not included in the 2007 meta study.

Based on this statement you should know there are problems with the study. The fact that the 2005 study you referenced used earlier data, and that this 2002 study wasn't included in the 2007 meta study, should be troubling. This study was simply dismissed because it only involved TWO STATE SCHOOLS, which isn't an adequate sample size and for sure can't be extrapolated to the population at large. And FWIW, "1996-1998" is still over a decade ago (and folks who matriculated then took the MCAT a year or two prior to that), so it doesn't refute my suggestion that the tests, student body compositions, and admissions stats have changed pretty significantly since then (making the results of this study of questionable applicability to today's test takers); they have.

The USMLE back then didn't use many multiple step questions -- you could answer much more of the test purely on buzz words back then. Find yourself a board review question book from the late 90s and take a look if you don't believe me. The passing score has changed at least once in that interval. The USMLE now has images and even sound effect questions on the test. It is now computerized, rather than on paper. The MCAT also has gone through a similar number of content changes. And it too is now taken on computer, and offered much more frequently. The student bodies have dramatically changed in that there are many more women, nontrads and nonscience majors in med school today than a decade ago. A lot of med school classes are now as much as 60% women -- this was not the case back then. The average class age has increased, largely pulled up by older nontrads. The average class size has increased. And there are a number of new med schools that didn't even exist at the time of this study. So yeah, I see a problem with extrapolating a relatively small study done 10+ years ago to a very different pair of tests and a very different and larger grouping of med students. Med school is supposed to teach you to read published studies critically, not just buy into them because they are on PubMed or referenced on a self serving AAMC website. That's why there are so many journal clubs and PBL-type classes which force you to parse through questionable studies and rip them apart. A study like the one you referenced, that only looks at two med schools should turn on a flashing red warning light in your head. It apparently did for the guys who did the meta-analysis and chose not to use it two years later.

You aren't going to convince me, so we may as well end this here. We've both seen all these studies, they are old, and the AAMC doesn't appear poised to sponsor a new one, just meta-analyze the old ones. Thus, IMHO, these are non-representative studies done long ago, and shouldn't get anybody too excited.
 
Based on this statement you should know there are problems with the study. The fact that the 2005 study you referenced used earlier data, and that this 2002 study wasn't included in the 2007 meta study, should be troubling. This study was simply dismissed because it only involved TWO STATE SCHOOLS, which isn't an adequate sample size and for sure can't be extrapolated to the population at large. And FWIW, "1996-1998" is still over a decade ago (and folks who matriculated then took the MCAT a year or two prior to that), so it doesn't refute my suggestion that the tests, student body compositions, and admissions stats have changed pretty significantly since then (making the results of this study of questionable applicability to today's test takers); they have.

The USMLE back then didn't use many multiple step questions -- you could answer much more of the test purely on buzz words back then. Find yourself a board review question book from the late 90s and take a look if you don't believe me. The passing score has changed at least once in that interval. The USMLE now has images and even sound effect questions on the test. It is now computerized, rather than on paper. The MCAT also has gone through a similar number of content changes. And it too is now taken on computer, and offered much more frequently. The student bodies have dramatically changed in that there are many more women, nontrads and nonscience majors in med school today than a decade ago. A lot of med school classes are now as much as 60% women -- this was not the case back then. The average class age has increased, largely pulled up by older nontrads. The average class size has increased. And there are a number of new med schools that didn't even exist at the time of this study. So yeah, I see a problem with extrapolating a relatively small study done 10+ years ago to a very different pair of tests and a very different and larger grouping of med students. Med school is supposed to teach you to read published studies critically, not just buy into them because they are on PubMed or referenced on a self serving AAMC website. That's why there are so many journal clubs and PBL-type classes which force you to parse through questionable studies and rip them apart. A study like the one you referenced, that only looks at two med schools should turn on a flashing red warning light in your head. It apparently did for the guys who did the meta-analysis and chose not to use it two years later.

You aren't going to convince me, so we may as well end this here. We've both seen all these studies, they are old, and the AAMC doesn't appear poised to sponsor a new one, just meta-analyze the old ones. Thus, IMHO, these are non-representative studies done long ago, and shouldn't get anybody too excited.

Somehow the Basco study got published in a peer-reviewed journal and is cited by the AAMC, despite what you think about its sample size (16,954 applicants and 933 matriculants, which I think anyone would agree is large enough to make some inferences). You can say whatever you want about why it was not in the meta study. That's just your opinion. You can make ad hominem attacks about my inability to analyze papers, you can raise questions about all the evidence that is out there, you can rant in HUGE LETTERS about whatever you want--but it is just your opinion.

The simple fact is the study results all agree with one another. It's not a matter of me convincing you, it's a matter of you refusing to acknowledge that there is lots of work out there that supports what I'm saying--and you have produced nothing to back up your opinion. If you cannot even acknowledge that several published, peer-reviewed studies show a correlation between MCAT/Step 1, then so be it. Maybe you also think HIV is a myth, that global warming isn't occurring, and that NASA faked the Apollo moon landings. And why stop there? Maybe EverYTHinG we know is wrong:

http://www.youtube.com/watch?v=MEsYdiA7OL0
 
wow, writing is the most important score on the mcat to predict success as a doctor? haha, what a joker
 
I was recently speaking to an admissions officer at a med school in the northeast...he stated that the results of a study show writing scores having a stronger correlation with Step 1 scores - that's why I said "there is word going around...." He showed me some random facts and figures from what seemed like a study that clearly has yet to go to print and may never make it there. Either that or he was totally lying to my face...who knows. The guy showed me his PhD thesis, and the title was something like "gaining admission to med school," so he seems to have some type of unhealthy obsession with med school admissions. I'm not sure if that supports his credibility or detracts from it, but that's for you to decide. It's crazy that merely saying "there is word going around that the writing score is predictive" causes people to flare up. I made my original statement in a casual way because it was just that - a casual statement. So chill out.

All of those studies discussed in the thread above may prove to be inconclusive, but after reading this thread, it seems fairly reasonable to conclude that med students are in fact among the most annoying people around. Exhibit A = that video of Friends (PS - Friends is the worst sitcom to date...p<0.05). The fact that you actually took the time to find that on YouTube is quite possibly the most ridiculous thing I have ever encountered (and please don't respond with, "I had it bookmarked!" because that is much worse).

Doing well on some stupid standardized test may show that you can do well on step 1, but it sure as hell doesn't help decent people get into medical school. Some med students/doctors are some of the most egocentric, narcissistic and arrogant people I have ever met. I know a med student that tried to get out of a speeding ticket by telling the cop he was a med student. Give me a break.

- Mr. 2500 Posts
 
I know a med student that tried to get out of a speeding ticket by telling the cop he was a med student. Give me a break.

- Mr. 2500 Posts

ummm... and what's wrong with that? some cops do give you a break, and if you don't wanna take advantage of that then that's your own problem.
 
This thread doesn't take into consideration the number of hours spent studying in med school vs. undergrad. If I had studied for the MCAT like I studied for boards, I would've blown the test away (except for verbal).
 
Totally subjective response to the OP's original question (which has yet to be answered and has been smothered by piles of useless banter):

As previously mentioned, the average person gaining entrance to med school has ~ 30, and the average person taking the USMLE gets ~ 217. These two numbers just don't correlate in my head, however, as being equivalent relative performances.

Part of the problem with your question is that people are only going to be able to tell you what a particular USMLE score "feels like" to them. The following is my relative scale for how things "feel" to me. This is incredibly subjective and not intended to cause a poop storm.

Below avg. = 26 and below
220 - 229 = 27 - 29
230 - 239 = 30 - 33
240 - 249 = 34 - 36
250 - 259 = 37 - 39
260+ = 40+

To me, I felt like getting a 230 on Step 1 was like getting a 30 on the MCAT, mainly because the averages for many of the more competitive specialties are in this ballpark (and since I'm aiming for more competitive specialties, it forms a nice mental correlation). The 230-239 range is really, hence, the center of my subjective scale; 230 is also the all-important "magic number" that floated around in my head while studying for the exam.

In the end, I got a 250+ which also correlated well with my MCAT score on this scale, and I felt a similar level of satisfaction after both when comparing my performance to the performance of my peers.

Sorry for not giving a more scientific response, but I think you got your fill of well-researched BS with the other crazy posts in this thread.
 
I know a med student that tried to get out of a speeding ticket by telling the cop he was a med student. Give me a break.

- Mr. 2500 Posts
See, that is his problem: you have to tell them you are a doctor, or just tell them you are on your way to the hospital. I got out of a speeding ticket for having scrubs and a white coat. One of the perks of the job...you know, the job where you work your ass off and help people while paying $20-40k/yr.
 
I was going to post a list of scores just using the avg and SD for each of these tests, but when I started to write them out the numbers came out all bonkers.

For example, using numbers stolen from Wikipedia (MCAT avg = 30, SD = 2.2; USMLE avg = 218, SD = 23) gives things like

+4 SD = MCAT 38.8 = Step 1 310
and
-2 SD = MCAT 25.6 = Step 1 172

Which seems really wacky. Maybe the SD for Step 1 is something smaller, like 12 or 15? Or maybe the scores compress at the edges?

Not sure if someone else has pointed this out yet, but I am pretty sure the SD = 2.2 for the MCAT is for each section. The SD for the overall score is 6-7. That would explain the wackiness in those numbers.
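Re-running the earlier z-score mapping with a composite SD in that range (6.5 is used below purely as an assumption, not an official figure) does give much less wacky numbers:

```python
# Same z-score mapping as the earlier sketch, but with an assumed composite
# MCAT SD of ~6.5 instead of the per-section 2.2.
MCAT_MEAN, MCAT_SD = 30, 6.5
STEP1_MEAN, STEP1_SD = 218, 23

def mcat_to_step1(mcat):
    return STEP1_MEAN + (mcat - MCAT_MEAN) / MCAT_SD * STEP1_SD

for mcat in (24, 30, 36, 40):
    print(f"MCAT {mcat} -> Step 1 ~{mcat_to_step1(mcat):.0f}")
# MCAT 24 -> ~197, 30 -> 218, 36 -> ~239, 40 -> ~253, which lines up much
# better with the subjective scale posted above.
```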
 
Sorry for not giving a more scientific response, but I think you got your fill of well-researched BS with the other crazy posts in this thread.

I think this response answered the OP's question perfectly -- thanks for the helpful perspective!
 
I have served on an admissions committee and the score they always focus on is verbal, not writing (or bio/physics). Whenever an mcat score is reported, they always say, 35-12-verbal. The writing score is almost entirely ignored, I believe because of the variance with the graders, etc.

I highly doubt writing scores have any correlation to step 1 (regurgitation), clinical or social abilities. They're probably about as accurate as behavioral science questions are in assessing compassion.
I don't really agree since the writing scores don't seem to correlate at all with the other sections, and most of the other sections require analytical skills as well.

But then again I scored in like the 10th percentile for writing, and in the 99th for Verbal so I'm highly, highly, highly biased.

FWIW I believe that I have social abilities slightly higher than the 10th percentile my writing score would indicate, and based on the limited clinical stuff I've done so far (and in work before school) I'd also say I'm a bit over the 10th percentile :laugh:

Funny story...I had been joking with my friend that he might end up getting an even lower writing score than me (he applied a year later). I didn't think he'd actually even come close, since I had an L, but he somehow managed to get an M. And yes, he got into medical school.

I think a whole lot of fairly competent people have terrible writing scores. Although I think a large part of my writing score had to do with the fact that I had slept 1 hour the night before, and the writing section was in the afternoon (I took the old 8 hour) by which time my brain had ceased to function.

BTW, even if writing does correlate with Step 1, so do the other sections of the MCAT, and even your SAT scores are likely to correlate. Quite frankly, I think everything kind of correlates but I wouldn't get too stuck up on how you did on any particular section.

Finally, even if this paper does come out, it's still kind of up in the air whether it really correlates since there's previous research that suggests that the writing sample doesn't correlate at all: http://www.springerlink.com/content/x70856118311332q/ I can only assume that it's this research that has resulted in me managing to get into medical school even with a writing sample score of L, so I am of course heavily biased towards this research =)
 
Totally subjective response to the OP's original question (which has yet to be answered and has been smothered by piles of useless banter):

As previously mentioned, the average person gaining entrance to med school has ~ 30, and the average person taking the USMLE gets ~ 217. These two numbers just don't correlate in my head, however, as being equivalent relative performances.

Part of the problem with your question is that people are only going to be able to tell you what a particular USMLE score "feels like" to them. The following is my relative scale for how things "feel" to me. This is incredibly subjective and not intended to cause a poop storm.

Below avg. = 26 and below
220 - 229 = 27 - 29
230 - 239 = 30 - 33
240 - 249 = 34 - 36
250 - 259 = 37 - 39
260+ = 40+

To me, I felt like getting a 230 on Step 1 was like getting a 30 on the MCAT, mainly because the averages for many of the more competitive specialties are in this ballpark (and since I'm aiming for more competitive specialties, it forms a nice mental correlation). The 230-239 range is really, hence, the center of my subjective scale; 230 is also the all-important "magic number" that floated around in my head while studying for the exam.

In the end, I got a 250+ which also correlated well with my MCAT score on this scale, and I felt a similar level of satisfaction after both when comparing my performance to the performance of my peers.

Sorry for not giving a more scientific response, but I think you got your fill of well-researched BS with the other crazy posts in this thread.

Hey, thanks for this. Whether your view is scientific or not, it does give me an idea of what people who've taken both exams think. I'll continue to ask around a bit here and there and come to my own judgement on it. Either way, I'll just have to study hard for it. I was just curious about this issue.
Thanks again. Your opinion is appreciated.
 