AAMC I don't understand you.....

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

listener23
If the MCAT is a CBT, why the hell does it take a month to grade???? You would think the results could be generated in seconds before you leave the testing center....
 
If the MCAT is a CBT, why the hell does it take a month to grade???? You would think the results could be generated in seconds before you leave the testing center....
Wow, I...I agree with you.

Given that there is no writing section, the test is fully multiple choice, and the curve is pre-generated from test questions, there is no reason why they could not generate at least a 'preliminary' score right off the bat. Fine, fine, wait a month to make it official, just in case there are test-center SNAFUs or something, but seriously? It's simpler than a scantron.
 
I've heard they adjust the curve, throw out certain questions, and do some other shady ****. @$$holes
 
Why does the small, kind hearted individual get picked on by the 300 lb linebacker bully? Because he (AAMC) can and there's not a damn thing they (us) can do about it.

Real reason: Probably money from people signing up for another test and legal liability reasons.
 
I have 2 theories.

Theory 1 - They take a long time to prevent people from chronic retakes. This prevents the testing seats from filling up too quickly, which allows new testers an opportunity to take the test....

Theory 2 - they take a long time so med schools won't be swamped with applications...
 
Has anybody else noticed that when you take an AAMC practice exam you get your score in seconds....
 
I've heard they adjust the curve, throw out certain questions, and do some other shady ****. @$$holes
That seems counter to everything I've heard, but then, it is all *TOP SECRET* so what the hell do any of us know?
It's a sham is what it is.
 
Welcome to pre-med frustrations. I have no idea why it takes so long; I'm convinced they do it purposely to delay applications in order to increase competition amongst us. When I took the PCAT I got a preliminary score report as I walked out of the testing center.
 
So you have time to think you did poorly and reschedule another exam ($$$)
 
Why can't some other company compete and make a better MCAT where you get results immediately? Kinda like what happened when the ACT started to become more popular than the SAT.
 
Why can't some other company compete and make a better MCAT where you get results immediately? Kinda like what happened when the ACT started to become more popular than the SAT.

Doesn't that kind of defeat the purpose of a standardized exam?
 
I don't think so. Colleges accept both SATs and ACTs. Doesn't seem to be an issue for them.


But the SATs and ACTs are a joke... and the MCAT is administered by the actual med school association, not some company. So they can easily squash any attempt at competition.
 
It used to be because of the essay. I didn't expect them to completely do away with the waiting time, but I thought the 2015 MCAT might feature instant scoring. Not really sure why they don't, as other graduate-level tests do.

Also, the MCAT isn't curved.. it's scaled. Everyone on a test day can get a 45 or a 3; they don't adjust the curve at all. I highly doubt they throw out any questions either.. remember, they use previous testers to establish the validity of a question.

They don't let you sign up for another MCAT until a few days after your test to prevent people from taking up too many spots. Ultimately, this would leave some test takers without a seat and would cost them money as some test takers cancel and their seats potentially go unfilled. While doing away with the waiting time altogether would likely see a substantial increase in retakes, I'm not sure why they don't look into it. I figure they judged from their statistics that they wouldn't be able to support the increased number of test takers with their current infrastructure.

There won't be any test that competes with the MCAT.. this is a test produced by the AAMC (who makes money off of it), comprised of all the U.S. medical schools. Schools simply won't accept a competitor's test. Moot point.
 
If the MCAT is a CBT, why the hell does it take a month to grade???? You would think the results could be generated in seconds before you leave the testing center....

Because they love stringing us out. This process is all about waiting and being strung out. When you apply to medical schools on the first day you're still stuck waiting 20+ days, and with the MCAT the AAMC is trying to play off your fears to get you to register again so they get more $$$$$$$$$$$$$ for their "non-profit".
 
None of the reasons make complete sense. The DAT used a scaled scoring system but unofficial scores are generated immediately and given to you at the testing center. They combat the "chronic retakes" worry by requiring a 90-day wait between retakes (you also have to have a special waiver to take the test more than 3 times). The GRE has essays, but they also give you unofficial scores for all the other sections immediately.

Personally, I think it's all a ploy to get us used to the insane amount of waiting involved in the entire application process.
 
If the MCAT is a CBT, why the hell does it take a month to grade???? You would think the results could be generated in seconds before you leave the testing center....
I was wondering the exact same thing! My friends who took the GRE knew their results right away.
 
It has to do with the way the curve is generated. Also, yes, they do throw out some questions based on the curve and because some are experimental. You Gen-Y-ers and your need for instant gratification 🙄...
 
It has to do with the way the curve is generated. Also, yes, they do throw out some questions based on the curve and because some are experimental. You Gen-Y-ers and your need for instant gratification 🙄...
The scale is established based on previous tester performance on the scored questions, and experimental items are obviously known beforehand and will not be scored. It has nothing to do with "the way the curve is generated" and there is no reason for them to throw out questions based on any "curve" because there isn't one. Jepstein's post is correct, the scale (raw to scaled scores) is set beforehand based on the item composition of the test.
 
I expect that they look at the scores and distributions on the whole test, the different sections, and individual questions. If everyone gets a question wrong, they might take a look at it and throw it out, or weight it differently, or something. I would expect and hope that they do a full review of the test given the scores and percentages. I was involved in some standardized test grading very briefly back in the day and there are a lot of little details and nudges put into the final score calculations.
 
I expect that they look at the scores and distributions on the whole test, the different sections, and individual questions. If everyone gets a question wrong, they might take a look at it and throw it out, or weight it differently, or something. I would expect and hope that they do a full review of the test given the scores and percentages. I was involved in some standardized test grading very briefly back in the day and there are a lot of little details and nudges put into the final score calculations.

This is what I was referring to, though I realize the way it was stated in my previous post was incorrect. When I first started preparing to study for the MCAT (2009-2010ish) I had some info from the AAMC that corroborated this. I've since gotten rid of most of my MCAT stuff, but I'll take a look and if I still have it I'll post it. From the AAMC's "Understanding Your Score" page:
"Why are raw scores converted to scaled scores?
The conversion of raw scores to scaled scores compensates for small variations in difficulty between sets of questions. The exact conversion of raw to scaled scores is not constant because different sets of questions are used on different exams." This suggests that there is some post-test statistical analysis before scores are known.
 
Makes me upset, since for the DAT you instantly get your scores, so if you want to retake you can start immediately. For us there's an anxiety period of 30-35 days, so if we have to retake, that period can really kill us.
 
It used to be because of the essay. I didn't expect them to completely do away with the waiting time, but I thought the 2015 MCAT may feature instant scoring. Not really sure why they don't as other graduate level tests do.

Previous threads have actually discussed this in detail and when some posters called AAMC, AAMC actually told them that it wasn't because of the essay (if I remember correctly).

On another note, isn't Step 1 all multiple choice too? Yet med students don't get their scores until a month after taking the exam. And you can't retake that exam unless you fail it...
 
This is what I was referring to, though I realize the way it was stated in my previous post was incorrect. When I first started preparing to study for the MCAT (2009-2010ish) I had some info from the AAMC that corroborated this. I've since gotten rid of most of my MCAT stuff, but I'll take a look and if I still have it I'll post it. From the AAMC's "Understanding Your Score" page:
"Why are raw scores converted to scaled scores?
The conversion of raw scores to scaled scores compensates for small variations in difficulty between sets of questions. The exact conversion of raw to scaled scores is not constant because different sets of questions are used on different exams." This suggests that there is some post-test statistical analysis before scores are known.

No, that means based on the set of questions, a scale is generated.

It's very simple.

1) AAMC* designs tons of MCAT questions of varying difficulties
2) AAMC puts these in MCATs as experimental (unscored) questions to get data on how current test takers perform on these questions
3) Based on that data, AAMC designs an MCAT that has roughly the same difficulty as all the other MCATs ever offered
4) Because it's troublesome to get the exact same difficulty level, the AAMC can also slightly adjust the conversions to do the job for them.

The MCAT is not curved. The MCAT does not throw out any questions if everyone got them wrong. All of the questions you get were already taken by hundreds of test takers.. the ones that are 'unfair' have already been weeded out.

Seriously, look up the difference between 'scaled' and 'curved'. Two very different things.

From that same page,

Is the exam graded on a curve?
Examinees often ask if earning a high score or higher percentile is easier or harder at different times of the testing year. They ask whether they have a better chance of earning a higher score in April or in August, for example. The question is based on an assumption that the exam is scored on a curve, and that a final score is dependent on how an individual performed in comparison to other examinees from the same test day or same time of year.

While there may be small differences in the MCAT exam you took compared to another examinee, the scoring process accounts for these differences so that an 8 earned on physical sciences on one exam means the same thing as an 8 earned on any other exam. The percentile provided on your score report simply indicates what percentage of examinees from the previous testing year scored the same as you did on the MCAT exam.

How you score on the MCAT exam, therefore, is not reflective of the particular exam you took—including the time of day, the test date, or the time of year—since any difference in difficulty level is accounted for when calculating your scale scores (see above for information about scaling).

*and by AAMC, I mean the testing company they hire to produce questions
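To make the scale-versus-curve distinction above concrete, here's a toy sketch. All the conversion numbers and score lists are invented for illustration; the AAMC's real conversion tables are not public:

```python
# Toy illustration of scaled vs. curved scoring (all numbers invented).

def scaled_score(raw, conversion):
    """Scaled: the raw-to-scaled table is fixed BEFORE the test, based on
    the known difficulty of the chosen questions. How other examinees
    perform on test day changes nothing."""
    for (lo, hi), scaled in conversion.items():
        if lo <= raw <= hi:
            return scaled
    raise ValueError("raw score out of range")

def curved_score(raw, all_raw_scores):
    """Curved: the score depends on how THIS sitting performed,
    e.g. your percentile rank among the people tested with you."""
    below = sum(1 for r in all_raw_scores if r < raw)
    return below / len(all_raw_scores)  # fraction scoring below you

# A pre-set conversion table for a hypothetical 30-question section:
conversion = {(0, 18): 7, (19, 22): 8, (23, 25): 9, (26, 28): 10, (29, 30): 11}

# The same raw score against two very different sittings:
easy_sitting = [25, 26, 27, 28, 29]   # everyone did well
hard_sitting = [15, 17, 19, 21, 27]   # everyone struggled

raw = 24
print(scaled_score(raw, conversion))    # 9, either way
print(curved_score(raw, easy_sitting))  # 0.0 -- bottom of this group
print(curved_score(raw, hard_sitting))  # 0.8 -- top of this group
```

The point of the sketch: under scaling, a raw 24 maps to the same result no matter who sat next to you; under a curve, the identical performance yields wildly different results depending on the sitting.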
 
No, that means based on the set of questions, a scale is generated.

It's very simple.

1) AAMC* designs tons of MCAT questions of varying difficulties
2) AAMC puts these in MCATs as experimental (unscored) questions to get data on how current test takers perform on these questions
3) Based on that data, AAMC designs an MCAT that has roughly the same difficulty as all the other MCATs ever offered
4) Because it's troublesome to get the exact same difficulty level, the AAMC can also slightly adjust the conversions to do the job for them.

The MCAT is not curved. The MCAT does not throw out any questions if everyone got them wrong. All of the questions you get were already taken by hundreds of test takers.. the ones that are 'unfair' have already been weeded out.

Seriously, look up the difference between 'scaled' and 'curved'. Two very different things.

*and by AAMC, I mean the testing company they hire to produce questions

I understand the difference between a scaled test and curved test, and I've already acknowledged that I misspoke about this in a previous post. Regarding your fourth point, for these adjustments to be as accurate and valid as possible, it would make the most sense to do them after a post-test analysis. Doing so pre-test would require assumptions and leave room for error.
 
I understand the difference between a scaled test and curved test, and I've already acknowledged that I misspoke about this in a previous post. Regarding your fourth point, for these adjustments to be as accurate and valid as possible, it would make the most sense to do them after a post-test analysis. Doing so pre-test would require assumptions and leave room for error.
I don't understand why post-test would be more accurate/valid. If anything, it is far MORE valid to base them on a large base of students taking the MCAT over multiple different sittings, rather than the group that happens to go in August. Hey, maybe the students taking it in August are more prepared because they spent all summer studying. If you were to give more credence to a post-test analysis for that group, you would essentially take your scaling and turn it INTO a curve, which would make the grades a lot less consistent between sittings and a lot more "I am being compared to all of the students who studied over the summer if I take in August, I should take it in March so I'm competing against all of the last-minute peeps who can't afford to delay any longer even if they're not prepped".

You may understand the difference between a scaled grade and a curve, but you seem determined to turn the scaling system INTO a curve.
 
I don't understand why post-test would be more accurate/valid. If anything, it is far MORE valid to base them on a large base of students taking the MCAT over multiple different sittings, rather than the group that happens to go in August. Hey, maybe the students taking it in August are more prepared because they spent all summer studying. If you were to give more credence to a post-test analysis for that group, you would essentially take your scaling and turn it INTO a curve, which would make the grades a lot less consistent between sittings and a lot more "I am being compared to all of the students who studied over the summer if I take in August, I should take it in March so I'm competing against all of the last-minute peeps who can't afford to delay any longer even if they're not prepped".

You may understand the difference between a scaled grade and a curve, but you seem determined to turn the scaling system INTO a curve.

Because of the context in which the questions are presented, which is a significant factor in developing any type of metric. It's the first time all of those questions will be presented and evaluated together in that context. One possible effect of this, as previously mentioned by jepstein, is inadvertently creating a more difficult and taxing exam. It may seem arbitrary but it can have a significant effect on a metric's validity. Thus, pre-test they may have a good idea of where adjustments should be in conversion factors, but the only way to confirm this is with post-test analyses.
 
Because of the context in which the questions are presented, which is a significant factor in developing any type of metric. It's the first time all of those questions will be presented and evaluated together in that context. One possible effect of this, as previously mentioned by jepstein, is inadvertently creating a more difficult and taxing exam. It may seem arbitrary but it can have a significant effect on a metric's validity. Thus, pre-test they may have a good idea of where adjustments should be in conversion factors, but the only way to confirm this is with post-test analyses.
2+2 = 4 whether you did a multiplication problem before it or a division. Each passage is independent; my performance on one isn't going to be affected by the other passages in an exam.

Even if that WERE a factor, given that each question has been field-tested in a variety of contexts, though, it seems as if the scale would essentially be set with each question's average difficulty/scaling. That seems like a good thing to me.
 
Just some theories:
I have not seen any indications that the correct answers are even available in the test center. It is quite possible that AAMC distributes only the questions, collects the answers and grades them later. Considering that leaving the test computers connected to the internet is not the wisest move and that the CBT has been developed some time ago, I would not be surprised if the data from the test centers is sent to AAMC on some sort of physical carrier.

On dropping questions:
Supposedly, one of the questions on my test date had no correct answer. Someone contested it with AAMC and the answer was that the question would be dropped and would not be part of the score. So they probably leave at least some time to avoid screw-ups like this, which would require regrading a lot of tests.
 
I'm sure it's not totally unheard of for a question to be thrown out or re-graded for difficulty after the fact because of something or other, even if just a typo. I'm sure it's also not totally unheard of for a test center irregularity to be discovered, or some error to have been made.

Imagine if an immediate score was given, and it turned out a passage had a typo that resulted in many of the answers being wrong.. but everyone had their scores when it was discovered. Or if there was a typo in the answer key that resulted in a bunch of people who chose the wrong answer but got it marked correct. What do you do? Show them "Your score: 34" at the end of the test then send them a letter 3 weeks later saying "just kidding, you actually got a 33"?
 
Just some theories:
I have not seen any indications that the correct answers are even available in the test center. It is quite possible that AAMC distributes only the questions, collects the answers and grades them later. Considering that leaving the test computers connected to the internet is not the wisest move and that the CBT has been developed some time ago, I would not be surprised if the data from the test centers is sent to AAMC on some sort of physical carrier.

Another fine point.
 
Has anybody else notice when you take a AAMC practice exam you get your score in seconds....

Ever notice how the grading rubrics for different practice tests are sometimes different? (E.g., 22-23 correct answers = 10 on one test while it may be 23-24 = 10 on another.)

AAMC waits to grade so that they can evaluate and compare how EVERYONE did on their test before modifying the score. They score by percentile, not by simply how many questions you got correct. In order to do this, they need everyone to finish taking the test, else it's just an educated guess.

At least this is what I have always thought. I could be wrong, then again I don't care enough to delve any deeper.
 
I'm sure it's not totally unheard of for a question to be thrown out or re-graded for difficulty after the fact because of something or other, even if just a typo. I'm sure it's also not totally unheard of for a test center irregularity to be discovered, or some error to have been made.

Imagine if an immediate score was given, and it turned out a passage had a typo that resulted in many of the answers being wrong.. but everyone had their scores when it was discovered. Or if there was a typo in the answer key that resulted in a bunch of people who chose the wrong answer but got it marked correct. What do you do? Show them "Your score: 34" at the end of the test then send them a letter 3 weeks later saying "just kidding, you actually got a 33"?
Yes. Give them a preliminary, unofficial score right away with full knowledge that it's not real until the score report comes through.
It would at least give people an idea of where they stand. A few questions being tossed or awarded here or there is not going to change your score by more than a point or so. Knowing "I got around a 33" is way more useful and relieving than "I kind of felt OK enough not to void it."

If it's really that much of an issue, give them a range: there is a 95% chance (base that number on the stats of how often these things actually change from prelim to final) that you scored between 32-34.
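The range idea above could be sketched like this. The history data is entirely invented, just to show how an empirical interval would fall out of past preliminary-versus-final score pairs:

```python
# Hypothetical sketch: derive a preliminary score range from historical
# (preliminary, final) score pairs. All data below is invented.

def empirical_interval(prelim, history, coverage=0.95):
    """Given past (prelim, final) pairs, find the smallest symmetric band
    around the preliminary score wide enough that roughly `coverage`
    of historical final scores would have landed inside it."""
    shifts = sorted(abs(final - pre) for pre, final in history)
    # Index of the shift magnitude covering the desired fraction of cases
    k = min(len(shifts) - 1, int(coverage * len(shifts)))
    band = shifts[k]
    return prelim - band, prelim + band

# Invented history: final scores rarely move more than a point
history = [(30, 30), (33, 33), (28, 29), (35, 35), (31, 30),
           (27, 27), (34, 34), (29, 29), (32, 33), (36, 36)]

lo, hi = empirical_interval(33, history)
print(f"~95% chance your final score lands in {lo}-{hi}")  # 32-34
```

With this toy history, a preliminary 33 comes back as the range 32-34, matching the kind of report described above. A real version would need the AAMC's actual prelim-to-final change statistics.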
 
Ever notice how the grading rubric for different practice tests are sometimes different? (E.g. 22-23 correct answers = 10 on one test while it may be 23-24 = 10 on another).

AAMC waits to grade so that they can evaluate and compare how EVERYONE did on their test before modifying the score. They score by percentile, not by simply how many questions you got correct. In order to do this, they need everyone to finish taking the test, else it's just an educated guess.

At least this is what I have always thought. I could be wrong, then again I don't care enough to delve any deeper.
No, that's a curve. See the discussion above; AAMC scales the scores, they don't curve. It's slightly different.
 
I'm sure it's not totally unheard of for a question to be thrown out or re-graded for difficulty after the fact because of something or other, even if just a typo. I'm sure it's also not totally unheard of for a test center irregularity to be discovered, or some error to have been made.

Imagine if an immediate score was given, and it turned out a passage had a typo that resulted in many of the answers being wrong.. but everyone had their scores when it was discovered. Or if there was a typo in the answer key that resulted in a bunch of people who chose the wrong answer but got it marked correct. What do you do? Show them "Your score: 34" at the end of the test then send them a letter 3 weeks later saying "just kidding, you actually got a 33"?

The DAT and PCAT seem to be doing just fine.
 
*retraction
 
I understand the difference between a scaled test and curved test, and I've already acknowledged that I misspoke about this in a previous post. Regarding your fourth point, for these adjustments to be as accurate and valid as possible, it would make the most sense to do them after a post-test analysis. Doing so pre-test would require assumptions and leave room for error.

That's great and all.. but it's not how they do it. This isn't about how you should best do it, it's about how they actually do it. They don't adjust anything post-test.

Ever notice how the grading rubric for different practice tests are sometimes different? (E.g. 22-23 correct answers = 10 on one test while it may be 23-24 = 10 on another).

AAMC waits to grade so that they can evaluate and compare how EVERYONE did on their test before modifying the score. They score by percentile, not by simply how many questions you got correct. In order to do this, they need everyone to finish taking the test, else it's just an educated guess.

At least this is what I have always thought. I could be wrong, then again I don't care enough to delve any deeper.

Absolutely wrong. Scaled, not curved.
 
2+2 = 4 whether you did a multiplication problem before it or a division.

This isn't anywhere close to an even slightly comparable situation.

Each passage is independent; my performance on one isn't going to be affected by the other passages in an exam.

A plethora of research says otherwise. This is something that's been understood for close to a century.


If it's really that much of an issue, give them a range: there is a 95% chance (base that number on the stats of how often these things actually change from prelim to final) that you scored between 32-34.

You can't really make a truly accurate confidence interval without the actual data. I'm not denying that it sucks to have to wait a month for your results, but there is a perfectly valid reason for it. I know it's not the fairest comparison, as the turnaround time is much shorter (though I can think of several exceptions from my own experience), but professors don't give you an idea of how you scored on a test as soon as you hand it in. Aside from grading it, they have to see if there were any issues, and in some cases run some stats on the results before they're final (I've had several profs who did this). Why this test should be held to a different standard to appease some people's neurosis is kind of ridiculous IMHO.
 
Curved, scaled, whatever. Doesn't matter guys. We don't know 100 percent what goes on in the background. Maybe one day they will make scores available immediately like other grad entrance exams
 
That's great and all.. but it's not how they do it.

And you know this how? The AAMC keeps the methods used for scaling raw scores confidential; it's anyone's conjecture. Logically, for accuracy in determining scaled scores and given the delay in reporting scores, post-test analysis makes much more sense.
 
And you know this how? The AAMC keeps the methods used for scaling raw scores confidential; it's anyone's conjecture. Logically, for accuracy in determining scaled scores and given the delay in reporting scores, post-test analysis makes much more sense.

Based on information readily available on AAMC's own website and general knowledge of how the MCAT works? Plus info learned from teaching for the MCAT through a large test prep company?

Also, by the very fact that it's a SCALE, not a curve. You still don't seem to know the difference.

For instance.

Every form of the MCAT exam measures the same basic skills and concepts. However, each form is different in regard to the specific questions it uses. Because each form has the potential to be easier or slightly more difficult than another, raw scores are converted to a scale that takes into consideration the level of difficulty of the test questions on a given form. This conversion minimizes variability in the meaning of test scores across forms

takes into consideration the level of difficulty, not the level of performance.

They don't adjust anything post-test because the goal isn't to generate a certain distribution of scores on each exam. The goal is to have a 30 be equivalent to a 30 on any other MCAT ever offered. Hence, they generate the scale based on ALL test takers, not the specific ones who happened to take a certain test. By adjusting the scale based on the performance of a particular subset of test takers, scores for that test are no longer comparable to ALL test takers. So no, I don't even agree with your point that it's better this way.

The AAMC couldn't care less how people score on an individual test. Everyone could get a 45 or a 3. The point is that those who get 45s or 3s are just like those who got 45s or 3s in the past.
 
You can't really make a truly accurate confidence interval without the actual data. I'm not denying that it sucks to have to wait for a month for your results, but there is a perfectly valid reason for it. I know its not the most fair comparison, as the turn around time is much shorter (though I can think of several exceptions from my own experience), but professors don't give you an idea of how you scored on a test as soon as you hand it in. Aside from grading it they have to see if there were any issues, and in some cases run some stats on the results before they're final (I've had several profs. who did these). Why some other test should be held to a different standard to appease some people's neurosis is kind of ridiculous IMHO.
You can absolutely make a confidence interval based, statistically, on how often scores change between prelim and final. They can GET actual data for that very, very quickly, even if they aren't willing to go into the HUGE pile of data they already have sitting around.
If you combine the range idea, the disclaimer, and the %likelihood of landing in that range, you avoid the concerns of grade changes in postgame analysis.

And, as Jepstein said...that's not how they do it anyway, so it's not even an issue in the first place!
 
Based on information readily available on AAMC's own website and general knowledge of how the MCAT works? Plus info learned from teaching for the MCAT through a large test prep company?

So we have the exact same knowledge base, but I choose to look at the unknown aspect of how the scaled scores are derived from a perspective that would yield greater accuracy.

Also, by the very fact that its a SCALE not a curve. You still don't seem to know the difference.

I unintentionally misused curve once and admitted it in my next post. I haven't misused scale or curve since. So your point is:shrug:...
 
So we have the exact same knowledge base, but I choose to look at the unknown aspect of how the scaled scores are derived from a perspective that would yield greater accuracy.



I unintentionally misused curve once and admitted it in my next post. I haven't misused scale or curve since. So your point is:shrug:...

scales = determined pre-test
curves = determined post-test

the very thing you are saying the AAMC does would be defined as a curve, yet you keep calling it a scale. Adjusting a scale based on the performance of test takers is literally implementing a curve.

and no, our knowledge bases aren't the same considering I know for a fact that they don't adjust the scale after a test and you seem to not have learned of that.
 