MD & DO how well do tests predict functional knowledge in medicine?

I don't think you'll get a straight answer on this because of how inconsistently 3rd year grades are determined. At my school, shelf exams count for 100% of the grade and preceptor evals only count towards the dean's letter (unless you do something egregious enough to warrant a failure). At other schools, evals may count for 50% or more of the clinical grade, with shelf exams having minimal impact. So using grades as a predictor is a poor metric, imo.

If you only look at shelf scores you encounter the same problems as using only Step 1. At the same time I don't think attending evals are a very good metric either as different attendings have different standards and consistency becomes an issue.

Personally, I'd like to see better implementation and standardization for Step 2 CS, as well as actual scores, as it's the only consistent metric that measures students' ability to take a history, gather info, and create a treatment plan. No, it might not be relevant to every field, but I think it would probably be a better metric for clinical performance than others.

I think a better method would be to introduce a standardized scoring system for 3rd year to try and objectively measure how well you are picking up true clinical skills with real patients instead of actors.

 
While there are multiple issues with standardized testing, it has its merits. In a general sense, it can give someone at least some objective idea of how well they can accomplish a task relative to others. I would guess that someone who is a good test-taker in general is more likely to be good at other things too, when all other factors are controlled for.
 
I think a better method would be to introduce a standardized scoring system for 3rd year to try and objectively measure how well you are picking up true clinical skills with real patients instead of actors.

Idk what the proper way to implement the evaluation or even what the best metrics would be. I just feel like there should be some other mostly objective way to evaluate candidates other than multiple choice tests which only test a small set of skills/knowledge needed to be a physician.
 
Idk what the proper way to implement the evaluation or even what the best metrics would be. I just feel like there should be some other mostly objective way to evaluate candidates other than multiple choice tests which only test a small set of skills/knowledge needed to be a physician.

I agree. I think a good start would be standardizing how third year is graded and slowly improving the system on a large scale. Everyone just doing their own thing really limits us, imo.
 
Having trained residents for 15 years, I find the best clinical performers tend to also be the highest USMLE and in-training exam scorers. It's a marker of breadth of general knowledge, and it identifies those with the personality traits that drive people to read, study, prepare, and pay attention to details. I've seen a lot of average-to-good residents that test poorly, but I don't know if I've ever seen a great one that didn't score (at least) top quartile.
 
Having trained residents for 15 years, I find the best clinical performers tend to also be the highest USMLE and in-training exam scorers. It's a marker of breadth of general knowledge, and it identifies those with the personality traits that drive people to read, study, prepare, and pay attention to details. I've seen a lot of average-to-good residents that test poorly, but I don't know if I've ever seen a great one that didn't score (at least) top quartile.
Some people gain more interest in medicine once step 2 material and clinicals start. I suspect those "good clinician bad test takers" would (and should) perform better on step 2 vs. step 1 if we gave it more importance.
 
The USMLE is a licensing exam and not an aptitude test. It's mostly a knowledge test of guidelines, with critical thinking as a secondary function, not a primary one as on an aptitude test (e.g., MCAT, LSAT, GRE).

Most of the other tests med students have taken before were aptitude tests. Aptitude tests are actually not good markers of conscientiousness, a trait training programs care about. The reason is that immediate hard work can't get you gains on aptitude tests as quickly as on knowledge-based tests, insofar as the skills being tested are considered more innate (the degree to which is controversial) and rely too heavily on long-term skill sets that can't be improved rapidly in the short term, like reading comprehension (non-controversial).

Knowledge-based, licensing-style tests are good measures of conscientiousness, aka the ability to be disciplined and work hard. It makes sense for residencies to love them. But I do think it's rather odd for medical students to claim innate smarts solely off of them after not doing well on previous aptitude-style tests, albeit this is a small, arrogant, and insecure population.
 
The USMLE is a licensing exam and not an aptitude test. It's mostly a knowledge test of guidelines, with critical thinking as a secondary function, not a primary one as on an aptitude test (e.g., MCAT, LSAT, GRE).

Most of the other tests med students have taken before were aptitude tests. Aptitude tests are actually not good markers of conscientiousness, a trait training programs care about. The reason is that immediate hard work can't get you gains on aptitude tests as quickly as on knowledge-based tests, insofar as the skills being tested are considered more innate (the degree to which is controversial) and rely too heavily on long-term skill sets that can't be improved rapidly in the short term, like reading comprehension (non-controversial).

Knowledge-based, licensing-style tests are good measures of conscientiousness, aka the ability to be disciplined and work hard. It makes sense for residencies to love them. But I do think it's rather odd for medical students to claim innate smarts solely off of them after not doing well on previous aptitude-style tests, albeit this is a small, arrogant, and insecure population.
The only thing I'd say was innate was the verbal section of the MCAT. Everything else could be improved to some degree via sheer hard work.
 
The only thing I'd say was innate was the verbal section of the MCAT. Everything else could be improved to some degree via sheer hard work.

MCAT is the least aptitude heavy. LSAT is the most.

Verbal is very long-term-skills based. Most kids I knew that read a lot, including myself, did decently or well. Most that didn't (maybe poor background or schooling, whatever), even those who excelled at verbally skilled activities like debate or Model UN, didn't do as well. But that is just anecdotal. Also, the section was 40 questions when I took it, with something like 8 experimental questions, so the whole score rested on 32 questions. It was kind of funny.

As a tutor, I actually didn't see super-sized gains on the sciences. I saw better gains there than on verbal, but I didn't see many 8s going to 14s. But again, anecdotal.

Also, interestingly, the bio section of the old MCAT, arguably the least g-loaded, correlated better with USMLE performance than the other two sections.
 
Having trained residents for 15 years, I find the best clinical performers tend to also be the highest USMLE and in-training exam scorers. It's a marker of breadth of general knowledge, and it identifies those with the personality traits that drive people to read, study, prepare, and pay attention to details. I've seen a lot of average-to-good residents that test poorly, but I don't know if I've ever seen a great one that didn't score (at least) top quartile.

What is your specialty? Your experience is contrary to that of an attending who has not seen a strong correlation between scores and expertise in radiology. I would have expected a relationship, given that radiology and pathology tend to be very detail-oriented. I have also seen exceptions in my own limited experience.
Radiology step 1 average 2016
 
As with most things, my unwarranted opinion is that test scores predict functional ability in medicine UP TO a certain threshold (minimum acceptable competence), and beyond that threshold the difference between high and low scorers is much slimmer.

For example, if you have no knowledge of numbers, then being a cashier is impossible and everyone can see that you are unfit to work. However, once beyond a rudimentary level of proficiency (2nd grade?) the difference between someone competent at cashiering and someone excellent at cashiering becomes very slim, so they must be differentiated on soft skills like ability to work with others and people skills that bring customers back.

On this note, I would not trust a doctor who got a 185 on Step (not passing, but somehow allowed to practice) no matter how "good they were" at their job. But after a passing Step, I would pretty much accept anyone to care for me or my family if I liked them. So minimum competency is probably pretty well delineated with test scores, but test scores probably do not correlate well with success in the job beyond that minimal threshold.
 
A PD I spoke to says there doesn't seem to really be a difference in people once they get into the 240s/250s range. Basically, above that the returns in competency are extremely diminishing, beyond the extreme super-genius freaks of nature. But again, just anecdote. I wish I had more hard data on this.
 
Has anyone seen students who failed their clinical evaluations, yet got 80+ on the respective shelf exam? Exactly...

On the other hand - plenty of students honor their clinical eval but bomb the shelf.
 
Has anyone seen students who failed their clinical evaluations, yet got 80+ on the respective shelf exam? Exactly...

On the other hand - plenty of students honor their clinical eval but bomb the shelf.

I actually have seen the first, but it was really arrogant students. I have seen strong shelf scorers bomb OSCEs at my school and miss honors.

But yeah, I've seen your latter example much more often.
 
I actually have seen the first, but it was really arrogant students. I have seen strong shelf scorers bomb OSCEs at my school and miss honors.

But yeah, I've seen your latter example much more often.

Did they fail clinically bc of interpersonal conflicts resulting from their arrogance or was it bc they were truly incompetent?

My point is I have yet to see anyone who did well on the shelf (80+) yet failed clinically due to incompetence.
 
Did they fail clinically bc of interpersonal conflicts resulting from their arrogance or was it bc they were truly incompetent?

My point is I have yet to see anyone who did well on the shelf (80+) yet failed clinically due to incompetence.

Lack of IP skills resulted in them not being able to adequately show their competence. So yes, fundamentally lack of IP skills. I agree with your point.
 
I never understood the whole "I'm not a good test-taker" argument. What part aren't you good at, the part where you have to use your knowledge to show what you know?
-Daniel Tosh

Edit: oops didn’t realize this was already addressed. Didn’t read through before I replied.
 
Has anyone seen students who failed their clinical evaluations, yet got 80+ on the respective shelf exam? Exactly...

On the other hand - plenty of students honor their clinical eval but bomb the shelf.

I know numerous people who fall into this category. At my school they didn't "fail" their clinical evals because our grades are 100% shelf based, but I'd hear them complain that their terrible attending/resident evals were bs and used their shelf scores to justify why their attendings were clueless. Then again when you decide to skip portions of your rotations to go study for shelf exams you shouldn't expect a good eval...

I also don't think the latter necessarily means they're poor test takers or have poor Step scores. When your clinical eval is 75% or more of your grade, why would you spend more time studying for your shelf exam when you know your attending eval is what actually matters?
 
This is why our generation is full of participation trophy winners and such. People that score low begin to make excuses to discredit other people's hard work, intellect, and achievements. I mean none of this is mutually exclusive; the people that score high on exams tend to also be high achievers with strong work ethics and legit know what they are doing and talking about. It's not a coincidence.
 
I know numerous people who fall into this category. At my school they didn't "fail" their clinical evals because our grades are 100% shelf based, but I'd hear them complain that their terrible attending/resident evals were bs and used their shelf scores to justify why their attendings were clueless. Then again when you decide to skip portions of your rotations to go study for shelf exams you shouldn't expect a good eval...

I also don't think the latter necessarily means they're poor test takers or have poor Step scores. When your clinical eval is 75% or more of your grade, why would you spend more time studying for your shelf exam when you know your attending eval is what actually matters?

Fair enough. No effort = poor evaluation.

Let's say your clinical eval is weighted equally with your shelf exam (50%/50%) - have you seen anyone HONOR the shelf yet fail the clinical evaluation? Assuming this student tried hard on the wards. I don't think so - and I believe it is because in order to honor the shelf, you must have a high baseline of work ethic, clinical reasoning, motivation, and intelligence that also translates to superior work clinically. Just my 2 cents.
 
Fair enough. No effort = poor evaluation.

Let's say your clinical eval is weighted equally with your shelf exam (50%/50%) - have you seen anyone HONOR the shelf yet fail the clinical evaluation? Assuming this student tried hard on the wards. I don't think so - and I believe it is because in order to honor the shelf, you must have a high baseline of work ethic, clinical reasoning, motivation, and intelligence that also translates to superior work clinically. Just my 2 cents.

Actually I have. A guy I rotated with (psych rotation) busted his butt on the wards but just didn't have the social skills to do well. He got upper 80's on his shelf but his attending comments said he did not meet expectations in any of the aspects of psychiatry. Granted, that's only one person I can confirm, but I do know multiple people who had 240+ Step 1 scores but failed Step 2 CS/Level 2PE even after they prepared for it. I actually helped tutor one of them after they failed and it blew my mind how completely inept at taking a history/interacting with a patient this individual was. I seriously questioned how they even got accepted into med school as I can't imagine their interview skills were any better.

It's those experiences along with the fact that shelf and Step exams fail to test certain skill sets necessary for most physicians to have (interpersonal and social skills, gathering information/taking a history, actually performing a physical exam) that make me feel multiple choice exams alone are an inadequate metric to evaluate students by.
 
Actually I have. A guy I rotated with (psych rotation) busted his butt on the wards but just didn't have the social skills to do well. He got upper 80's on his shelf but his attending comments said he did not meet expectations in any of the aspects of psychiatry. Granted, that's only one person I can confirm, but I do know multiple people who had 240+ Step 1 scores but failed Step 2 CS/Level 2PE even after they prepared for it. I actually helped tutor one of them after they failed and it blew my mind how completely inept at taking a history/interacting with a patient this individual was. I seriously questioned how they even got accepted into med school as I can't imagine their interview skills were any better.

It's those experiences along with the fact that shelf and Step exams fail to test certain skill sets necessary for most physicians to have (interpersonal and social skills, gathering information/taking a history, actually performing a physical exam) that make me feel multiple choice exams alone are an inadequate metric to evaluate students by.

How did he do on other rotations? If he did well on FM/IM/Peds - yet suddenly could not take a psych history - then likely the attending was malignant.

A poor evaluation will always be viewed in light of performance on other rotations. If he did well in all but one, then the student retains the benefit of the doubt. However, if the student had multiple low passes or fails - then that tips the scale.
 
There is a small to moderate, statistically significant correlation between Step 1 and clinical performance in 3rd year assessments.
Predictors of medical school clerkship performance: a multispecialty longitudinal analysis of standardized examination scores and clinical assessments
https://pdfs.semanticscholar.org/45c0/f967b1137e7bd9c6372811eb7529082588be.pdf
https://www.tandfonline.com/doi/pdf/10.3402/meo.v11i.4589


This gets brought up all the time by students who don't do well on Step, but the correlation in the published studies indicates otherwise. Is it possible that poor performers can do well in third year? Sure. Is it possible that socially inept robots will do poorly in third year? Sure. But more likely than not, poor performers on Step will perform poorly compared to their higher-performing peers.

Now, I do not know of any long-term studies that look at outcomes compared to Step scores. It would be interesting, but it is difficult to control for.
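As a rough, purely hypothetical illustration of what a "small to moderate" correlation means in practice (the numbers below are made up for the sketch, not taken from the studies cited above):

```python
# Hypothetical illustration only: what an r of roughly 0.3-0.4 between Step 1
# and clerkship evals looks like in terms of variance explained.
# All numbers are invented, not drawn from the cited papers.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # hypothetical cohort size
step1 = rng.normal(230, 15, n)           # simulated Step 1 scores
noise = rng.normal(0, 1, n)
# Build a clerkship measure that tracks Step 1 only loosely
clerkship = 0.4 * (step1 - 230) / 15 + noise

r = np.corrcoef(step1, clerkship)[0, 1]
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")  # r^2 = share of eval variance tracked by Step 1
```

With those made-up numbers, r comes out around 0.35-0.4, i.e., Step 1 would account for roughly 10-15% of the variance in clerkship evals: a real relationship, but far from deterministic, which is basically what "small to moderate" means here.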
 
I don't think you'll get a straight answer on this because of how inconsistently 3rd year grades are determined. At my school, shelf exams count for 100% of the grade and preceptor evals only count towards the dean's letter (unless you do something egregious enough to warrant a failure). At other schools, evals may count for 50% or more of the clinical grade, with shelf exams having minimal impact. So using grades as a predictor is a poor metric, imo.

If you only look at shelf scores you encounter the same problems as using only Step 1. At the same time I don't think attending evals are a very good metric either as different attendings have different standards and consistency becomes an issue.

Personally, I'd like to see better implementation and standardization for Step 2 CS, as well as actual scores, as it's the only consistent metric that measures students' ability to take a history, gather info, and create a treatment plan. No, it might not be relevant to every field, but I think it would probably be a better metric for clinical performance than others.
How does your school prepare students for CS or PE then?
 
How did he do on other rotations? If he did well on FM/IM/Peds - yet suddenly could not take a psych history - then likely the attending was malignant.

A poor evaluation will always be viewed in light of performance on other rotations. If he did well in all but one, then the student retains the benefit of the doubt. However, if the student had multiple low passes or fails - then that tips the scale.

I'm not sure what his reviews were on other rotations, but I don't think they were glowing just judging by how he said 3rd year was going. I can also say with almost 100% certainty that the issue was with the student and not with the attending being malignant, as I know many students who rotated with that attending and none of them had issues or even remotely negative reviews (attending is actually known as one of the friendlier individuals we rotate with).

To be clear, I'm not trying to suggest that Step 1 isn't a good tool for measuring aptitude or that you can't make certain judgments about candidates from their board scores. However, I don't think it is comprehensive enough to give a complete picture of applicant quality for most fields.

How does your school prepare students for CS or PE then?

During the first 2 years we have standardized patients, just like on the PE. At the end of first year and the end of each semester of second year, we have to pass a certain number of standardized patients, and we get graded on the same criteria as the PE. If you failed, you had to retake it until you passed. I always did exceptionally well on these portions, so I was one of the people my school would ask to work with the people who failed. It actually surprised me how many of the "top students" had to retake those practicals, which has probably influenced my views on Step 1 and class rank as appropriate metrics.

In terms of test time, our students are across the country for clinical rotations. Idk what they do at other sites, but if your core site is in the same city as the school they have certain days you can go practice on standardized patients around test time or you can set appointments with the director of the sim center to come in and use the rooms/review with her to prepare. Other than that we're mostly on our own.
 
I'm not sure what his reviews were on other rotations, but I don't think they were glowing just judging by how he said 3rd year was going. I can also say with almost 100% certainty that the issue was with the student and not with the attending being malignant, as I know many students who rotated with that attending and none of them had issues or even remotely negative reviews (attending is actually known as one of the friendlier individuals we rotate with).

To be clear, I'm not trying to suggest that Step 1 isn't a good tool for measuring aptitude or that you can't make certain judgments about candidates from their board scores. However, I don't think it is comprehensive enough to give a complete picture of applicant quality for most fields.



During the first 2 years we have standardized patients, just like on the PE. At the end of first year and the end of each semester of second year, we have to pass a certain number of standardized patients, and we get graded on the same criteria as the PE. If you failed, you had to retake it until you passed. I always did exceptionally well on these portions, so I was one of the people my school would ask to work with the people who failed. It actually surprised me how many of the "top students" had to retake those practicals, which has probably influenced my views on Step 1 and class rank as appropriate metrics.

In terms of test time, our students are across the country for clinical rotations. Idk what they do at other sites, but if your core site is in the same city as the school they have certain days you can go practice on standardized patients around test time or you can set appointments with the director of the sim center to come in and use the rooms/review with her to prepare. Other than that we're mostly on our own.

Step 1 is ideally a knowledge assessment tool. It mostly tests whether you know the ABCs and basic spelling; clinical medicine is more like the reading and writing, albeit Step 2 still goes a bit too hard on the cookbook-type stuff.
 
Way better than the MCAT, if only because the preparation leading into the exam is pretty standardized. The MCAT is so broad and questions can be approached in so many more ways, whereas the Step exam is much, much more straightforward.
This seems like the opposite of my experience. The MCAT covered a very short, discrete list of topics. You could cover all of them, and then all of the shortcut approaches to each concept coming from any direction, in <3000 flashcards. Everything you needed to know was laid out and explained in very clear review books, and that was it.

Step is...not as organized, not as self-limited, and doesn't have anywhere near the same type of straightforward, organized review books (that actually cover the material). For the MCAT, you could basically just do TBR and learn everything you need to know. It had explanations and tricks. FA is just...a list of the things you need to go study. It's not enough on its own, and even with UFAPS it's a pile of disjointed factoids rather than a cohesive body of knowledge. The test makers even provided an outline for the MCAT with a clear delineation of exactly everything you needed to learn to dominate the exam, whereas with Step we have multiple test-prep publishers putting out a bunch of stuff that they think you should know, but even so, go learn more, because who knows what else the official makers will include.

I stopped studying days before the MCAT because there was literally nothing left to learn. I knew that I knew it all, and there was nothing they could throw at me that I would not only have seen before but approached from that angle before. For Step, even people who rocked the test say there were left-field-type questions.
 
This seems like the opposite of my experience. The MCAT covered a very short, discrete list of topics. You could cover all of them, and then all of the shortcut approaches to each concept coming from any direction, in <3000 flashcards. Everything you needed to know was laid out and explained in very clear review books, and that was it.

Step is...not as organized, not as self-limited, and doesn't have anywhere near the same type of straightforward, organized review books (that actually cover the material). For the MCAT, you could basically just do TBR and learn everything you need to know. It had explanations and tricks. FA is just...a list of the things you need to go study. It's not enough on its own, and even with UFAPS it's a pile of disjointed factoids rather than a cohesive body of knowledge. The test makers even provided an outline for the MCAT with a clear delineation of exactly everything you needed to learn to dominate the exam, whereas with Step we have multiple test-prep publishers putting out a bunch of stuff that they think you should know, but even so, go learn more, because who knows what else the official makers will include.

I stopped studying days before the MCAT because there was literally nothing left to learn. I knew that I knew it all, and there was nothing they could throw at me that I would not only have seen before but approached from that angle before. For Step, even people who rocked the test say there were left-field-type questions.
I also took the MCAT before they changed a lot of things, particularly in regard to the physics material. There used to be a lot of difficult physics concepts in optics, satellite motion, etc that I just couldn't memorize all the formulas for, let alone manipulate the formulas for every possible scenario they could devise, as there are literally infinite ways to use many of the formulas in physics by changing the scenario involved. I had a rather unlucky experience, as I had questions on satellite motion AND optics on my exam lol, both presented in ways that were very obscure.
 
I’ve worked with residents who seemed clinically fine to quite good, matched to good fellowships, and failed boards. They also underperformed on USMLE and in training exams. I don’t doubt there are people with test anxiety or some other deficit that doesn’t hinder them outside the closed-book exam context, which context is pretty artificial compared to the actual practice of medicine. But for better or worse those exams matter (for residency placement, for licensure and board certification) and it would be worthwhile to try to remediate such deficits if you have them.
 
I also took the MCAT before they changed a lot of things, particularly in regard to the physics material. There used to be a lot of difficult physics concepts in optics, satellite motion, etc that I just couldn't memorize all the formulas for, let alone manipulate the formulas for every possible scenario they could devise, as there are literally infinite ways to use many of the formulas in physics by changing the scenario involved. I had a rather unlucky experience, as I had questions on satellite motion AND optics on my exam lol, both presented in ways that were very obscure.

I loved that stuff. I hate that the intuitive quantitative logic has been left out of the new version. It's there, but very watered down. Alas, at the med school level, medicine doesn't actually require it, since most of it is just memorizing a **** ton of guidelines and manipulating basic knowledge. The twists in the PS and verbal passages are what make the MCAT g-loaded, a longer-term skills assessment rather than just a test of factoids. Step 1 is fundamentally a photographic-memory test with a baseline 120 IQ needed to do pretty much all of the reasoning. The rate-limiting reagent for success is diligence + memory rather than logical intuition. Plenty of tryhards who studied 6 months for the MCAT and couldn't get all that far still beast Step 1. These same people struggled on the SATs or GREs, if they took them. If anything, Step 1 needs a change of focus to make the conceptual questions a greater portion of the test.

I did fairly well but not super well (247), and my worst sections were the most memory-heavy things, microbiology and anatomy. Most of the test felt like you-know-it-or-you-don't type stuff. Truly intriguing questions were the minority. On the other hand, I found the whole MCAT verbal section to be a very good test of one's ability to perform the highest-level cognitive tasks, insofar as one had to comprehend, synthesize, evaluate, and weigh choices. One has to do that in the clinic.

The hard Step questions for me were those that asked for random embryological origins, lymph nodes, media to grow some types of bacteria, whether something was a single-stranded positive-sense blah, recognizing some pathognomonic shape on a blood smear, what chromosome was messed up in a disease, or whether it's X-linked recessive or just recessive. If anything, the pathophys and phys in UWorld and Kaplan questions were much more thought-provoking.
 
I also took the MCAT before they changed a lot of things, particularly in regard to the physics material. There used to be a lot of difficult physics concepts in optics, satellite motion, etc that I just couldn't memorize all the formulas for, let alone manipulate the formulas for every possible scenario they could devise, as there are literally infinite ways to use many of the formulas in physics by changing the scenario involved. I had a rather unlucky experience, as I had questions on satellite motion AND optics on my exam lol, both presented in ways that were very obscure.
I definitely had plenty of optics; not so sure about the satellite motion. I definitely included all of the equations for the former, and possibly the latter, in my numbers above. There may be infinite ways to use the formulas, but there aren't that many actual formulas. For each one, there are some shortcuts to common ways to combine them, and you have to practice recognizing which variables you were given, but beyond that...the variation is fairly meaningless, other than that it keeps the questions interesting.

I loved that stuff. I hate that the intuitive quantitative logic has been left out of the new version. It's there, but very watered down. Alas, at the med school level, medicine doesn't actually require it, since most of it is just memorizing a **** ton of guidelines and manipulating basic knowledge. The twists in the PS and verbal passages are what make the MCAT g-loaded, a longer-term skills assessment rather than just a test of factoids. Step 1 is fundamentally a photographic-memory test with a baseline 120 IQ needed to do pretty much all of the reasoning. The rate-limiting reagent for success is diligence + memory rather than logical intuition. Plenty of tryhards who studied 6 months for the MCAT and couldn't get all that far still beast Step 1. These same people struggled on the SATs or GREs, if they took them. If anything, Step 1 needs a change of focus to make the conceptual questions a greater portion of the test.

I did fairly well but not super well (247), and my worst sections were the most memory-heavy things, microbiology and anatomy. Most of the test felt like you-know-it-or-you-don't type stuff. Truly intriguing questions were the minority. On the other hand, I found the whole MCAT verbal section to be a very good test of one's ability to perform the highest-level cognitive tasks, insofar as one had to comprehend, synthesize, evaluate, and weigh choices. One has to do that in the clinic.

The hard Step questions for me were those that asked for random embryological origins, lymph nodes, media to grow some types of bacteria, whether something was a single-stranded positive-sense blah, recognizing some pathognomonic shape on a blood smear, what chromosome was messed up in a disease, or whether it's X-linked recessive or just recessive. If anything, the pathophys and phys in UWorld and Kaplan questions were much more thought-provoking.
I kinda feel like the MCAT's job was to see if you could think critically enough to succeed in medicine, though, while Step 1 largely exists as a motivator to actually cram as much as possible into your head before going on to where the real learning occurs (mostly residency). I also don't think that programs really mind having a metric for how hard someone is going to work/how much capacity they have to actually retain what they work at.
 
I definitely had plenty of optics; not so sure about the satellite motion. I definitely included all of the equations for the former, and possibly the latter, in my numbers above. There may be infinite ways to use the formulas, but there aren't that many actual formulas. For each one, there are some shortcuts to common ways to combine them, and you have to practice recognizing which variables you were given, but beyond that...the variation is fairly meaningless, other than that it keeps the questions interesting.


I kinda feel like the MCAT's job was to see if you could think critically enough to succeed in medicine, though, while Step 1 largely exists as a motivator to actually cram as much as possible into your head before going on to where the real learning occurs (mostly residency). I also don't think that programs really mind having a metric for how hard someone is going to work/how much capacity they have to actually retain what they work at.

Yes. It's very fair. The effort-to-outcome ratio on Step 1 is very solid relative to previous tests med students have probably taken. It's not a bad surrogate marker for the hard work one has done, although class exams and preclinical rank can also showcase that.
 
So I recently overheard another student say that while they stink at UWorld and boards, they're much better in clinical practice and basically know their stuff. What do y'all think?
It is possible to be mediocre in terms of shelf and board scores but be a better and even more knowledgeable physician than your peers who do better on those exams. The reason is that the USMLE and NBME shelf exams test a fairly high amount of useless or outdated information.
 
On the other hand, I found the whole MCAT verbal section to be a very good test of one's ability to perform the highest-level cognitive tasks, insofar as one had to comprehend, synthesize, evaluate, and weigh choices. One has to do that in the clinic.

What evidence do you have for this statement? There are PhDs in math/physics who get average verbal scores on the GRE/SAT. For example, Richard Feynman would have done at best average on MCAT verbal, but we all know he more than proved that he could perform the "highest-level cognitive tasks." There are several domains of intelligence, and it is hard to reliably quantify them. We don't even understand the basis of consciousness, and somehow you think we can reliably make confident statements about higher-order cognitive attributes?

I also don't see radiology or pathology in your future, since you despise memorization so much. I used to think like you, but having more knowledge actually gives you a broader base of information to enhance your thinking and provide better care. Medicine is memory-intensive, with some specialties requiring more of it than others.

Nobody cares about your SAT in college, MCAT in medical school or USMLEs when you begin residency or fellowship. Just climb the hurdle and move on.
 
What evidence do you have for this statement? There are PhDs in math/physics who get average verbal scores on the GRE/SAT. For example, Richard Feynman would have done at best average on MCAT verbal, but we all know he more than proved that he could perform the "highest-level cognitive tasks." There are several domains of intelligence, and it is hard to reliably quantify them. We don't even understand the basis of consciousness, and somehow you think we can reliably make confident statements about higher-order cognitive attributes?

I also don't see radiology or pathology in your future, since you despise memorization so much. I used to think like you, but having more knowledge actually gives you a broader base of information to enhance your thinking and provide better care. Medicine is memory-intensive, with some specialties requiring more of it than others.

Nobody cares about your SAT in college, MCAT in medical school or USMLEs when you begin residency or fellowship. Just climb the hurdle and move on.

His mathematical IQ was off the charts, as was his creativity. We have no idea how his verbal would have been. He had a 128 IQ measured as a schoolboy; that's all we know about his testing. I know no one cares about those. I'm just saying which ones I thought were more fun, and fun to me is logical-reasoning based. None of them are actually all that g-loaded anymore, hence why MENSA no longer accepts them. I'm not saying I'd ever want to join MENSA, but I trust their standards for what counts as an IQ test, and those (the SAT and MCAT, I think) used to count. The LSAT still counts, I believe.

The USMLE is an extremely imperfect marker of conscientiousness, but it's the best one schools have, insofar as the anecdotes in rec letters generally aren't sufficient to make value judgments because most people get the same buzzwords said about them. Clinical grades are good too, but those mostly rely on shelf exams, which again are just more board questions.

I definitely don't plan on doing radiology or pathology. I like the electrophysio I'm exposed to.

Once again, I certainly agree that no one cares about these in practical circumstances. It's extremely stupid to even really mention them in such. So I agree wholeheartedly with you there.
 
If anything, the pathophys and phys in UWorld and Kaplan questions were much more thought-provoking.

Yep. UWorld has been by far the most critical-thinking-heavy bank I have seen so far, yet the factoid memorization is still there. I think there were around 10 almost purely critical-thinking questions (that I've encountered so far) that a person with an introductory biology background could answer, surprisingly enough...
 