Trends in LizzyM Scores Over Time

Deans of Admission don't have to teach the students; we Faculty do. That's all you need to know.


I'm just curious as to the reason behind the disconnect between adcoms and the Dean of Admissions on issues like this. One would figure you have similar goals. Is this just a difference of opinion or is it rooted in having different goals as the dean vs the adcom?

In that context I agree; it's definitely much more difficult to imagine AI/robots rushing to the scene.

I think the advent of such technologies, though, will be very fruitful and beneficial to physicians' overarching mission. Because, in the end, we're (well, not me, but they... eventually me, hopefully!) serving our patients and caring for their wellbeing. If we're honest and stick to that mission, we ought to embrace any technology that is shown to aid it, despite the negative (economic) repercussions it may have.
I don't think anyone in this thread has said that they would not use technology if it improved outcomes.
Two things:

I do believe you are conflating strong AI with weak AI.

And you're discounting the lack of return on capital for some technological feats that may be possible but are too costly to develop, implement, and service compared to the status quo.

We can develop a device that can tell whether something is stuck in your throat by training it, via images and weak AI, to recognize what a clear airway and an obstructed airway look like. We can't, however, put it in a portable, self-propelled system that analyzes the possible causes of arrest, reviews your medical history, collects history from your family, analyzes it, removes the blockage, initiates CPR, places shock pads on you, finds a vein, injects you with some epi, and analyzes the response. It could potentially be done, but the undertaking is comparable in complexity to the moon landing. The cost, time, regulatory approval, safety testing, and security testing would put the price so high and the return so small that it would be difficult to justify the implementation. This will happen, after strong AI is developed, but that is probably beyond our expected life spans.
 
Update

So I wanted to see how matriculant LizzyM scores track with matriculation rates to medical school, where:

matriculation rate = number of matriculants / number of applicants * 100%

I simplified the results by focusing only on US MD schools using AAMC Table A-16.

[Plot: mean matriculant LizzyM score vs. matriculation rate, US MD schools]


The correlation looks fairly moderate, with R^2 = 0.4866; r = -sqrt(0.4866) = -0.698, since the correlation is negative.

Given that there is some relationship between the two quantities, I used a metric I'm calling a competition ratio to measure competitiveness in medical school admissions, where:

competition ratio = matriculant LizzyM score / matriculation rate (in %)

Lower competition ratios, resulting from lower matriculant LizzyM scores and/or higher matriculation rates, suggest lower competitiveness, so it is easier to get into medical school. The reverse is true for higher competition ratios. Competition ratio trends for US MD schools are shown below.

[Plot: competition ratio by entering class year, US MD schools]
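For anyone who wants to sanity-check the arithmetic, here's a minimal Python sketch of the two calculations above (the 68.4 LizzyM / 39.6% inputs are the 2016 figures that come up later in the thread, used purely as an illustration):

```python
import math

# r from R^2: take the negative root because LizzyM score falls as matriculation rate rises
r_squared = 0.4866
r = -math.sqrt(r_squared)
print(round(r, 3))  # -0.698

def competition_ratio(lizzym, matric_rate_pct):
    """Competition ratio = matriculant LizzyM score / matriculation rate (in %)."""
    return lizzym / matric_rate_pct

# 2016 entering class, using the figures cited later in the thread
print(round(competition_ratio(68.4, 39.6), 2))  # ~1.73
```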


Feel free to share your comments below!
 

This is fantastic, because we can finally say that competition is increasing. I wonder what this graph would look like if we had data looking back into the 90s, but we don't, do we?

Also, am I interpreting a competition ratio correctly if I say

2016 CR: 1.74
2007 CR: 1.5

1.74/1.5 = 1.16, therefore competition for medical school has increased by about 16% over the past decade?
 

It's in the AAMC Data Book which is copyrighted. I think to measure relative increase in competition, we would use something like a percent change, with 2007 being the base year, so:

% change in competition = (2016 CR - 2007 CR) / (2007 CR) * 100% = (1.74 - 1.5) / 1.5 * 100% = 16% increase, basically what you suggested.
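Or, as a trivial sketch of the same arithmetic, just to make the base-year convention explicit:

```python
def pct_change(new, old):
    """Percent change relative to the base (old) year."""
    return (new - old) / old * 100

# 2007 is the base year
print(round(pct_change(1.74, 1.5), 1))  # 16.0 -> ~16% increase in the competition ratio
```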
 
If I could make one suggestion: I would change the scales on those charts, since autofitting the scale on the competition ratio is a little misleading.
 
Wait, isn't a competition ratio a measure of apps:seats?

Someone please explain to me why it makes sense to treat LizzyM/Matric% as a metric here? It seems like a weird variable. It would claim that an increase in LizzyM of 5 points (absolutely massive) is offset by an admit rate climb of only 2.9% (not crazy at all, we've seen more than that much change in the last few years).

68.4/39.6 = 73.4/42.5 = 1.73
 

Isn't apps:seats simply 1/matric rate? And 68.4/39.6 = 73.4/42.5 being equal actually makes sense, since it shows that both are equally competitive, which is true. It's like how low-yield schools like Georgetown can rival top tiers like Harvard in competitiveness.

Not exactly sure where you got the 68.4/39.6 = 73.4/42.5 = 1.73 example from, though. To me that looks like comparing competitiveness between two schools.

EDIT: Never mind, the 1.73 is the competition ratio for the 2016 entering class year. So I guess what you're saying is that LizzyM score change = 1.73 * matric rate change? A matric rate increase of 1% would correspond to a 1.73-point increase in LizzyM score. Yeah, that can be a pretty steep offset.
 
Frankly, the sort of people who focus enormous resources on scoring 520+ are not the sort that I find desirable as medical students, but I'm not the Dean.
Just had an MS-4 tell me she got great Steps 1 & 2 scores, top of her class... arrogant as can be... discussed the private and confidential medical records/diagnosis of a pro athlete on social media because she has access to the records from behind the scenes. Scary to think what she might do with patient records. No social skills and, obviously, a lack of discretion.

As a side note, I'm rather pleased my LM is 71... depending on how the 30-year-old grades are used (if not used, 71.2; if used, not so much... LM = subzero :D )
 
Isn't apps:seats simply 1/matric rate? And 68.4/39.6 = 73.4/42.5 being equal actually makes sense, since it shows that both are equally competitive, which is true. It's like how low-yield schools like Georgetown can rival top tiers like Harvard in competitiveness.

Not exactly sure where you got the 68.4/39.6 = 73.4/42.5 = 1.73 example from, though. To me that looks like comparing competitiveness between two schools.
I mean, the term "competition ratio" is used when describing things like job applications, referring to the number of applicants per position.

The entire population right now is represented by 68.4/39.6 (matric LizzyM / Matric%). If the entire population suddenly, next year, had the LizzyM shoot up to an insanely high record 73.4 (+5), it would, according to your metric, be totally offset by the admit rate returning to around where it was a few years ago (42.5%).

That's bonkers, man. This LizzyM/Matric% metric isn't a sensible thing that represents anything well.
 

I just used it as an arbitrary name, not in the apps:seats = 1/matric rate sense. And generally, a +5 increase in LizzyM score would be matched with a decrease in matric %, not an increase (you can see the general downward trend in the regression graph above). I don't know why schools would suddenly admit more applicants from a highly competitive applicant pool. But in your scenario, yeah, the two metrics would be the same.

I'll rename the competition ratio, but the LizzyM/Matric% metric assumes that LizzyM scores decrease with increasing matric rate. It's a metric that accounts for competitiveness as measured by two commonly used quantities: more competitive schools tend to have higher LizzyM scores and lower admit rates, and this is reflected in higher LizzyM/Matric% values.
 
Isn't the number of applicants not necessarily relevant when it comes to competitiveness? If 100,000 more people apply next year than this year, but they all have LM=30, competition doesn't actually increase. Can't competition simply be measured by looking at the average LM at the lowest percentile accepted? For example if 40% of applicants are accepted, can't we just look at the average LM of applicants at the 60th percentile of the applicant pool and call that our competitiveness score? This should also work for individual schools, e.g. if Harvard accepts 6% of applicants we can look at the LM score of applicants at the 94th percentile of Harvard's applicant pool and call that Harvard's competitiveness score
 
The flaw I'm pointing out is that LizzyM/Matric% is not a good way to measure competitiveness. One of the two metrics varies much more than the other with less significant impact, so giving them equal weights in an X/Y setup doesn't work out, like in my example. It is an arbitrary way to combine the two that doesn't capture anything well...it would be like me suggesting we multiply LizzyM by twelve and then divide by half the square root of matriculation percent.

The best way to look at this data is one plot of admit rate, and another plot of LizzyM. Seeing two trends from a common cause doesn't mean you should mash them up like that.

Like if I'm measuring the dryness of California and I see reservoir levels decreasing in large increments and fire rates increasing in small increments, it does not follow that one divided 1:1 by the other is a good measure of Californian dryness.
 

I mean, I can easily adjust the metric to account for the differences in variation by using the slope of the linear regression of LizzyM score vs. matric% (although, granted, the correlation is only moderate). It's just a way to get a simplified look at competitiveness, which makes sense intuitively.

Isn't the number of applicants not necessarily relevant when it comes to competitiveness? If 100,000 more people apply next year than this year, but they all have LM=30, competition doesn't actually increase. Can't competition simply be measured by looking at the average LM at the lowest percentile accepted? For example if 40% of applicants are accepted, can't we just look at the average LM of applicants at the 60th percentile of the applicant pool and call that our competitiveness score? This should also work for individual schools, e.g. if Harvard accepts 6% of applicants we can look at the LM score of applicants at the 94th percentile of Harvard's applicant pool and call that Harvard's competitiveness score

I used the reported average LizzyM scores of those who actually matriculated at medical school, from the AAMC tables. The matriculant pool is the critical factor here. Having more people apply to medical school while the same number matriculate just means the admit rate decreased and admissions got more competitive (which is true). Applicant LizzyM scores aren't used.

School-specific analysis is definitely possible, but that depends on the MSAR (which is copyrighted) or school websites (which may be fudged and not reliable).
 
It really doesn't make sense intuitively to measure competitiveness as a 1:1 X/Y of those two values. A 10% change in LizzyM (~7 points) is insanity, while a 10% change in admit rate (~4%) is something we've seen happen over just a few years. I can see I won't convince you, but for future readers of the thread I have to point out that it makes little sense to me to use this value.
 

Yeah, this is right; I hadn't considered that the interpretations behind the variances of the two variables were so different from one another. I agree they should be kept separate.

Shame; I just wanted to actually see some metric of how much more competitive admissions is at any point in time.
 
It really doesn't make sense intuitively to measure competitiveness as a 1:1 X/Y of those two values. A 10% change in LizzyM (~7 points) is insanity, while a 10% change in admit rate (~4%) is something we've seen happen over just a few years. I can see I won't convince you, but for future readers of the thread I have to point out that it makes little sense to me to use this value.

? I'm not talking about a 1:1 ratio here. I'm saying I can use that slope of the linear regression between LizzyM score vs matric rate to adjust the ratio to account for the variations.

Doing so would be something like:

competition ratio = (1 / (slope / 100)) * (matriculant LizzyM score / (matriculation rate * 100) ) -->
competition ratio = (1 / (25 / 100)) * (matriculant LizzyM score / (matriculation rate * 100) ) -->
competition ratio = (4 * matriculant LizzyM score) / (matriculation rate * 100) -->

matriculant LizzyM score = (1/4) * (competition ratio) * (matriculation rate * 100)

A 1% increase in matriculation rate would correspond to (1/4) * 1.73 * 1 = 0.4325 increase in LizzyM score. So a 2.9% increase in matriculation rate would be (1/4) * 1.73 * 2.9 = 1.25 increase in LizzyM score.
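In code, roughly (with placeholder yearly data, since the real AAMC values aren't reproduced here; the fitted slope won't be exactly the 0.25 LizzyM points per percentage point implied by the slope of 25 used above, the point is just the mechanics of dividing out the regression slope instead of weighting the two quantities 1:1):

```python
import numpy as np

# Placeholder data: matriculation rate (%) and mean matriculant LizzyM score per entering year.
matric_rate_pct = np.array([44.0, 43.5, 42.8, 42.0, 41.4, 40.9, 40.4, 40.0, 39.8, 39.6])
lizzym          = np.array([66.5, 66.8, 67.0, 67.3, 67.5, 67.7, 67.9, 68.1, 68.3, 68.4])

# Slope of the LizzyM-vs-matriculation-rate regression, in LizzyM points per percentage point.
slope, intercept = np.polyfit(matric_rate_pct, lizzym, 1)

# Slope-adjusted competition ratio: rescale the denominator by the empirical exchange rate
# between the two quantities rather than treating a 1-point LizzyM change and a
# 1-percentage-point rate change as equivalent.
adjusted_cr = lizzym / (abs(slope) * matric_rate_pct)
print(round(slope, 3), np.round(adjusted_cr, 2))
```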
 

It's not so much that you can't adjust for the rate of change of each variable as that the change in each variable carries a very different significance.
Like Efle said:

10% higher matriculation rate: new schools opened? Number of applications dropped?

10% higher LizzyM? Significantly higher selection for stats was necessary.

The alternative would be to add a weight coefficient to the LM score, but any amount you weighted it by would be an essentially arbitrary decision. Thus, putting them together masks the significance of either effect rather than highlighting an overall effect.
 

Simplifying two metrics into one reduces information, but I was more focused on accounting for the differences in variation. At least now, with the adjusted metric, a 2.9% increase in matric rate doesn't correspond to a major increase in LizzyM score.

The purpose of a new metric is to measure competition trends uniformly over time. Understanding the possible causal factors behind changes in competition requires more information.
 
Significantly higher selection for stats
And this would be a massive understatement. Even with GPA absorbing as much of it as possible (3 points), a 4-point jump in admitted MCAT would mean going from roughly the top ~17% to the top ~4%.

The score you need to be competitive becoming 4x rarer is so, so, so much more significant than a few percent fewer admitted.
 

What are your thoughts on the revision? I'm focusing on the statistical variations, not so much on explanatory power.

? I'm not talking about a 1:1 ratio here. I'm saying I can use that slope of the linear regression between LizzyM score vs matric rate to adjust the ratio to account for the variations.

Doing so would be something like:

competition ratio = (1 / (slope / 100)) * (matriculant LizzyM score / (matriculation rate * 100) ) -->
competition ratio = (1 / (25 / 100)) * (matriculant LizzyM score / (matriculation rate * 100) ) -->
competition ratio = (4 * matriculant LizzyM score) / (matriculation rate * 100) -->

matriculant LizzyM score = (1/4) * (competition ratio) * (matriculation rate * 100)

A 1% increase in matriculation rate would correspond to (1/4) * 1.73 * 1 = 0.4325 increase in LizzyM score. So a 2.9% increase in matriculation rate would be (1/4) * 1.73 * 2.9 = 1.25 increase in LizzyM score.
 
Any revision will be arbitrary. We have no reason to expect the relation is linear, and certainly no good reason to say it is causal at all in either direction. The R^2 in the plot above is weak. Like Lucca said, you would essentially have to try to fix the situation by arbitrarily weighting the LizzyM changes differently, but there is no good way to pick a weight value there.

My personal approach is always going to be looking at LizzyM and admit rate as two totally separate things.
 
And really they behave differently, too. GPAs have crept very steadily, MCAT not as steadily, and admit rate has been up and down. Trying to put all of them together into an arbitrary value just doesn't make sense.
 
Looks like overall inflation is about 0.05 GPA points per 5 years on the gradeinflation site. The median GPA of students admitted to med school rose 0.06 in 9 years.

So slower than grade inflation.
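Annualizing both figures makes the comparison explicit (a trivial sketch using only the numbers quoted above):

```python
# GPA points per year, from the figures above
overall_inflation_per_year = 0.05 / 5   # ~0.0100 (general grade inflation)
admitted_rise_per_year     = 0.06 / 9   # ~0.0067 (median GPA of admitted students)
print(admitted_rise_per_year < overall_inflation_per_year)  # True -> rising more slowly than inflation
```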
 
This is the median; it is possible that some schools that are overrepresented among medical school matriculants have faster-than-median GPA inflation. Also, MCAT inflation might be impacting the LizzyM analysis.

I might be in the minority, but matriculants/applicants might be a better gauge of competitiveness. That metric runs into the problem of self-selection in the DO vs. MD applicant pools, though.
 
I've seen MCAT inflation mentioned a few times now and still don't understand what anyone means by it. The median admitted MCAT is in a pretty dang static portion of the curve (30-32):

 
it is possible that some schools that are over represented in medical schools have faster then median gpa inflation
Wouldn't this actually cause admitted GPA to rise faster than inflation, not slower?

Edit: I think my post above was ambiguous about which was faster.
 
It would make admitted GPAs rise faster, explaining the trend line, while the median GPA inflation stays at the described value. So you could see an artificial increase in LizzyM relative to the actual change in competitiveness of medical school admissions.
 
Admitted GPAs have been rising more slowly than the inflation trend.
 
Any revision will be arbitrary. We have no reason to expect the relation is linear, and certainly no good reason to say it is causal at all in either direction. The R^2 in the plot above is weak. Like Lucca said, you would essentially have to try to fix the situation by arbitrarily weighting the LizzyM changes differently, but there is no good way to pick a weight value there.

My personal approach is always going to be looking at LizzyM and admit rate as two totally separate things.

Wasn't arbitrary. I just used the simple regression slope as the correction factor, which makes sense. That is the best statistical correction possible.

R^2 is moderate with only 10 data points. The AAMC Data Book has everything, but unfortunately it's copyrighted, so I'm stuck. The key thing is that there is a correlation between matriculant LizzyM scores and admit rate with an R^2 of nearly 0.50. That's a compelling reason to combine the two metrics, imo. It isn't as if the correlation is nearly 0.

The key limitations are the loss of some information from merging the metrics and the limited data set. But I think it could be a fairly useful tool.
 
I just used the simple regression slope as the correction factor
It only makes sense if you expect the amount of change in one to directly relate to some amount of change in the other, no? There really isn't reason to believe that: the admit percent curve has been all over the place, distinctly different from the very clean LizzyM trends:

[Plot: admit rate and matriculant LizzyM trends by entering year]
 
The key thing is that there is a correlation between matriculated LizzyM scores and admit rate that has R^2 of nearly 0.50. That's a compelling reason to combine the two metrics imo.
Someone link the spurious correlations website to shut the bolded down ASAP
 

But the R^2 for LizzyM score vs. admit rate is 0.49. It's close to 0.50 despite the limited data set, even though the curves look different.

And in theory, this makes sense. Schools want the best candidates possible, so they will try to optimize for that in some way. LizzyM scores generally increase with competition pressures, as can be seen in many cases (see Hofstra and NYU).

Someone link the spurious correlations website to shut the bolded down ASAP

Except the two metrics are related in theory and in practice. What other factors common to both could be contributing to this correlation?
 
I've seen MCAT inflation mentioned a few times now and still don't understand what anyone means by it. The median admitted MCAT is in a pretty dang static portion of the curve (30-32):
I read that, because the exam hadn't changed drastically in a long time, better prep material and the availability of better resources contributed to an upward creep of the average. Also, if you look at the score distributions of the old exam (2013), you will see that Verbal and BS have peaks around 10 and don't show the normal distribution of the new exam. I could have sworn I read it in one of the AAMC documents when the new exam was being introduced, but I can't seem to find it.
 
I do recall something about normalizing the subsections, yeah; the bins were a little too short on the far left and a little too tall at 9-10. But I don't think the percent scoring a 30-32+ composite has changed by more than a single percentage point over many years! LizzyM creep is real as far as I can see.
 
But the R^2 for LizzyM score vs. admit rate is 0.49. It's close to 0.50 despite the limited data set, even though the curves look different.

And in theory, this makes sense. Schools want the best candidates possible, so they will try to optimize for that in some way. LizzyM scores generally increase with competition pressures, as can be seen in many cases (see Hofstra and NYU).



Except the two metrics are related in theory and in practice. What other factors common to both could be contributing to this correlation?
They are not related in theory and practice. You can literally see, in what I posted, an up-then-down pattern in the admit rate that is not mirrored in the stats. There could be a bunch more applicants in a given year, but maybe they tend to be on the weak side, so competition at the upper end doesn't change much. There could be a bunch more seats that open up somewhere with a new school, but it is in-state-mission, so the vast majority have their odds unaffected. And so on. One plot is a funny curvy upside-down U and the other isn't; I can't make a much clearer case than that.
 

But if more people apply to schools whose matriculant stats match their own, wouldn't this result in the stats creep? That's how top schools' matriculant MCAT medians rose from ~33 several years ago to ~37 now.
 
When did top schools have medians of 33?? Again though, man, you're positing explanations for a trend coupling that doesn't exist.
 

Idk, it was commonly cited on here that top schools like Harvard had MCAT medians of 33 back in 2000 or so. I don't have data for this, so I could be totally off.

But my theory is that more people applying to schools with matching stats would force the schools to accept higher-quality applicants, who incidentally have higher stats than average, thus contributing to the creep.
 
it was commonly cited on here that top schools like Harvard had MCAT medians of 33 back in 2000
I have no idea how valid it is but I did find this from a SUNY bio department.

Apparently, these schools were the only places with a 30+ MCAT in the early/mid 2000s, and yeah the best of the best were like 32-34:

[Table: median MCAT scores at selected schools, early/mid-2000s]


This blows my mind.
 
Alright I think I found a way to non-arbitrarily combine all the data to measure changes in competitiveness. The bottom line here is that average matriculant data doesn't do us much good, because you don't need to be at that average LizzyM score to get into med school, you only need to be above the acceptance-rejection threshold (i.e. you need to get to a LizzyM where you're at the percentile of all applicants that is just above the percent of applicants who are rejected). Without having to arbitrarily divide LizzyM by Matric% or anything like that, we can just find the LizzyM score at that "threshold" percentile, since both the percentile and corresponding LizzyM score will change dynamically and non-arbitrarily every year.

Edit: Updated 1992-2015 Competitiveness Graph:

[Graph: updated 1992-2015 competitiveness (threshold LizzyM score by entering year)]


Let me know what you think of this @efle @Lawper @Lucca

100 - acceptance rate = threshold percentile of the applicant pool that applicants must reach
[Table: threshold percentiles by year]


Based on changes in GPA averages over the years, these are the GPAs necessary to reach the changing thresholds of the overall applicant pool, based on changing acceptance rates (and on a GPA SD of 0.34, which was accurate for 2016 but not necessarily for the other years, though it should be close enough)
[Table: threshold GPAs by year]


Same process for the MCAT
[Table: threshold MCAT scores by year]


Combining those dynamic GPA percentiles, MCAT percentiles, and applicant pool thresholds based on acceptance rate changes, here's a plot of LizzyM scores needed to reach the relevant acceptance thresholds. I believe these LizzyM scores can pretty reasonably be considered "competitiveness" scores for the overall applicant pools.
[Plot: threshold LizzyM scores by year]
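In code, the idea is roughly this: invert a normal CDF at the (100 - acceptance rate) percentile for GPA and MCAT separately, then combine the results into a LizzyM score (MCAT + 10 * GPA). A minimal sketch follows; the means, SDs, and acceptance rate are illustrative placeholders rather than the actual AAMC values, and the rough-normality assumption is exactly the part that's open to criticism.

```python
from scipy.stats import norm

def threshold_lizzym(accept_rate_pct, gpa_mean, gpa_sd, mcat_mean, mcat_sd):
    """LizzyM score at the applicant-pool percentile an applicant must clear,
    assuming (roughly) normal GPA and MCAT distributions among applicants."""
    # e.g. a 41% acceptance rate puts the threshold at the 59th percentile
    q = (100 - accept_rate_pct) / 100
    gpa_threshold = norm.ppf(q, loc=gpa_mean, scale=gpa_sd)
    mcat_threshold = norm.ppf(q, loc=mcat_mean, scale=mcat_sd)
    # Old-scale LizzyM: MCAT + 10 * GPA
    return mcat_threshold + 10 * gpa_threshold

# Illustrative inputs: the 0.34 GPA SD is the value used above; the rest are placeholders.
print(round(threshold_lizzym(41.0, 3.55, 0.34, 28.0, 6.0), 1))
```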
 
I have no idea how valid it is but I did find this from a SUNY bio department.

Apparently, these schools were the only places with a 30+ MCAT in the early/mid 2000s, and yeah the best of the best were like 32-34:

[Table: median MCAT scores at selected schools, early/mid-2000s]


This blows my mind.

So this supports what I suggested before? That more people applying to schools with matching stats would force the schools to accept higher-quality applicants, who incidentally have higher stats than average, thus contributing to the creep.
 
Alright I think I found a way to non-arbitrarily combine all the data to measure changes in competitiveness. The bottom line here is that average matriculant data doesn't do us much good, because you don't need to be at that average LizzyM score to get into med school, you only need to be above the acceptance-rejection threshold (i.e. you need to get to a LizzyM where you're at the percentile of all applicants that is just above the percent of applicants who are rejected). Without having to arbitrarily divide LizzyM by Matric% or anything like that, we can just find the LizzyM score at that "threshold" percentile, since both the percentile and corresponding LizzyM score will change dynamically and non-arbitrarily every year.

Let me know what you think of this @efle @Lawper @Lucca

I'm in a bit of a rush so I've only calculated all of this for the past four years.

Acceptance rates:
[Attachment: acceptance rates by year]

100 - acceptance rate = threshold percentile of the applicant pool that applicants must reach
[Attachment: threshold percentiles by year]

Based on changes in GPA averages over the years, these are the GPAs necessary to reach the changing thresholds of the overall applicant pool, based on changing acceptance rates (and on a GPA SD of 0.34, which was accurate for 2016 but not necessarily for the other years, though it should be close enough)
[Attachment: threshold GPAs by year]

Same process for the MCAT
[Attachment: threshold MCAT scores by year]

Combining those dynamic GPA percentiles, MCAT percentiles, and applicant pool thresholds based on acceptance rate changes, here's a plot of LizzyM scores needed to reach the relevant acceptance thresholds. I believe these LizzyM scores can pretty reasonably be considered "competitiveness" scores for the overall applicant pools.
[Attachment: threshold LizzyM scores by year]
Haha, love it. The only criticism I have is that there is no way the GPA distribution is anything approaching normal, because you cannot even get to the median + 1 SD (4.06), while you will have an extremely long left tail, with people beyond -1 SD still having some luck.

I'd also be wary of the MCAT one, with the new test having rolled out and making things weird.

Overall, though, the fact that the change has been extremely minor on the small time scale (less than a percent climbed in a few years) sounds about right.
 
So this supports what I suggested before? That more people applying to schools with matching stats would force the schools to accept higher-quality applicants, who incidentally have higher stats than average, thus contributing to the creep.
I think I'd tell a different story: med schools very suddenly developed an interest in class profiles with exam scores as high as they could push them without sacrificing quality packages/ECs. Right around the time that the internet and comparisons/rankings became widespread among applicants to college and graduate school...
 
I have no idea how valid it is but I did find this from a SUNY bio department.

Apparently, these schools were the only places with a 30+ MCAT in the early/mid 2000s, and yeah the best of the best were like 32-34:

[Table: median MCAT scores at selected schools, early/mid-2000s]


This blows my mind.

Jesus Christ.....times have changed. I don't want to imagine what the scores will be 10 years from now.
 