WedgeDawg's Applicant Rating System (Updated Jan 2017)


WedgeDawg

Link to online WARS Calculator

Latest version is 1.3 (Released January 2017)
Collaborator credit: @To be MD


Introduction

As some of you may have seen, I've recently been pioneering a new system that helps applicants figure out where they stand with respect to medical school admissions and gives them a place to start when creating a school list. My system is a comprehensive algorithm that takes into account all of the major (and some of the minor!) factors that go into building a successful application. This post aims to elucidate the process by which the method scores an applicant and to solicit community input on the algorithm to strengthen it even further.

When I first started building this system, I used a Google Docs spreadsheet to make notes and create the initial versions of some of the formulas that go into this program. To calibrate it, I scored myself, other applicants I knew in real life, and many applications I found in the What Are My Chances (WAMC) forum, adjusting the rating scales to create a generalized model that placed applicants into appropriate discrete categories.

Once I had my initial quantitative rating system in place, I wrote a Python script that let me easily score an applicant based on the factors normally included in a WAMC thread and generate the output I normally post in those threads. This is the point at which I started posting in threads as well to see how well my formulations matched up with community suggestions.

Finally, after some more tweaking, I created a comprehensive Excel document that contains instructions, qualitative descriptions of each factor that are then reduced to a numerical score, a place to input score values and receive a score in addition to a category level and school breakdown, and a page that displays which schools are in which categories. This document is available for download.

I will go through each of these factors in this post to articulate how they fit into the overall scoring paradigm as well as solicit input from the SDN community about how to increase the accuracy of this system.

The LizzyM System

This system was originally created as a supplement to, not a replacement for, the already widely-utilized LizzyM scoring system. As a reference, the LizzyM score is defined as (GPA*10)+MCAT and may contain a +1 or -1 modifier in certain situations. The applicant's LizzyM score is then compared to the LizzyM score for a school to determine whether or not the applicant is statistically competitive for that school. However, the inherent simplicity of the LizzyM score, while making it quick and easy to generate and apply, also creates problems endemic to systems that reduce and generalize. The two major simplifications are the reduction of an entire application to two (already numerical) metrics and the assumption that the LizzyM score accounts for the majority of, if not all of, the variability attributed to selectivity.

While there is merit to these assumptions, which is why the LizzyM score is so widely used, there are also deficiencies that need to be addressed in order to create a more accurate system for assessing an application. One of these deficiencies is that schools with similar LizzyM scores may be at very different levels of competitiveness. For example, although UVA and Duke have identical LizzyM scores, it is clear that Duke is a much more selective school than UVA. Additionally, small differences in LizzyM score become significant when using this metric to assess competitiveness for two similar schools. For example, Duke has a LizzyM score of 75, while Yale has a LizzyM score of 76; both schools are similarly selective, but someone might (very mistakenly) advise an applicant with a 3.9/36 that they are more competitive for Duke than they are for Yale. Finally, the LizzyM score is used to tell whether someone is statistically competitive for a single school and is significantly less useful for helping an applicant come up with a list of schools.
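To make the arithmetic concrete, here is a minimal sketch of the LizzyM calculation in Python (the ±1 modifier is situational, so it is left as an optional argument):

```python
def lizzym(gpa, mcat, modifier=0):
    """LizzyM score = (GPA * 10) + MCAT, plus an optional +1/-1 modifier."""
    return gpa * 10 + mcat + modifier

print(lizzym(3.9, 36))  # 75.0 -- the 3.9/36 (old MCAT) applicant above
```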

The Applicant Rating System - Overview

The WedgeDawg Applicant Rating System (ARS) was created to address these deficiencies. It takes into account most of the factors that make up an application to medical school, scores each factor separately, and then combines these into a single numerical rating. This numerical rating is then translated into a category level, and a profile of schools to apply to is created based on that category.

One of the major assumptions of the ARS is that applicants can be broadly classified in terms of competitiveness into one of 6 categories. Within these categories, distinctions between applicants are much smaller than the differences between applicants in separate groups. Much of the variability between two applicants in the same group comes from subjective parts of the application that are not taken into account here, namely the personal statement, letters of recommendation, secondary essays, and interviews. Because the purpose of the ARS is to create a starting point for a school list, these factors are not yet relevant. Indeed, the ARS does not assess where an applicant will be accepted; rather, it determines the best collection of schools for the applicant to apply to in order to maximize chances of success at the best schools realistically possible.

The following factors are taken into account by the ARS:

  1. GPA
  2. MCAT
  3. Research
  4. Clinical Experience
  5. Shadowing
  6. Volunteering
  7. Leadership and Teaching
  8. Miscellaneous
  9. Undergraduate School
  10. Representation in Medicine
  11. GPA Trend
Each of these categories is assigned a score that corresponds to the strength of that portion of the application; the scores are then weighted and summed. The formula is as follows:

ARS Score = (Stats*5)+(Research*3)+(Clinical Experience [9, 5, -10])+(Shadowing [6, -5])+(Volunteering*2)+(Leadership and Teaching*2)+(Miscellaneous*3)+[(Undergrad-1)*3]+[(URM-1)*7]+[(Upward Trend-1)*4]
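Since the bracketed terms don't read like the plain multipliers, here is a minimal sketch of the formula in Python: Clinical Experience and Shadowing use a level-to-points lookup rather than a flat multiplier, and the last three factors are shifted down by 1 before weighting (input ranges follow the Scoring Methodology section below):

```python
# Level -> points lookups for the two bracketed terms
CLINICAL = {3: 9, 2: 5, 1: -10}
SHADOWING = {2: 6, 1: -5}

def ars_score(stats, research, clinical, shadowing, volunteering,
              leadership, misc, undergrad, urm, trend):
    """Raw ARS score; tops out at 121 when every factor is at its cap."""
    return (stats * 5 + research * 3
            + CLINICAL[clinical] + SHADOWING[shadowing]
            + volunteering * 2 + leadership * 2 + misc * 3
            + (undergrad - 1) * 3 + (urm - 1) * 7 + (trend - 1) * 4)
```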

This score is then translated to one of 6 categories that applicants are grouped into, designated Levels S, A, B, C, D, and E in decreasing score order. The minimum score for each level is as follows:

  • Level S: 85
  • Level A: 80
  • Level B: 75
  • Level C: 68
  • Level D: 60
  • Level E: 0
Note that the score is not out of 100 - it is in fact out of 121 if all factors are assigned the highest possible score. However, the raw number means very little compared to the actual Level assigned to the applicant. Each level has its own profile of schools to apply to; profiles are not parsed out by individual score.
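The score-to-level translation is a simple threshold walk; a sketch, assuming a score at or above a cutoff maps to that level:

```python
THRESHOLDS = [(85, "S"), (80, "A"), (75, "B"), (68, "C"), (60, "D"), (0, "E")]

def ars_level(score):
    """Map a raw ARS score to its applicant Level."""
    for cutoff, level in THRESHOLDS:
        if score >= cutoff:
            return level
    return "E"  # negative scores are possible (e.g., low clinical/shadowing)
```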

School Categories and Applicant Profiles

Schools are similarly grouped into 7 broad categories on the basis of selectivity. The categories are as follows:

Category 1 (TOP): Harvard, Stanford, Hopkins, UCSF, Penn, WashU, Yale, Columbia, Duke, Chicago

Category 2 (HIGH): Michigan, UCLA*, NYU, Vanderbilt, Pitt, UCSD*, Cornell, Northwestern, Mt Sinai, Baylor*, Mayo, Case Western, Emory

Category 3 (MID): UTSW*, UVA, Ohio State, USC-Keck, Rochester, Dartmouth, Einstein, Hofstra, UNC*

Category 4 (LOW): USF-Morsani, Wayne State, Creighton, Oakland, SLU, Cincinnati, Indiana, Miami, Iowa, MC Wisconsin, Toledo, SUNY Downstate, Stony Brook, VCU, Western MI, EVMS, Vermont, WVU, Wisconsin, Quinnipiac, Wake Forest, Maryland

Category 5 (STATE): Your state schools if they do not appear elsewhere on this list - You should always apply to all of these if applying MD

Category 6 (LOW YIELD): Jefferson, Tulane, Tufts, Georgetown, Brown, BU, Loyola, Rosalind Franklin, Drexel, Commonwealth, Temple, GWU, NYMC, Penn State, Albany, Rush

Category 7 (DO): DO Schools

Application profiles give the total number of schools an applicant should apply to in addition to the % of each category that should make up the total. Table 1 shows the score ranges, percentage of schools by category, total number of schools, and whether or not the applicant should apply to Category 6 or 7 schools. State schools should always be applied to if the applicant is applying to any MD schools.

[Table 1: score ranges, percentage of schools in each category, total number of schools, and whether Category 6/7 schools should be included, for each applicant level]

[Figure 1: proportion of the school list by category for each applicant level]

Figure 1 shows the proportion of school list by category for each applicant level. Note that Level E applicants should only be applying to DO schools, as shown in Table 1.
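Mechanically, a profile just splits the total school count across categories by percentage. A sketch with hypothetical placeholder numbers (the real totals and percentages are those in Table 1):

```python
# Hypothetical example profile -- the actual values come from Table 1
# and differ for each applicant Level.
EXAMPLE_PROFILE = {"total": 25,
                   "pct": {1: 10, 2: 20, 3: 30, 4: 25, 6: 15}}  # percent

def school_counts(profile):
    """Split the total number of schools across categories by percentage."""
    return {cat: round(profile["total"] * p / 100)
            for cat, p in profile["pct"].items()}

print(school_counts(EXAMPLE_PROFILE))
# {1: 2, 2: 5, 3: 8, 4: 6, 6: 4} -- plus all Category 5 (state) schools
```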

Scoring Methodology

This section will delineate each of the metrics used to score an applicant in all of the categories mentioned previously. The multiplier for the score will also be shown, as well as the score cap for the section.

Stats

Score Cap: 10
Multiplier: 5

The Stats score is determined by a combination of MCAT and GPA. However, it differs from the LizzyM system in that scores are grouped into broader bands that then determine the applicant's Stats score. This is because under the LizzyM system, an applicant with a 2.9 and a 40 is rated as competitive as someone with a 3.9 and a 30, while this is not true in practice (generally the latter will be more competitive). The LizzyM score appears to be less accurate at the extremes.

Table 2 shows how to determine an applicant's Stats score based on their MCAT and GPA. The number given in the table is the Stats score assigned.

[Table 2: Stats score assigned for each MCAT/GPA combination]

This table was developed from a combination of Tables 24/25 published by the AAMC, which give an applicant's chance of success with a certain MCAT and GPA, and by individually looking at how applicants with certain combinations of GPAs and MCATs fared. Median, 10th, and 90th percentile GPAs and MCATs for schools in each category were also consulted when compiling this chart. GPA is averaged over all applicable fields: undergraduate sGPA, undergraduate cGPA, post-bac GPA, and graduate GPA.

Score conversion percentiles were taken from the old MCAT percentile chart (2012-2014) and the new MCAT percentile chart (2015). The percentile of the old MCAT score was used as the floor for the percentile of the new MCAT. So if 24 was 40th percentile, 25 was 42nd, 490 was 39th, 491 was 40th, and 492 was 41st, then 24 would correspond to 491-492.
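In code, that floor rule might look like the following sketch (the percentile excerpts are just the ones quoted above, not the full AAMC charts):

```python
# Old and new MCAT score -> percentile, excerpted from the example above
OLD_PERCENTILE = {24: 40, 25: 42}
NEW_PERCENTILE = {490: 39, 491: 40, 492: 41}

def new_equivalents(old_score):
    """New-MCAT scores whose percentile sits at or above the old score's
    percentile but below the next old score's percentile."""
    floor = OLD_PERCENTILE[old_score]
    higher = [p for p in OLD_PERCENTILE.values() if p > floor]
    ceiling = min(higher) if higher else 101
    return sorted(s for s, p in NEW_PERCENTILE.items() if floor <= p < ceiling)

print(new_equivalents(24))  # [491, 492], matching the example above
```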


Research Experience

Score Cap: 5
Multiplier: 3

Level 5: Significant, sustained research activity. Generally, applicants in this category will have a first author publication, publication in a high-impact journal, and/or solo presentation of their own, original work at a major conference. These are the research superstars who are performing work well beyond the level of an undergraduate. PhDs will generally fall into this category, too.

Level 4: Significant, sustained research activity, generally for at least 2 years. Applicants in this category may have a poster presentation, a middle author publication in a medium- or low-impact journal, an abstract, or a thesis. These applicants have a strong research focus and perform research above the level of the average undergraduate.

Level 3: Moderate research activity, generally for a year or more. These applicants generally don't have publications or presentations, but may have completed a project.

Level 2: Slight research activity, generally for less than a year.

Level 1: No research activity.

Clinical Experience

Note that clinical experience can be volunteer or non-volunteer experience.

Score Cap: 3
Multiplier: +9, +5, -10 (by Level)

Level 3: Significant, sustained clinical experience, generally for well over a year. These applicants have demonstrated a strong commitment to clinical endeavors and have exposure in a clinical setting well beyond the average applicant.

Level 2: Moderate clinical experience, generally for well under a year. These applicants have adequate/sufficient exposure to clinical activity.

Level 1: Slight or no clinical experience.

Shadowing

Score Cap: 2
Multiplier: +6, -5 (by Level)


Level 2: Adequate shadowing or greater

Level 1: Slight or no shadowing experience.


Volunteering

Note that this section takes into account both clinical and non-clinical volunteering.

Score Cap: 3
Multiplier: 2

Level 3: Significant, sustained volunteering activity, generally over a long period of time, in one or multiple organizations. May also be working with marginalized or disadvantaged groups or in uncomfortable settings.

Level 2: Some volunteering activity, generally with low-to-moderate levels of commitment or sustained activity.

Level 1: Slight or no volunteering experience.

Leadership and Teaching

Score Cap: 3
Multiplier: 2

Level 3: Sustained, significant teaching and/or leadership experience. This category includes applicants who teach grade school students, go on a teaching fellowship, have TA'd or tutored for long periods of time, are the head of a major organization, or have other equally demanding responsibilities.

Level 2: Some teaching and/or leadership experience, often with low-to-moderate levels of commitment or sustained activity.

Level 1: Slight or no leadership or teaching experience.

Miscellaneous

Score Cap: 4
Multiplier: 3

Level 4: Highly significant life experiences or achievements that are seen as outstanding and contribute maximally to personal and professional development. This may include Rhodes scholarships, world class musicianship, professional or Olympic athletics, significant or sustained meaningful or unique work experiences, or anything else outlandishly impressive.

Level 3: Moderately-to-highly significant life experiences or achievements. This includes other terminal graduate degrees such as PhDs or JDs, military or Peace Corps service, as well as intense involvement with a unique or meaningful non-medical activity.

Level 2: Minimal-to-moderate involvement in personal activity or work experience. This may include major personal hobbies or athletics, musicianship, or other experiences.

Level 1: Nothing else to add.

Undergraduate School

Score Cap: 3 (really 2, 1, 0, but that's taken into account in the formula already)
Multiplier: 3


Level 3: Harvard, Yale, Princeton, Stanford, MIT

Level 2: All other "prestigious" or highly selective schools including other Ivies, WashU, Duke, Hopkins, UChicago etc

Level 1: All other schools

Representation in Medicine

Score Cap: 2 (really 1, 0, but that's taken into account in the formula already)
Multiplier: 7

Level 2: Underrepresented in Medicine (URM)

Level 1: All other

GPA Trend

Score Cap: 2 (really 1, 0, but that's taken into account in the formula already)
Multiplier: 4


Level 2: Upward trend

Level 1: No upward trend

Discussion

There are a few problems associated with the ARS. First, it's tied mostly to MD applicants - it breaks down for people primarily applying to DO schools. It also doesn't have a real way to evaluate the competitiveness of MD/PhD applicants (or Lerner/Cleveland Clinic applicants). Second, it obviously does not take into account subjective factors such as how one talks about their experiences and it assumes that certain groups of applicants will be similar enough to group them based on an almost arbitrary cut-off (which could be contested). Finally, it does not have a great way of scoring people with multiple but very disparate GPAs (such as 2.9 undergraduate but 3.95 graduate).

Overall, this is just a tool for applicants to analyze themselves and figure out how to create a balanced school list that will offer them the optimal chance of success. I hope that it will not turn into a "check-box" machine where applicants tailor their activities to try to "game" the system. Remember that it is not my system that ultimately evaluates an application; it is a group of adcoms who do so through a process far more nuanced than this one. This is just a way to get an "at a glance" view of an application after it has been built. It is my hope that new applicants will use this system to help them construct a school list that is at once realistic and geared toward making them as successful an applicant as possible.


Attachments

  • Wedge Applicant Rating System v 1.3 (1-2017) protected.xlsx
    37.4 KB

Might be worth writing a paper! If an Adcom has access to medical student data on, say, Board scores or pre-clinical GPA, the schema might then be used to predict medical student success.

I would tweak just a little, such as having Cornell in rank 1 and downgrading Wake.

Also consider the 10th-90th percentile ranges as a criterion for "more selective".
 
I love this, of course. I think it'll make its mark very quickly in WAMC threads, and indeed the pilot version already has.

While you called it ARS, I'm going to recommend calling it WARS, WedgeDawg Applicant Rating System, if for no other reason than so we can think of it as Wedges Above Replacement.
 
Why is MIT second tier?

MIT students are just as good as (arguably better than) those at HYPS.
 
Wow. This is extremely well thought out.
 
Can't wait to see the Excel file!
 
Has this been tested on the current crop of applicants? Did it predict which schools would issue invitations for interview?
I think that the undergrad school list could be rearranged with about 15-20 schools in each of the top levels.
 
Very nice, though I obviously wonder about the accuracy of prediction.


On another note, in my happy world, all adcoms would use some formula like this to rank applicants (at least prior to interviews). In my world, research schools would increase the research multiplier, those more interested in service would increase the volunteering/leadership multipliers, etc etc etc.
 
Why is MIT second tier?

MIT students are just as good as (arguably better than) those at HYPS.

Yeah. That is TOTALLY the most important thing in this thread.

:bang:
 
I think you should see how this worked for students accepted this cycle. I'll definitely see how this compares to my actual cycle once you release the Excel spreadsheet.
 
I have serious doubts concerning several of your factors and their criteria; the whole ordeal is VERY precise for something which - I assume - hasn't been widely tested, or not been tested at all. (Which is systematically a recipe for personal biases.)

I'd wait for more data before making up my mind.

Good initiative though!
 
I volunteer as tribute

for comparing my application to this system's projections.
 
what does low-yield mean in terms of schools? sorry if this is an obvious one...
 
As comprehensive as the above rating system appears, it's still a little too oversimplified to be a primary resource for creating a school list.

1. Each school (and sometimes each adcom member) applies different weights to each EC category. Some schools value research and community service more than others and vice versa (part of the "mission" of a school). You can have great stats and ECs, but if you don't fit the school's mission, you can still be rejected (this happens to hundreds of applicants every year - part of the "randomness" of admissions). Not to mention that on the whole, some schools value ECs more than others (the so-called "stats ******" vs. "nontrad friendly").

2. Each school looks at multiple MCATs differently (combines scores vs. looks at most recent)

3. Each school evaluates GPAs differently (cGPA vs. sGPA, lower vs. upper division, upward vs. downward trends)

4. Each school evaluates CC credits differently (some won't accept, some look down on, some don't care)

5. Each school evaluates Grad credits differently (some view as EC, some view as watered down GPA, some use it to replace UG GPA)

I'd also say that school tier is a little more complicated and varies from med school to med school.

For instance in CA a common example would be:
Harvard/Stanford/etc > UC Berkeley > UC San Diego > UC Riverside > Cal State > CC

IMO MIT and CalTech both should be in the top tier as well.

On the whole, however, it's a good start and has the potential to be a very helpful guide after some minor adjustments and the addition of a few personalization features (as mentioned above).

Just my n=1
 
Why is MIT second tier?

MIT students are just as good as (arguably better than) those at HYPS.

It's not a matter of how good they are, it's a matter of how the schools are perceived by adcoms. I've been reading some literature (mostly blog posts and the like) from applicants at various schools and found that while MIT is a wonderful name for MD/PhD admissions, it doesn't seem to have the same pull for MD-only (PM me if you're interested in what I was reading). However, this was something I struggled with initially - I just haven't found enough MIT datasets to accurately tell the "school effect" of MIT. As such, if people from MIT are willing to try this system out and get me some more data, that would be wonderful.

Has this been tested on the current crop of applicants? Did it predict which schools would issue invitations for interview?
I think that the undergrad school list could be rearranged with about 15-20 schools in each of the top levels.

I haven't tested it to an extent that someone might call "rigorous". Mostly I checked it against applicants and saw whether I and other people who generally give solid advice in WAMC threads (Goro, gyngyn, etc.) would agree with what was given by this algorithm. Intuition played a major role in the fine-tuning of this formula - its creation was inductive.

Really, it's the quantization of what is generally already known at an intuitive level by most people giving good advice in WAMC threads. However, I am curious to see its predictive validity for people in the current cycle.

As I said before, I used myself and others I know in real life to fine-tune some of the numbers. However, I might argue that the predictive validity would be strongly affected by subjective aspects such as LORs, PS, and interviews, which have a very strong impact on acceptances and interviews. If this algorithm does not predict acceptances, particularly at highly competitive schools, I would guess that this would be a major contributing factor.

I think you should see how this worked for students accepted this cycle. I'll definitely see how this compares to my actual cycle once you release the Excel spreadsheet.

I agree - however, keep in mind that the subjective parts of an application cannot be evaluated by this tool. This does not predict where one will get accepted, only where one should apply to maximize chances of an acceptance at the best school possible. It's not meant to predict specific acceptances, just acceptances in general to the best school that would be realistic for that applicant.

I have serious doubts concerning several of your factors and their criteria; the whole ordeal is VERY precise for something which - I assume - hasn't been widely tested, or not been tested at all. (Which is systematically a recipe for personal biases.)

I'd wait for more data before making up my mind.

Good initiative though!

Thank you for your comments! I would just like to reiterate that this is a method for creating a school list - not for predicting acceptances. Many people who were successful in their application cycle applied to schools in ratios similar to the ones suggested by this algorithm. This is not a method to predict particular acceptances - I would argue that there is not a system that can accurately do that due to the enormous variability inherent to this process.

I volunteer as tribute

for comparing my application to this system's projections.

Please do and report your results!
 
For anyone who would like to test this and report back, here is the Excel file.

Please read the instructions carefully and only change values where indicated - if you mess it up, just re-download it.
 
I would compare my results to the system, but as a PA resident I'll be breaking the rules and applying to our low-yield "state" schools (Temple, Jefferson, Penn State)
 
You might consider specifying ex-military and Peace Corps service as factors that bump one's chances upward more than most experiences.
Those would fall under miscellaneous! I would expect that they are level 3 or 4 Misc experiences. Adding them explicitly might be a good idea, however. Would you recommend they be level 3 or 4?

Also @alpinism

Thank you for your very detailed input! I agree that schools definitely have different importance ratings for each category. However, this varies so much that taking each school's specifics into account isn't feasible here. My system is more of a generalized account, and it is a tool that helps people decide where to apply, not where they are the best "fit". That seems to be figured out by both applicant and school during the interview and (if they get to that stage) second look days.
 
I'm a level 5 vegan how much does that count for?
 
Seems like you did a great job! In terms of field testing, what would be considered a success case for the system? An applicant with a school list matching the results given by WARS and with one acceptance?
 
Rising grade trends would be another.

I trust that most of you can see that the Dawg has created a nice hypothesis, and now it can be tested for validity.


You might consider specifying ex-military and Peace Corps service as factors that bump one's chances upward more than most experiences.
 
I'd like to gauge your thoughts, as well as those of any adcoms who would like to chime in, on the effect of other graduate "terminal degrees" in this system. Is it truly considered favorable to have such a degree if, say, it's in another healthcare profession, e.g., pharmacy, PA, etc.? Or would this be considered negative and maybe drop someone's Miscellaneous score to a 2? You might want to consider some verbiage to address this in your trial efforts. I realize this is an unproven system, but with the level of effort put into this, I'd love to see it refined into something that could be verified and really meaningful at some point.
 
Re undergrad schools.

No liberal arts colleges in there.

Amherst, Williams, Swarthmore, Wesleyan, and the NESCAC schools are listed among the top schools for matriculating students at many competitive schools.

Re personal experience with MIT students: smart as heck, awkward as fark.
 
by your metric i am a cat D applicant but accepted to type 1, 2, and 6 schools

This is the point at which I started posting in threads as well to see how well my formulations matched up with community suggestions.

i may be wrong here but this seems to be the entirety of your validation method. your weighing of various factors is arbitrary and is held to the standard of consistency with sdn suggestions. so really, it means nothing. while i have no doubt premeds will flock to this because you throw lots of numbers at them and fancy sounding stuff like "python", the truth is there is no actual hard data to back any of this formula up.

ultimately, med school admissions is a fuzzy science. no matter how hard you or anyone else tries to quantify the process there will and always will be a large number of outliers. the formula imo is really pretty much this:

meet gpa/mcat standards --> have a cool thing or two that matches what your reviewer is looking for --> don't be weird in the interview --> randomize --> get in.
 
by your metric i am a cat D applicant but accepted to type 1, 2, and 6 schools



i may be wrong here but this seems to be the entirety of your validation method. your weighing of various factors is arbitrary and is held to the standard of consistency with sdn suggestions. so really, it means nothing. while i have no doubt premeds will flock to this because you throw lots of numbers at them and fancy sounding stuff like "python", the truth is there is no actual hard data to back any of this formula up.

ultimately, med school admissions is a fuzzy science. no matter how hard you or anyone else tries to quantify the process there will and always will be a large number of outliers. the formula imo is really pretty much this:

meet gpa/mcat standards --> have a cool thing or two that matches what your reviewer is looking for --> don't be weird in the interview --> randomize --> get in.
OP isn't saying what he has is validated yet. Rather, the validation will potentially come with the help of suggestions and community feedback. Nothing wrong with trying to provide something beyond GPA/MCAT that could be useful in the future.
 
This ends up with me ranked way too high...
 
Also @alpinism

Thank you for your very detailed input! I agree that schools definitely have different importance ratings for each category. However, this varies so much that taking each school's specifics into account isn't feasible here. My system is more of a generalized account, and it is a tool that helps people decide where to apply, not where they are the best "fit". That seems to be figured out by both applicant and school during the interview and (if they get to that stage) second look days.

hate to say it, but i like where you're going with this. sure, it may need some more refinements, but it's just a guide.

the idea of trying to develop an optimal list of say, 20-30 schools, for any applicant to apply to is pretty cool. the shotgunning approach is too costly for many applicants, and it probably puts a strain on some adcoms (although some may like it so they can talk about their low acceptance rates). but, giving guidance to applicants to narrow their focus and get the best bang for their application buck$ is what those WAMC threads are all about.

hope you continue to be receptive to others trying to help you improve this.
 
ultimately, med school admissions is a fuzzy science. no matter how hard you or anyone else tries to quantify the process there will and always will be a large number of outliers


Agreed. Trying to quantify the way ~150 different schools will view one individual is too complicated for a single equation (no matter how much applicants wish they could reduce it to a single equation).



As for other factors, let's add:

-LoRs
-Primary submission date
-Personal Statement
-Whether or not you play video games for a living, and,
-Being the grand-nephew of the dean of admissions
 
OP isn't saying what he has is validated yet. Rather, the validation will potentially come with the help of suggestions and community feedback. Nothing wrong with trying to provide something beyond GPA/MCAT that could be useful in the future.
No, validation can only come from large-scale data mining, which will never happen. So all we will have is an anecdote-based validation which isn't validation at all. Personally I'd rather not have a flawed tool that implies quantification of a non-quantifiable process but that's just me.
 
Re undergrad schools.

No liberal arts colleges in there.

Amherst, Williams, Swarthmore, Wesleyan, and the NESCAC schools are listed among the top schools for matriculating students at many competitive schools.

I agree. Also Reed, which grade-deflates and whose students ought to get a bit of a bump for school attended.

Re personal experience with MIT students: smart as heck, awkward as fark.

Which should work in their favor for getting interviews, even if it does not favor them in admissions.
 
I matched for a C level, and that matches up quite well with the schools I applied to and was eventually accepted to. Good job @WedgeDawg, very cool idea.
 
I agree. Also Reed, which grade-deflates and whose students ought to get a bit of a bump for school attended.

Re personal experience with MIT students: smart as heck, awkward as fark.

Which should work in their favor for getting interviews, even if it does not favor them in admissions.

The UCs are also missing. Berkeley and UCLA are nothing to scoff at.
 
I find it humorous that people have such a problem with this, but turn around and use the LizzyM score without a second thought.

This is an awesome tool, and another way of "getting a feel" of how to apply to medical school. You all need to calm down. The OP never said the tool was perfect.

Everyone complaining about data mining and whatever... I'm sure they did that for the LizzyM score, too. :smack:
 
I have to say the results I got are almost identical to the breakdown I had selected for myself, which had been okayed/suggested by others in WAMC. I say right on! I do wonder if upward trends should be considered in some way, as Goro suggested, though.
 
@WedgeDawg

Here's a thought that might help incorporate school-specific preferences that could change your school list. If you score at an A or S level and your Research score is also a 5, then the percentage breakdown might change to include more top research schools, say from 45% to 60%. Similarly, if you score at a B or C level and your Volunteering or Clinical Experience scores are at the highest level, you might recommend applying to an even higher percentage of schools like Creighton and other schools that emphasize service. Does that make sense?
 
nice humble brag Mehc
Sorry, I'll only give feedback that doesn't hint in any way that the system ranked me well, because even though I admit that the assessment is inaccurate, apparently I'm bragging?

I didn't realize that you guys put that much stock in this as-of-yet untested system. :rolleyes:
 
lol, I knew that would get you! :rolleyes::rolleyes:

..too easy
 
lol, I knew that would get you! :rolleyes::rolleyes:

..too easy
Good for you?
I'd be more embarrassed of going around being a jack@$# for giggles than I am of responding sarcastically to jack@$#ery.
 
Everyone complaining about data mining and whatever... I'm sure they did that for the LizzyM score, too. :smack:
i'm sure this is directed at me and i'm also sure you have no idea what i'm talking about
 
Good for you?
I'd be more embarrassed of going around being a jack@$# for giggles than I am of responding sarcastically to jack@$#ery.
take it outside you two
 
Good for you?
I'd be more embarrassed of going around being a jack@$# for giggles than I am of responding sarcastically to jack@$#ery.

You're right, in the future I will be sure to use the jackass font and you just be sure to use the sarcastic font. On a side note, why so grumpy?
 