I just released the new application manager feature, where applicants can keep track of their cycle results (applications, interviews, acceptances, etc.) all in one place. You can find it here.
Early next week, once applicants have filled in a few thousand applications' worth of data, I'll release the corresponding live Cycle Results feature, which anonymously aggregates the application statuses of users who opt in and makes the results publicly visible. For example, you'll be able to see that anonymous applicant X received an interview from school Y on a certain date, then received an acceptance on a later date, and so on.
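To give a rough idea of how that aggregation could work, here's a minimal sketch, with invented field names and sample data (the event tuples, the salt, and the `anonymize` helper are all assumptions for illustration, not the actual Admit data model):

```python
import hashlib
from collections import defaultdict

# Hypothetical opt-in status events: (user_id, school, status, date).
events = [
    ("user_42", "Penn",  "interview",  "2024-10-03"),
    ("user_42", "Penn",  "acceptance", "2024-12-15"),
    ("user_77", "WashU", "interview",  "2024-09-28"),
]

def anonymize(user_id: str, salt: str = "cycle-2025") -> str:
    """Replace the real user ID with a short salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:8]

# Group each opted-in user's statuses per school under an anonymous ID,
# so the public view shows timelines without revealing who the user is.
timelines = defaultdict(list)
for user_id, school, status, date in events:
    timelines[(anonymize(user_id), school)].append((date, status))

for (anon_id, school), history in sorted(timelines.items()):
    print(f"Applicant {anon_id} @ {school}: {sorted(history)}")
```

The salted hash keeps the same applicant's events grouped together across statuses while making the public ID meaningless on its own.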
I'm hoping that these features do a few things:
1) Help applicants see, in a structured and organized way, when schools start sending out interviews, finish sending out interviews, pull from waitlists, make acceptances, etc. The forums are great for finding this info, but users have to comb through dozens of pages of individual thread replies to piece it together. Having it all in one consolidated place on Admit should make it much easier to find.
2) Allow me to improve the school list builder by giving it access to thousands of data points on the individual admissions rubrics, weights, and point systems that schools use, especially for giving out interviews. The process for giving out interviews is largely rubric-based: screeners score applicants in several categories, then apply further modifiers to the applicant's overall score based on certain applicant metrics (undergrad ranking, SES and disadvantaged status, legacy, etc.). If we ignore screener inconsistency and control for essay quality, identical applicants should consistently receive interviews from the same schools based on their primary application (this was also shown in the NYU ML admissions paper, where a scoring algorithm matched the predictive power of admissions-office screeners in recommending applicants for interview, further review, or rejection).
It'll also be interesting to see how the points threshold needed to receive an interview at each school decreases over the course of the cycle, as well as learn about different screening tendencies. For example, 518 is a commonly shared MCAT soft screen for non-X factor applicants applying to Penn/Hopkins/NYU/WashU based on their admissions rubrics. We can also learn about other nuances, like the impact of low MCAT subsection scores, thresholds for minimum service hours at service-heavy schools, the influence of state residency (largely CA and TX) on OOS school admissions, etc.
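The rubric-plus-modifier screening described above can be sketched roughly as follows. The category weights, modifier values, and the 60-point interview threshold are all invented for illustration; only the 518 MCAT soft screen for non-X-factor applicants comes from the post, and real rubrics vary by school:

```python
def screen(applicant: dict, interview_threshold: float = 60.0) -> str:
    """Toy rubric screen: soft screen, weighted category scores, modifiers."""
    # Soft screen first: below the MCAT cutoff, non-X-factor applicants
    # are filtered out before full review at some schools.
    if applicant["mcat"] < 518 and not applicant["x_factor"]:
        return "reject"

    # Screeners score several categories; the weights here are assumed.
    weights = {"academics": 0.4, "clinical": 0.25, "research": 0.2, "service": 0.15}
    base = sum(weights[c] * applicant["scores"][c] for c in weights)

    # Modifiers nudge the overall score based on applicant metrics
    # (values invented for the sketch).
    modifier = 0.0
    if applicant.get("disadvantaged"):
        modifier += 5.0
    if applicant.get("top_undergrad"):
        modifier += 3.0

    total = base + modifier
    # Same three outcomes the NYU paper's algorithm recommended.
    return "interview" if total >= interview_threshold else "further review"

candidate = {
    "mcat": 520,
    "x_factor": False,
    "disadvantaged": True,
    "top_undergrad": False,
    "scores": {"academics": 70, "clinical": 60, "research": 55, "service": 65},
}
print(screen(candidate))  # base 63.75 + modifier 5.0 = 68.75 -> "interview"
```

With enough observed outcomes, the interesting part is fitting the weights, modifiers, and threshold per school from data rather than guessing them, and watching how the threshold drifts downward over the cycle.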
My hope is that with enough data, the school list builder can become extremely accurate and help applicants shrink their school lists, applying efficiently to, say, 20 schools rather than inefficiently to 30 or 40. That also gives applicants the chance to focus on submitting higher-quality secondary essays rather than rushing out low-quality ones. With smaller applicant pools, schools can in turn spend more time holistically evaluating individual applicants with demonstrated mission fit, rather than wasting hundreds of hours screening thousands of applicants. I think it would also be cool to one day automatically suggest specific improvements to applicants' primaries, such as which missing activities or hours, if incorporated, would increase the probability of receiving an interview at specific schools. That's something for the future, though, which I'll probably work on closer to the start of the next cycle.
That's all for now, and thanks for reading! I'm excited to see how this works out and will post updates as I begin working on the updated version of the school list builder as well as other features. 🙂