I don't work for Thalamus, but we do use it. It's interesting data; dismissing it out of hand is not a good choice IMHO.
Selection bias is in effect because Thalamus is a platform that:
1. Is used for scheduling by a lot of low-tier programs, at least in my specialty,
2. For me, has been the platform with the highest chance of being waitlisted on a given interview invite, with waitlists that are extremely difficult to get off of, and
3. Is a terrible interviewing platform, and, coincidentally, is used by programs I would rank near the very end.
Additionally, a post hoc ergo propter hoc fallacy is in play: Thalamus might use the results of this "study" to argue that their platform is better. That would be a biased conclusion and a logical fallacy.
The AAMC letter encompasses data from Student Affairs offices, and I would consider that to be more reliable, although it may have its limitations as well.
I agree that there may be some program selection bias. They have not used this data to suggest that their platform is "better" than anything else -- and this data wouldn't prove or disprove that.
Well, for one, their data are based only on programs that use Thalamus, and they included programs that used Thalamus this year but not last year. What percentage of programs use Thalamus? If it's not most programs, then their data are more likely to suffer from sampling bias and hasty generalization.
Their data also seems to conflict with AAMC data. Wouldn’t the AAMC have more complete data?
Also, and this is not a flaw in their "methodology" or anything, but they spend the entire article saying there is no difference in overlap, then proceed to give the exact same recommendations as the AAMC, just with a couple of added phrases saying this would happen in any other year.
Actually, the AAMC will not have more complete data. They get data from anyone who uses the ERAS scheduler. Other than that, they have no idea how many people we invite for interviews -- unless we label it some way in ERAS, which some programs do and some programs don't. The press release from the AAMC isn't based on any real data that I can see. And since the AAMC makes ridiculous amounts of money off of ERAS and could easily decrease prices to take some of the financial burden off of students, I don't see them as a neutral player here.
I just looked at it again and I'm even more confused.
Since last year, their utilization is up 400%, yet the average person has the same number of Thalamus interviews? Shouldn't the average person have a lot more than last year if a lot more residencies adopted Thalamus?
This is their explanation for determining hoarding.
"For each specialty denoted on the x-axis, if you were to select any two residency programs at random, the average amount of overlap in applicants that interviewed at both programs is displayed on the y-axis as a percentage. "
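To make that metric concrete, here is a minimal sketch of how such a pairwise overlap could be computed, assuming the overlap for a pair of programs is intersection over union (the quote doesn't specify the denominator, and the rosters and function name here are hypothetical, not Thalamus's actual data):

```python
from itertools import combinations

# Hypothetical interview rosters: program -> set of applicant IDs.
programs = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5, 6},
    "C": {1, 6, 7, 8},
}

def mean_pairwise_overlap(rosters):
    """Average, over all pairs of programs, of the percentage of
    applicants who interviewed at both (intersection over union)."""
    overlaps = []
    for a, b in combinations(rosters.values(), 2):
        overlaps.append(100 * len(a & b) / len(a | b))
    return sum(overlaps) / len(overlaps)

print(round(mean_pairwise_overlap(programs), 1))  # average overlap, in percent
```

If "hoarding" concentrated interviews among a small pool of applicants, this average would rise, which is why a flat year-over-year value is at least weak evidence against a widespread problem.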
OK, let's review what they have published. The 1st graph explores how much overlap there is between candidates interviewing at programs. It's limited to programs that used Thalamus both years, so is unaffected by any growth. It shows no difference -- no evidence that "programs are all interviewing the same people". It says nothing about hoarding. The 2nd graph includes all new programs, and comparing with the first looks about the same -- again showing that in their data, there is no evidence of massive overlap of applicants and programs.
The third and fourth graphs try to assess hoarding. They compare the frequency of interviews scheduled between last year and this year. According to the text, they only include invites from programs that participated in both years -- hence would also not be affected by any Thalamus growth. Thalamus purposefully removed the x-axis labels so as not to create panic among applicants; plus, the counts would be very hard to interpret anyway, since any applicant could have 1 Thalamus interview but a whole bunch of others, depending on which programs they applied to.
Obviously, the data are limited because these are only Thalamus programs -- but one would expect some change in these metrics if there were a huge widespread problem.
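For what it's worth, the "programs that participated in both years" restriction described above can be sketched like this (toy records; the layout and program names are assumptions, not Thalamus's actual schema):

```python
# Hypothetical invite records: (year, program, applicant ID).
invites = [
    (2020, "A", 1), (2020, "A", 2), (2020, "B", 2),
    (2021, "A", 1), (2021, "B", 2), (2021, "C", 1),  # "C" is new in 2021
]

def per_applicant_counts(records, year, programs_kept):
    """Interview counts per applicant for one year, restricted to a
    fixed set of programs."""
    counts = {}
    for yr, prog, app in records:
        if yr == year and prog in programs_kept:
            counts[app] = counts.get(app, 0) + 1
    return counts

# Keep only programs present in both years -- new adopters like "C" are
# excluded, so platform growth alone can't shift the comparison.
both_years = ({p for y, p, _ in invites if y == 2020}
              & {p for y, p, _ in invites if y == 2021})
print(per_applicant_counts(invites, 2021, both_years))
```

With that filter in place, a 400% jump in participating programs doesn't mechanically inflate the per-applicant counts, which is why the flat distributions aren't self-contradictory.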
Yeah their data make no sense.
Also, some of those specialties have error bars you could drive a truck through. I'm guessing the ones with the large error bars are the specialties with low percentages of Thalamus interviews, which, coincidentally, are also the specialties where people are really complaining about this problem.
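That guess is consistent with basic sampling math: the standard error of a proportion scales as 1/sqrt(n), so a specialty with few Thalamus-scheduled interviews will naturally show much wider error bars. Illustrative numbers only (the 20% overlap figure here is made up):

```python
import math

def standard_error(p, n):
    """Standard error of a sample proportion p with n observations."""
    return math.sqrt(p * (1 - p) / n)

# The same underlying 20% overlap, measured from samples of different sizes:
for n in (25, 100, 2500):
    print(n, round(standard_error(0.2, n), 4))
```

Quadrupling the sample only halves the error bar, so thinly sampled specialties stay noisy even after substantial platform growth.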
Certainly possible that the hoarding / distribution is an issue only in a small subset of specialties.
"Thalamus experienced significant growth over the last year, increasing in size in number of programs by over 400% during this period (Conversely, the size of our applicant pool has remained essentially fixed given the wide distribution of candidate applications and interviews). "
>same number of applicants participating
>4x as many residencies participating
>interview distribution looks identical to last year
How on earth does this look reassuring to them? Feel like I'm taking crazy pills. Am I missing something here, or did they hand out 4x as many interviews among the same number of candidates this year and end up with identical-looking distributions? How is that possible?
See above: the data presented are limited to invites from programs that participated in both years. At the end, they state that they also looked at all the data, corrected for the growth in programs using Thalamus, and again found no difference.
Does this mean that there is no problem? No, but it is somewhat reassuring that any problem is likely much smaller than what's being talked about here.