The review is hot garbage. I said that, shooting from the hip, when I saw the headline. I said it louder after spending 5 minutes looking at it. I screamed it louder still after spending 30 minutes reading it and the studies it cites. Universally, everyone who has analyzed this paper notes multiple fundamental flaws in methodology, statistics, design, and conclusions. And that is entirely separate from the intermittently inflammatory rhetoric and editorialized condescension that runs through much of its text.
To be specific:
(1) The headline-grabbing 250,000 deaths per year is based on ONE study, from CANADA, of ONE ER's HIGH-ACUITY AREA, where they reviewed 500 consecutive charts looking for errors. Less than HALF of the patients were seen by an ED attending! So from the start, we have a small single-center study that doesn't match typical US practice, with a built-in selection bias toward high acuity/illness as well. They found a SINGLE man who died, partially due to a 6–7 hour delay in diagnosing an aortic dissection. The paper notes he was admitted for chest pain, but there was some delay in determining the pain was from a dissection, which contributed to his death.
So 1/500 patients died. That means 0.2% of the patients in the 500-patient cohort died. The study authors decide this is the best evidence for the overall death rate due to ER MISDIAGNOSIS, multiply it by the number of patients seen in a year in all American ERs, and get 250,000.
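For the record, their entire extrapolation fits in a few lines. A minimal sketch, assuming roughly 130 million annual US ED visits (a CDC-style ballpark I'm supplying; the paper's exact denominator may differ slightly):

```python
# Reproducing the paper's back-of-envelope extrapolation.
# ASSUMPTION: ~130 million annual US ED visits (my ballpark figure,
# not a number taken verbatim from the paper).
deaths_in_cohort = 1
cohort_size = 500
annual_us_ed_visits = 130_000_000

death_rate = deaths_in_cohort / cohort_size          # 0.002, i.e. 0.2%
projected_deaths = death_rate * annual_us_ed_visits  # ~260,000

print(f"{death_rate:.1%} of {annual_us_ed_visits:,} visits "
      f"= {projected_deaths:,.0f} projected deaths/year")
```

One patient in one Canadian high-acuity cohort, multiplied across every ED visit in the United States. That is the whole derivation behind the headline number.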
This is all they have, and they use this study to grab headlines and slander an entire profession.
Now you might think this is just my opinion, but I do have statistics to back this up.
The authors themselves note that the confidence interval around this 0.2% death rate is… wide. In fact, they calculate it to run from 0.005% to 1.1% (!!!). That would give you anywhere between roughly 6,000 and 1.3 million ER misdiagnosis deaths a year in the US. This is an insanely huge CI: ERs kill somewhere between about 16 people a day and 3,500 people a day in this country??
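You can reproduce that absurd interval yourself. A sketch using the exact (Clopper-Pearson) binomial CI for 1 event in 500, solved by bisection so it needs nothing outside the standard library; the ~130 million visit figure is again my assumption:

```python
# Exact (Clopper-Pearson) 95% CI for 1 death in 500 patients,
# then scaled to ~130 million annual US ED visits (my assumption).
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact binomial CI, found by bisection on the tail probabilities."""
    def solve(f, lo, hi):
        # f flips from True to False as p increases; find the flip point.
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(
        lambda p: binom_cdf(x - 1, n, p) > 1 - alpha / 2, 0.0, 1.0)
    upper = 1.0 if x == n else solve(
        lambda p: binom_cdf(x, n, p) > alpha / 2, 0.0, 1.0)
    return lower, upper

lo, hi = clopper_pearson(1, 500)
print(f"95% CI: {lo:.3%} to {hi:.2%}")   # roughly 0.005% to 1.1%
visits = 130_000_000
print(f"=> {lo * visits:,.0f} to {hi * visits:,.0f} deaths/year")
```

A CI whose upper bound is over 200 times its lower bound is a polite way of saying "we have one data point."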
Now, the authors note this fact, and also note that the 0.2% number is 217× higher (!!) than the estimates from multiple retrospective reviews. But they feel this estimate is superior to those reviews, so they stick with it.
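Just to spell out what a 217-fold discrepancy means in body counts (same ~130 million annual visit assumption as above):

```python
# What the 217x gap between this study's rate and prior retrospective
# reviews implies. The ~130M annual visit figure is my assumption.
study_rate = 1 / 500                   # 0.2%, from the single Canadian cohort
retrospective_rate = study_rate / 217  # the rate the prior reviews suggest
visits = 130_000_000

print(f"Study's rate:       {study_rate * visits:,.0f} deaths/year")
print(f"Retrospective rate: {retrospective_rate * visits:,.0f} deaths/year")
```

The retrospective literature implies deaths on the order of a thousand per year, not a quarter million. They looked at both numbers and picked the outlier.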
They then decide to invent, out of whole cloth, an estimate that the real number must be near theirs, perhaps 0.1–0.4%. Which is how you do statistics: you invent it…. See their own text below.
View attachment 363575
As evidence that their invented estimate is correct, they cherry-pick a study showing that of the SUBSET of patients over 65 who are discharged from an ER, 0.12% die within 7 days. Which, in their world, means all of those deaths are misdiagnoses (?!), and since 0.12% is close to 0.2%, their number is great and supported. Now, do we care that these are only elderly people with high baseline death rates? No class, that doesn't invalidate the percentage who randomly die any given Sunday. Do we care that a large percentage of those patients' causes of death actually MATCHED the ER diagnosis from the recent visit (i.e., they were seen in the ER for COPD, then died of respiratory failure a week later), meaning the ER de facto had the correct diagnosis, and perhaps the treatment was poor or the patient simply died regardless? Nope guys, that has nothing to do with what we're looking at! Do we care that many patients died of things unrelated to their ER visit (3% of them died of opiate overdoses!), or that the initial ED visit was for a "superficial injury" in 10% of the visits, and those patients died of things like acute MI, stroke, or pneumonia within the week, clearly unrelated to the index visit? Nope. *waves hands*
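A rough sanity check on that 0.12% figure. Assuming an all-cause annual mortality for US adults 65+ somewhere around 2% (a ballpark I'm supplying for illustration, not a number from either paper):

```python
# Rough sanity check: how much of a 0.12% seven-day death rate in
# discharged 65+ patients could be background mortality alone?
# ASSUMPTION: ~2% annual all-cause mortality for 65+ (my ballpark).
annual_mortality_65plus = 0.02
weekly_baseline = annual_mortality_65plus * 7 / 365   # ~0.04% per week

observed_7day_rate = 0.0012                           # 0.12%, per the cited study
print(f"Baseline 7-day mortality: {weekly_baseline:.3%}")
print(f"Observed 7-day rate:      {observed_7day_rate:.3%}")
```

Under that ballpark, roughly a third of the 0.12% could be people who would have died that week regardless, and that's before subtracting the deaths that matched the ED diagnosis or were plainly unrelated to the visit.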
So my opinion that this is a dumpster-sludge study stands. You can go through the study's other conclusions and watch them fall apart over and over again, just as the claim of a quarter million deaths a year falls apart. Perhaps you would like the brief ACEP rebuttal—
Which is brief and easy to digest. An example section follows.
View attachment 363578