In which a neurologist writes a bad study claiming we are all idiots

Nor is it evidence

I disagree. The authors' estimate is based on 2 prospective cohort studies and on data on preventable deaths measured among ED discharges, all of which provide a similar estimate. Now you could argue that there are limitations and that the estimate may not be accurate, and I agree with that, but suggesting it is not evidence is incorrect.

What is your estimate based on? Your personal experience? Anecdote is not evidence. Until there is better evidence, the ‘best available evidence’ would still be the 2 prospective cohort studies and not your personal experience.

As to your suggestion that the estimate is absurd because, with such high rates, EM physicians would not be able to obtain medical insurance, or because it would prevent patients from going to the ER: the former point assumes that all of the misdiagnoses would lead to litigation, that the rates are vastly higher than in any other specialty, and that the cost to insurance companies would be too high. Do you have any data on this? As for your latter point, I don't follow your logic. Why would patients stop going to the ER over a 0.2% case fatality rate from diagnostic error? The benefit of going to the ER would greatly outweigh the harms from diagnostic error.

Your opinion is already clear, no need to repeat it. You believe in evidence above all else, rah rah, even if the data/methods underpinning said “evidence” are so weak that its own authors say it may not actually support the claim they are making.

I’m sure if you were in Salem in the 1600s you would have lauded the conviction of those witches because the “evidence” the prosecution produced was the “best available.”

 
I think the author’s whole point was to create this “dizziness is dangerous” paradigm and support that dizziness needs pathways (and reimbursement) similar to STEMIs. Never mind that there’s all sorts of data on benefit of PCI in STEMI. I have yet to see high quality evidence that posterior circulation CVA pts are any less f&@$ed because a neurologist told us to get an MRI STAT.
As an aside, the lead author is essentially a dizziness expert. He's written a few papers with an EM doc that, to his credit, are pretty helpful in approaching these patients in the ED. Here's a nice summary.

The paper is what it is. I certainly make diagnostic errors. I also agree with others that this is pretty classic - another group of specialists bemoaning how bad the care delivered in the ED is. The expectation that we always strike the perfect balance between knowing enough not to consult, but not be "hyper-independent", is tiring. I am happy to listen to critiques from experts, but at the end of the day I'm not losing sleep over a specialist calling me an idiot. I don't give a ****.
 
I am the tweet critiquing this study.

Only have time for a quick note.

Funny to say the non-U.S. studies are corroborated by the observational work, when the death rates in the observational studies reviewed are about 0.0001%, which are pooled with the tiny non-U.S. studies and their ~1 to 5% death rates to make a "plausible" 0.2% death rate. It is grossly ridiculous to pool studies with findings several orders of magnitude apart and just throw a dart in the middle and say "this is right!"

And, frankly, the systematic review is just, for lack of a better word, full of lies. Statements such as:
"One study with such a design found misdiagnosis-related death rates—0.12 percent (n=12,375 of 10,093,678)148—to be much closer to those seen in the high-quality, prospective study (0.2%)."

Citation 148 is:
"Early death after discharge from emergency departments: analysis of national US insurance claims data"

Finding a result such as:
"Among discharged patients, 0.12% (12 375/10 093 678, in the 20% sample over 2007-12) died within seven days, or 10 093 per year nationally."

So, 0.12% of 70-year-olds died within 7 days of an ED visit – not even because of errors! – and this supposedly supports the plausibility of a 0.2% per-visit misdiagnosis death rate? This is a big leap. The peer reviewers totally failed on this paper.
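If anyone wants to sanity-check the arithmetic, here is a quick Python sketch. The visit and death counts are the ones quoted from the BMJ paper above; comparing the 0.2% figure against them is the review's move, not mine.

```python
# Sanity check of the BMJ Medicare numbers quoted above (citation 148).
deaths = 12_375            # deaths within 7 days of ED discharge (20% Medicare sample, 2007-12)
discharges = 10_093_678    # discharged ED visits in that sample

all_cause_rate = deaths / discharges
print(f"All-cause 7-day death rate after discharge: {all_cause_rate:.4%}")  # ~0.1226%

# The review's claimed per-visit misdiagnosis death rate is 0.2%, i.e. HIGHER
# than the all-cause death rate of discharged elderly Medicare patients.
claimed_rate = 0.002
print(f"Claimed misdiagnosis rate / all-cause rate: {claimed_rate / all_cause_rate:.2f}x")
```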
 
This study is pretty terrible, and CNN just did a piece on Jake Tapper's daughter's delayed appendicitis diagnosis. A lot of the common tropes of 'the doctors weren't listening, I knew something was wrong, I asked about appendicitis, I called the administrators and demanded answers,' etc., etc.

What a terrible climate for emergency medicine. Seeing these studies after getting absolutely destroyed by unprecedented patient volume, lack of staffing and bed availability, lack of accepting hospitals and EMS crews for transport, Medicare cuts... it's really something to experience.
It’s terrible in all of medicine really, not just ED; we just have a bit more say in who we see on the outpatient side
 
I think the author’s whole point was to create this “dizziness is dangerous” paradigm and support that dizziness needs pathways (and reimbursement) similar to STEMIs. Never mind that there’s all sorts of data on benefit of PCI in STEMI. I have yet to see high quality evidence that posterior circulation CVA pts are any less f&@$ed because a neurologist told us to get an MRI STAT.

Yea let’s not kid ourselves. If neurologists worked in the same environment that we do and have to work up 5 dizzy people of varying ages and comorbidities a shift, the queue for MRI would be 3 days long.

I don't buy the HINTS exam as applicable in many cases. Have you ever tried a head impulse test in an 82 yo grandpa? Do you know what it sounds like when you break an 82 yo grandpa's neck?
 
Yea let’s not kid ourselves. If neurologists worked in the same environment that we do and have to work up 5 dizzy people of varying ages and comorbidities a shift, the queue for MRI would be 3 days long.

I don't buy the HINTS exam as applicable in many cases. Have you ever tried a head impulse test in an 82 yo grandpa? Do you know what it sounds like when you break an 82 yo grandpa's neck?
Who said the HINTS exam is applicable in many cases? It is only applicable in an acute vestibular syndrome. I have performed the head impulse test on the elderly and have never broken a neck, lol. Please leave your grandpa's neck alone, and the necks of patients who don't present with an acute vestibular syndrome.

It's also not a question of belief but of evidence: the HINTS exam has better sensitivity than an MRI, but that does entail understanding what it is used for and how to perform it.
 
Not making many friends here.

Everyone remember how to hide a $100 bill from a neurologist?
 
Yes, the standard of care obviously is that any patient with a complaint that contains dizziness will now get a neurology consult. I am excited to see the profound increase in quality of care, cost, and ED throughput with the increase in CT angios and MRIs. Exciting.
Our neurologists really want to be consulted more than I want to consult them because they say things like "that's just peripheral vertigo, get an MRA head and neck and if negative obs them for symptom control." But other times they say things like "those new neurologic complaints in your patient with MS are clearly not MS. You can just discharge the patient now."

Insert one of several classic WTF gifs.
 
Nor is it evidence

I disagree. The authors' estimate is based on 2 prospective cohort studies and on data on preventable deaths measured among ED discharges, all of which provide a similar estimate. Now you could argue that there are limitations and that the estimate may not be accurate, and I agree with that, but suggesting it is not evidence is incorrect.

What is your estimate based on? Your personal experience? Anecdote is not evidence. Until there is better evidence, the ‘best available evidence’ would still be the 2 prospective cohort studies and not your personal experience.

As to your suggestion that the estimate is absurd because, with such high rates, EM physicians would not be able to obtain medical insurance, or because it would prevent patients from going to the ER: the former point assumes that all of the misdiagnoses would lead to litigation, that the rates are vastly higher than in any other specialty, and that the cost to insurance companies would be too high. Do you have any data on this? As for your latter point, I don't follow your logic. Why would patients stop going to the ER over a 0.2% case fatality rate from diagnostic error? The benefit of going to the ER would greatly outweigh the harms from diagnostic error.
To quote Enrico Fermi: "But where is everybody?"

If 1/350 ED patients die as a result of misdiagnosis, then where are the dead bodies? They go straight to the funeral home? Get enrolled in hospice at a PCP visit? In my ED, that's about 1 patient per day. Would I not see them, at least some of them, on bounceback? My ED has a robust case review process and I am on the review committee. In 10+ years on this committee I can think of ONE case that I would call a death caused by misdiagnosis (terrible case). I know of many, many more errors and misdiagnoses, but they didn't cause death.

No, my anecdotal experience isn't grounds for statistical analysis. However, when your calculations give you conclusions that are so wildly out of step with your experience, it should give you pause.
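The back-of-the-envelope math behind that "1 patient per day" is below; the daily volume is a hypothetical round number for a busy ED, not a figure from the paper.

```python
# What a 1-in-350 per-visit misdiagnosis death rate would imply for one busy ED.
# The daily volume is a hypothetical assumption.
visits_per_day = 350          # assumed daily census of a busy ED
rate = 1 / 350                # the review's implied per-visit death rate
years_on_committee = 10

deaths_per_day = visits_per_day * rate
expected = deaths_per_day * 365 * years_on_committee
print(f"Expected misdiagnosis deaths per day: {deaths_per_day:.0f}")                # 1
print(f"Expected over {years_on_committee} years of case review: {expected:,.0f}")  # ~3,650 vs. 1 actually found
```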
 
Stop feeding the troll, people.
 
To quote Enrico Fermi: "But where is everybody?"

If 1/350 ED patients die as a result of misdiagnosis, then where are the dead bodies? They go straight to the funeral home? Get enrolled in hospice at a PCP visit? In my ED, that's about 1 patient per day. Would I not see them, at least some of them, on bounceback? My ED has a robust case review process and I am on the review committee. In 10+ years on this committee I can think of ONE case that I would call a death caused by misdiagnosis (terrible case). I know of many, many more errors and misdiagnoses, but they didn't cause death.

No, my anecdotal experience isn't grounds for statistical analysis. However, when your calculations give you conclusions that are so wildly out of step with your experience, it should give you pause.
Agreed. I work in a similarly busy ED. This reminds me of the dumb PE/COPD study from Italy a few years back. Something *****ic like 15% of those patients had PEs. Move along, it's idiotic.
 
I wonder if the earlier poster would still allege that the naysayers didn't read the study, given the depth of many replies here, including someone going through the citations, people pulling screen grabs, etc. Just want to say I appreciate the quality of the clap backs on this thread. The paper is bad science and the lay news coverage is bad journalism. I think that's pretty clear.
(I surveyed posters in this thread and 97 percent of them support the above... It's the best available evidence!)
 
“We owe it to the general public to do better than to publish this idea as a "best guess" that needs to be studied more.”
This point seems to be the major issue for most people. Should we not publish the ‘best available evidence’ because it runs the risk of being sensationalized and misconstrued by the media? I do agree with your point, and it can be argued that the claims could have been more tempered regarding the supporting evidence, though the authors did acknowledge many of the limitations, and it seems the bigger issue is the fact that the NYT wrote an article. Is that the authors' fault?
Wrong evidence is more harmful than no evidence. Wrong evidence directs patients towards being harmed, wastes research effort, and has a negative effect on public opinion.

Imagine for example claiming an anti-tuberculosis/inflammatory arthropathy drug should be first line for a viral respiratory infection. It is very easy to publish low quality, harmful evidence, which then leads to direct harm to patients who receive the drug, a divisiveness in public opinion, and time/effort for researchers to then disprove that initial claim.
 
Didn't I say that above?

Edit: yeah, you saw it! Great minds, and all that.
Yes, you made the same point, but you know me well enough to know I'm not gonna let a little redundancy get in the way of my quoting Enrico Fermi!
 
Wrong evidence is more harmful than no evidence. Wrong evidence directs patients towards being harmed, wastes research effort, and has a negative effect on public opinion.

Imagine for example claiming an anti-tuberculosis/inflammatory arthropathy drug should be first line for a respiratory infection. It is very easy to publish low quality, harmful evidence, which then leads to direct harm to patients who receive the drug, a divisiveness in public opinion, and time/effort for researchers to then disprove that initial claim.

Also, systematic reviews are only the highest level of evidence when they have quality studies supplying the underlying data for the review. Otherwise it's just GIGO: Garbage In, Garbage Out. If you have a GIGO scenario, and the authors acknowledge significant limitations in their own data, then best practice is to discard the systematic review and its results.
 
Here are some studies on the benefit of endovascular thrombectomy in basilar occlusions.

https://www.nejm.org/doi/full/10.1056/NEJMoa2207576

https://www.nejm.org/doi/full/10.1056/NEJMoa2206317
Cool, small studies out of China, one with a moving primary endpoint. It will be interesting to see when the big US-based study, with a negative primary endpoint but some trend towards benefit and a couple of positive non-patient-oriented secondary outcomes, gets published.
 
Wrong evidence is more harmful than no evidence. Wrong evidence directs patients towards being harmed, wastes research effort, and has a negative effect on public opinion.

Imagine for example claiming an anti-tuberculosis/inflammatory arthropathy drug should be first line for a viral respiratory infection. It is very easy to publish low quality, harmful evidence, which then leads to direct harm to patients who receive the drug, a divisiveness in public opinion, and time/effort for researchers to then disprove that initial claim.

Here's a good dive into how this is not best evidence, it's openly wrong evidence.
And BS methodology at times too.
After spending more time with this, anyone defending this study methodologically is bananas.
 
I am the tweet critiquing this study.

Only have time for a quick note.

Funny to say the non-U.S. studies are corroborated by the observational work, when the death rates in the observational studies reviewed are about 0.0001%, which are pooled with the tiny non-U.S. studies and their ~1 to 5% death rates to make a "plausible" 0.2% death rate. It is grossly ridiculous to pool studies with findings several orders of magnitude apart and just throw a dart in the middle and say "this is right!"

And, frankly, the systematic review is just, for lack of a better word, full of lies. Statements such as:
"One study with such a design found misdiagnosis-related death rates—0.12 percent (n=12,375 of 10,093,678)148—to be much closer to those seen in the high-quality, prospective study (0.2%)."

Citation 148 is:
"Early death after discharge from emergency departments: analysis of national US insurance claims data"

Finding a result such as:
"Among discharged patients, 0.12% (12 375/10 093 678, in the 20% sample over 2007-12) died within seven days, or 10 093 per year nationally."

So, 0.12% of 70-year-olds died within 7 days of an ED visit – not even because of errors! – and this supposedly supports the plausibility of a 0.2% per-visit misdiagnosis death rate? This is a big leap. The peer reviewers totally failed on this paper.

“It is grossly ridiculous to pool studies with findings several orders of magnitude apart”
The authors never pooled the death rate results with the retrospective studies. The 0.2% death rate is based on the higher-quality prospective study and corroborated by another prospective study. The authors specifically outline why the results were not pooled with lower-quality retrospective studies that suffer from under-ascertainment bias. Have you read the study?

“So, 0.12% of 70-year-olds died within 7 days of an ED visit – not even because of errors!”
Though the authors' statement in that particular sentence is inaccurate (language they do not use when describing the study in other parts), your suggestion that these were baseline expected deaths and not due to error is also inaccurate.

0.12% of patients died within 7 days of an ED visit, deaths that were unexpected since the patients had no diagnosis of a life-limiting illness. By far the most common presentation associated with risk of early death was ‘altered mental status’. So a patient comes to the ER for altered mental status, is subsequently discharged from the ER, and 7 days later dies unexpectedly despite having no diagnosis of a life-limiting illness, and you don't think an inappropriate discharge or a lack of additional testing is a possible factor that contributed to their death?

The authors themselves in the paper outline reasons why they think error is at play - “some deaths might have reflected “baseline” mortality after discharge from the emergency department. We view this as unlikely given observed variation in risk of mortality over time and across hospitals.”

full of lies
Out of a study of over 700 pages, you were able to find a single instance where the wording was inaccurate, though the citation still in fact supported their overall claim. This is your evidence that the study is "full of lies"? Since you were inaccurate, does that imply you were lying? Inflammatory rhetoric is not helpful.
 
How does this study support a conclusion that misdiagnosis causes death?

Mortality at 90 days was 31% in the thrombectomy group and 42% in the control group (adjusted risk ratio, 0.75; 95% CI, 0.54 to 1.04).
I never said it did.
 
Cool, small studies out of China, one with a moving primary endpoint. It will be interesting to see when the big US-based study, with a negative primary endpoint but some trend towards benefit and a couple of positive non-patient-oriented secondary outcomes, gets published.
Why do you consider the study small, and based on what?

Studies are designed for efficiency and are powered according to the expected treatment effect. When the benefit is large enough to be meaningful, it will be evident in a relatively "small" study population; a rough calculation is sketched below.
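A minimal power sketch, using the 90-day mortality figures quoted earlier in the thread (42% vs 31%); the alpha and power choices are conventional assumptions, not values from either trial.

```python
# Two-proportion sample-size sketch: patients per arm needed to detect a
# mortality difference of 42% vs 31% with 80% power at two-sided alpha 0.05.
from scipy.stats import norm

p1, p2 = 0.42, 0.31             # control vs. thrombectomy 90-day mortality
alpha, power = 0.05, 0.80       # assumed conventional design choices
z_a = norm.ppf(1 - alpha / 2)   # ~1.96
z_b = norm.ppf(power)           # ~0.84

p_bar = (p1 + p2) / 2
n_per_arm = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
             / (p1 - p2) ** 2)
print(f"~{n_per_arm:.0f} patients per arm")  # ~300: a "small" trial can detect a large effect
```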
 
I wonder if the earlier poster would still allege that the naysayers didn't read the study, given the depth of many replies here, including someone going through the citations, people pulling screen grabs, etc. Just want to say I appreciate the quality of the clap backs on this thread. The paper is bad science and the lay news coverage is bad journalism. I think that's pretty clear.
(I surveyed posters in this thread and 97 percent of them support the above... It's the best available evidence!)

It's a 744-page study. I would allege that nobody, possibly not even the authors, has read the study. 🤣

I have no other point.
 
Here are some studies on the benefit of endovascular thrombectomy in basilar occlusions.

https://www.nejm.org/doi/full/10.1056/NEJMoa2207576

https://www.nejm.org/doi/full/10.1056/NEJMoa2206317
These studies are fresh. Even in top institutions, our interventional colleagues haven't been consistently doing thrombectomies in basilar occlusions. It is more technically difficult and the evidence is unclear. It's like the early days of MCA thrombectomies (before this beauty https://www.nejm.org/doi/full/10.1056/nejmoa1411587).

That being said, I imagine that interventional thrombectomies will continue to improve as our technology and skills are honed.
 


Wait until they hear about NPs and PAs in the ED. But if they said that, there would be so much backlash that they would be forced to retract. Doctors are really bad at organizing as a collective.
 
Don't worry, after a while you'll become a little numb to hot-garbage stuff like this.

Whenever one of these hit pieces comes out, I remind myself of a small shop where I briefly worked. The place literally had blood stains on the curtains and was always short on RNs (way before the recent severe shortage), which predictably caused long wait times. Basically every patient would complain about one or both of those 2 things. Then one day a new hospital VP of whatever came down and lectured us that all our problems (really, their problem of low pt sat) could be solved with AIDET. Lol. When we asked about getting more nurses to actually make things safer/better for patients, we got crickets. And when we asked about the curtains, they said they could only replace ~50% of them due to the "budget."

Basically, you come to appreciate that almost nobody has any idea of what the hell is going on (and what actually matters) in HC. Especially the muppets who don't treat patients or only see them in a well-resourced ivory tower.

You don’t think AIDET works to solve all problems 😮

You sir/madam blasphem…
 
“It is grossly ridiculous to pool studies with findings several orders of magnitude apart”
The authors never pooled the death rate results with the retrospective studies. The 0.2% death rate is based on the higher-quality prospective study and corroborated by another prospective study. The authors specifically outline why the results were not pooled with lower-quality retrospective studies that suffer from under-ascertainment bias. Have you read the study?

“So, 0.12% of 70-year-olds died within 7 days of an ED visit – not even because of errors!”
Though the authors' statement in that particular sentence is inaccurate (language they do not use when describing the study in other parts), your suggestion that these were baseline expected deaths and not due to error is also inaccurate.

0.12% of patients died within 7 days of an ED visit, deaths that were unexpected since the patients had no diagnosis of a life-limiting illness. By far the most common presentation associated with risk of early death was ‘altered mental status’. So a patient comes to the ER for altered mental status, is subsequently discharged from the ER, and 7 days later dies unexpectedly despite having no diagnosis of a life-limiting illness, and you don't think an inappropriate discharge or a lack of additional testing is a possible factor that contributed to their death?

The authors themselves in the paper outline reasons why they think error is at play - “some deaths might have reflected “baseline” mortality after discharge from the emergency department. We view this as unlikely given observed variation in risk of mortality over time and across hospitals.”

full of lies
Out of a study of over 700 pages, you were able to find a single instance where the wording was inaccurate, though the citation still in fact supported their overall claim. This is your evidence that the study is "full of lies"? Since you were inaccurate, does that imply you were lying? Inflammatory rhetoric is not helpful.
The review is hot garbage. I said that shooting from the hip when I saw the headline. I said it louder when I spent 5 minutes looking at it. I screamed it louder after spending 30 minutes reading it and the studies included in it. Universally, everyone who has analyzed this paper notes multiple fundamental flaws in its methodology, statistics, design, and conclusions. This is entirely separate from the intermittently inflammatory rhetoric and editorialized condescension that flows throughout a lot of its text.

To be specific:
(1) The headline-grabbing 250,000 deaths per year is based off ONE study, from CANADA, of ONE ER's HIGH-ACUITY AREA, where they reviewed 500 consecutive charts looking for errors. Less than HALF of the patients were seen by an ED attending! So from the start, we have a small single-center study that doesn't match typical US practice and already has a selection bias towards high acuity/illness. They found a SINGLE man who died, partially due to a 6-7 hr delay in diagnosing an aortic dissection. The paper notes he was admitted for chest pain, but they found some delay in delineating that it was from dissection, which contributed to his death.

So 1/500 patients died. That means 0.2% of the patients in the 500-patient cohort died. The study authors decide this is the best evidence for the overall death rate due to ER MISDIAGNOSIS, multiply it by the number of patients seen in a year in all American ERs, and get 250,000.

This is all they have, and they use this study to grab headlines and slander an entire profession.

Now you might think this is just my opinion, but I do have statistics to back this up.

The authors themselves note that the confidence interval for this 0.2% death rate is… wide. In fact they calculate it to be from 0.005% to 1.1% (!!!). This would give you between 6,000 and 1.3 million ER deaths a year in the US. This is an insanely huge CI: ERs kill somewhere between roughly 16 people a day and 3,500 people a day in this country??
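That CI is easy to reproduce; here is a sketch using an exact (Clopper-Pearson) binomial interval. The ~120 million annual US ED visits is my assumption, picked to roughly match the 6,000 to 1.3 million range above.

```python
# Reproduce the review's CI: 1 misdiagnosis death observed in 500 visits,
# using the exact Clopper-Pearson binomial interval. The annual US ED visit
# count is an assumption.
from scipy.stats import beta

x, n = 1, 500
lo = beta.ppf(0.025, x, n - x + 1)   # lower 95% bound
hi = beta.ppf(0.975, x + 1, n - x)   # upper 95% bound
print(f"95% CI for the per-visit death rate: {lo:.3%} to {hi:.2%}")  # ~0.005% to ~1.1%

annual_ed_visits = 120_000_000       # assumed annual US ED visits
print(f"Implied annual deaths: {lo * annual_ed_visits:,.0f} to {hi * annual_ed_visits:,.0f}")
# roughly 6,000 to 1.3 million per year, i.e. ~16 to ~3,500 per day
```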

Now the authors note this fact, and also note that the 0.2% number is 217x higher (!!) than the estimate from multiple retrospective reviews. But they feel this estimate is superior to those reviews, so they stick with it.

They decide to invent out of whole cloth an estimate that the real number must be near theirs, perhaps 0.1-0.4%. Which is how you do statistics: you invent them…. See their own text below.

[Attached screenshot of the paper's own text]


As evidence that their invented estimate is correct, they cherry-pick a study showing that of the SUBSET of patients who are >65 yo and discharged from an ER, 0.12% die within 7 days. Which, in their world, means all of those deaths are misdiagnoses (?!), and since 0.12% is close to 0.2%, their number is great and supported. Now, do we care that these are only elderly people with high baseline death rates? No class, this doesn't invalidate that percentage who randomly die any given Sunday. Do we care that a large percentage of the patients' causes of death actually MATCHED the ER diagnosis from the recent visit (i.e., they were seen in the ER for COPD, then died of respiratory failure a week later), meaning de facto the ER had a correct diagnosis, but perhaps the treatment was poor or the patient just died regardless? Nope guys, that has nothing to do with what we are looking at! Do we care that a lot of patients died of things unrelated to their ER visit (i.e., 3% of them died of opiate OD!), or that the initial ED visit was for "superficial injury" in 10% of the visits and those patients died of things like acute MI, stroke, or pneumonia within a week, clearly unrelated to the primary visit? Nope. *waves hands*


So my opinion that this is a dumpster-sludge study stands. You can go through the various other conclusions of this study and they fall apart over and over again, just like the claim of a quarter million deaths a year falls apart. Perhaps you would like the brief ACEP rebuttal—


Which is brief and easy to digest. An example section follows:

[Attached screenshot: example section from the ACEP rebuttal]
 

My first take is that 17 out of 18 patients get the correct diagnosis. Wow! That seems pretty good!!!
I have used the HINTS test, and when it is positive it is helpful... but that's rare. Usually the ROAD test is a better indicator for pathology.
One thing I remember from residency is the quote "speed is the enemy of thoroughness." It's tough to try to clear an entire ER of pathology quickly while also being thorough. We're expected to move pretty fast in our ERs.
 
The review is hot garbage. I said that shooting from the hip when I saw the headline. I said it louder when I spent 5 minutes looking at it. I screamed it louder after spending 30 minutes reading it and the studies included in it. Universally, everyone who has analyzed this paper notes multiple fundamental flaws in its methodology, statistics, design, and conclusions. This is entirely separate from the intermittently inflammatory rhetoric and editorialized condescension that flows throughout a lot of its text.

To be specific:
(1) The headline-grabbing 250,000 deaths per year is based off ONE study, from CANADA, of ONE ER's HIGH-ACUITY AREA, where they reviewed 500 consecutive charts looking for errors. Less than HALF of the patients were seen by an ED attending! So from the start, we have a small single-center study that doesn't match typical US practice and already has a selection bias towards high acuity/illness. They found a SINGLE man who died, partially due to a 6-7 hr delay in diagnosing an aortic dissection. The paper notes he was admitted for chest pain, but they found some delay in delineating that it was from dissection, which contributed to his death.

So 1/500 patients died. That means 0.2% of the patients in the 500-patient cohort died. The study authors decide this is the best evidence for the overall death rate due to ER MISDIAGNOSIS, multiply it by the number of patients seen in a year in all American ERs, and get 250,000.

This is all they have, and they use this study to grab headlines and slander an entire profession.

Now you might think this is just my opinion, but I do have statistics to back this up.

The authors themselves note that the confidence interval for this 0.2% death rate is… wide. In fact they calculate it to be from 0.005% to 1.1% (!!!). This would give you between 6,000 and 1.3 million ER deaths a year in the US. This is an insanely huge CI: ERs kill somewhere between roughly 16 people a day and 3,500 people a day in this country??

Now the authors note this fact, and also note that the 0.2% number is 217x higher (!!) than the estimate from multiple retrospective reviews. But they feel this estimate is superior to those reviews, so they stick with it.

They decide to invent out of whole cloth an estimate that the real number must be near theirs, perhaps 0.1-0.4%. Which is how you do statistics: you invent them…. See their own text below.

[Attached screenshot of the paper's own text]

As evidence that their invented estimate is correct, they cherry-pick a study showing that of the SUBSET of patients who are >65 yo and discharged from an ER, 0.12% die within 7 days. Which, in their world, means all of those deaths are misdiagnoses (?!), and since 0.12% is close to 0.2%, their number is great and supported. Now, do we care that these are only elderly people with high baseline death rates? No class, this doesn't invalidate that percentage who randomly die any given Sunday. Do we care that a large percentage of the patients' causes of death actually MATCHED the ER diagnosis from the recent visit (i.e., they were seen in the ER for COPD, then died of respiratory failure a week later), meaning de facto the ER had a correct diagnosis, but perhaps the treatment was poor or the patient just died regardless? Nope guys, that has nothing to do with what we are looking at! Do we care that a lot of patients died of things unrelated to their ER visit (i.e., 3% of them died of opiate OD!), or that the initial ED visit was for "superficial injury" in 10% of the visits and those patients died of things like acute MI, stroke, or pneumonia within a week, clearly unrelated to the primary visit? Nope. *waves hands*


So my opinion that this is a dumpster-sludge study stands. You can go through the various other conclusions of this study and they fall apart over and over again, just like the claim of a quarter million deaths a year falls apart. Perhaps you would like the brief ACEP rebuttal—


Which is brief and easy to digest. An example section follows:

[Attached screenshot: example section from the ACEP rebuttal]
The calculation of the CI, and then tossing it out and making one up, is probably my favorite part of this whole hot-garbage pile.

I would also point out that the dissection patient's delay in diagnosis does not equal a missed diagnosis. We don't have the details of that case, and who knows why the delay happened (maybe the ED doctor initially suspected AD on presentation but CT was unavailable, or who knows), and the likelihood is the patient would have died from this 90%-mortality condition even if the ED had X-ray vision and diagnosed the patient 1 minute after arrival.
My point is that it's highly possible the death rate attributable to misdiagnosis in that study was 0, and 0 x (# of ED visits) = ...0!
(This would also be a bad study)
 
I don't know why we respond to jonny bananas. He's an idiot and a buffoon. He shouldn't even be allowed to post here. This study is so clearly ridiculous that we only dignify it by refuting those who support it.

Almost 10% of all deaths in the US are due to ER docs misdiagnosing?
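The arithmetic behind that "almost 10%", with total annual US deaths (~2.8 million, a rough pre-pandemic ballpark) as my assumption:

```python
# Face-validity check on the review's headline number against all US deaths.
# The total annual US deaths figure is an assumed round number.
claimed_misdiagnosis_deaths = 250_000
total_us_deaths_per_year = 2_800_000   # assumption: rough pre-pandemic level

share = claimed_misdiagnosis_deaths / total_us_deaths_per_year
print(f"Share of ALL US deaths attributed to ED misdiagnosis: {share:.1%}")  # ~8.9%
```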

 
“It is grossly ridiculous to pool studies with findings several orders of magnitude apart”
The authors never pooled the death rate results with the retrospective studies. The 0.2% death rate is based on the higher-quality prospective study and corroborated by another prospective study. The authors specifically outline why the results were not pooled with lower-quality retrospective studies that suffer from under-ascertainment bias. Have you read the study?

“So, 0.12% of 70-year-olds died within 7 days of an ED visit – not even because of errors!”
Though the authors' statement in that particular sentence is inaccurate (language they do not use when describing the study in other parts), your suggestion that these were baseline expected deaths and not due to error is also inaccurate.

0.12% of patients died within 7 days of an ED visit, deaths that were unexpected since the patients had no diagnosis of a life-limiting illness. By far the most common presentation associated with risk of early death was ‘altered mental status’. So a patient comes to the ER for altered mental status, is subsequently discharged from the ER, and 7 days later dies unexpectedly despite having no diagnosis of a life-limiting illness, and you don't think an inappropriate discharge or a lack of additional testing is a possible factor that contributed to their death?

The authors themselves in the paper outline reasons why they think error is at play - “some deaths might have reflected “baseline” mortality after discharge from the emergency department. We view this as unlikely given observed variation in risk of mortality over time and across hospitals.”

full of lies
Out of a study of over 700 pages, you were able to find a single instance where the wording was inaccurate, though the citation still in fact supported their overall claim. This is your evidence that the study is "full of lies"? Since you were inaccurate, does that imply you were lying? Inflammatory rhetoric is not helpful.
Bro. You're defending a piece of work that has been reamed by every Emergency Medicine society and multiple patient safety/quality/diagnostic error experts. I don't understand your stance, but I respect your dedication, wayward as it may be. I have no beef with neurology, and this has nothing to do with any of the authors, personally.

Nice of you to admit the authors are explicitly mischaracterising the BMJ Medicare study when supporting their plausible range. I also don't have much to say about your defence of the Medicare study. Again, 1 in 800 patients with an average age of 69 died within 7 days of an ED visit. If you look at the demographics in Table 1, you're going to see their comorbidities – more than enough cardiac disease, diabetes, COPD, etc. to cut down 0.12% of them for any reason. As for the "Altered Mental Status" cohort, you're right – the 0.3% of visits discharged with a "diagnosis" of AMS are probably a cerebrovascular or delirium event portending a serious outcome that wasn't found at the index visit. Does that make this study in any way a valid support of their 0.2% estimate? No. Heck, the BMJ paper literally says "These data should not be viewed as evidence of error." If there were a diagnostic error component, there ought not to be variation across hospital admit rates – low-admission-rate hospitals are *systematically* preventing admission for these incrementally higher-risk patients, not inducing the emergency department physicians to make diagnostic errors.

As for the "higher quality" evidence, since you don't specifically cite the study, I can't tell if you are citing Calder, Hautz, Nunez, or something else. The only one that matches your statement is Calder 2010, a grand total of 502 patients "registered in high-acuity areas of the emergency department"(i.e., CTAS scores 1-3 treated in the resuscitation or observation areas of the ED). One patient – referred for admission, no less – was a missed diagnosis of aortic dissection. I tend to appreciate "higher quality" evidence when more than a single event is observed!



My other favorite bit about this article: the 1999 "To Err is Human" report estimated that 44,000 to 98,000 people died each year due to preventable medical error, and that includes diagnostic errors, management errors, procedural errors, etc., let alone solely diagnostic error in patients discharged from the ED. The 250,000 number just doesn't have any face validity.
 
Bro. You're defending a piece of work that has been reamed by every Emergency Medicine society and multiple patient safety/quality/diagnostic error experts. I don't understand your stance, but I respect your dedication, wayward as it may be. I have no beef with neurology, and this has nothing to do with any of the authors, personally.

Nice of you to admit the authors are explicitly mischaracterising the BMJ Medicare study when supporting their plausible range. I also don't have much to say about your defence of the Medicare study. Again, 1 in 800 patients with an average age of 69 died within 7 days of an ED visit. If you look at the demographics in Table 1, you're going to see their comorbidities – more than enough cardiac disease, diabetes, COPD, etc. to cut down 0.12% of them for any reason. As for the "Altered Mental Status" cohort, you're right – the 0.3% of visits discharged with a "diagnosis" of AMS are probably a cerebrovascular or delirium event portending a serious outcome that wasn't found at the index visit. Does that make this study in any way a valid support of their 0.2% estimate? No. Heck, the BMJ paper literally says "These data should not be viewed as evidence of error." If there were a diagnostic error component, there ought not to be variation across hospital admit rates – low-admission-rate hospitals are *systematically* preventing admission for these incrementally higher-risk patients, not inducing the emergency department physicians to make diagnostic errors.

As for the "higher quality" evidence, since you don't specifically cite the study, I can't tell if you are citing Calder, Hautz, Nunez, or something else. The only one that matches your statement is Calder 2010, a grand total of 502 patients "registered in high-acuity areas of the emergency department"(i.e., CTAS scores 1-3 treated in the resuscitation or observation areas of the ED). One patient – referred for admission, no less – was a missed diagnosis of aortic dissection. I tend to appreciate "higher quality" evidence when more than a single event is observed!


My other favorite bit about this article: the 1999 "To Err is Human" report estimated that 44,000 to 98,000 people died each year due to preventable medical error, and that includes diagnostic errors, management errors, procedural errors, etc., let alone solely diagnostic error in patients discharged from the ED. The 250,000 number just doesn't have any face validity.

Bro, it doesn't matter.
He's stuck in his Canadian Neurology box, with his unique population subset and his fixed healthcare resources and legal protections. Some animals have to live in zoos...
 
It is as if we wanted to estimate Tom Brady’s stats for a season, so we watched one series out of one game of a random Canadian football QB’s career where he threw 3 passes, one of which was intercepted.

As such, quarterbacks have a 33% interception rate, and we do know with good certainty that Brady throws roughly 600 passes per season he plays, so we are rather sure he throws 200 interceptions per season.

Put me on SportsCenter, guys.
 
It is as if we wanted to estimate Tom Brady’s stats for a season, so we watched one series out of one game of a random Canadian football QB’s career where he threw 3 passes, one of which was intercepted.

As such, quarterbacks have a 33% interception rate, and we do know with good certainty that Brady throws roughly 600 passes per season he plays, so we are rather sure he throws 200 interceptions per season.

Put me on SportsCenter, guys.
Need to convert this to a hockey or curling reference for the troll to actually comprehend this.
 
Need to convert this to a hockey or curling reference for the troll to actually comprehend this.
Man I tried with the CFL.

… So imagine you’re in the flannel field, watching your favorite hockeyiesta warm up for the big match, and the jump-puck is about to trigger, but during the opening singing of Celine Dion’s majestic “My Heart Will Go On” a Zamboni driven by a Mountie punch-drunk on Alberta Rye crashes through the Blue Leaf’s front four, fouls the center back, then tips over and explodes, killing the entire starting 11 along with 42 cheerleaders and a few dozen supporters in their classic robes and top hats.

100 people killed prior to the start of the game! Knowing that over 100,000 games of hockey are played in Canada-land per annum, and that clearly 100 is an UNDER estimate of the death toll of a game (hell, the game hadn’t started yet! The death toll may approach infinity!), we estimate with confidence 10 million Canadians die at hockey games every year.

This constant downward drag on the population explains Canada’s lack of presence as a world super power, and we predict complete depopulation of the entire Continent of Canada by Spring 2025, returning it to the control of the native Moose.
 
Man I tried with the CFL.

… So imagine you’re in the flannel field, watching your favorite hockeyiesta warm up for the big match, and the jump-puck is about to trigger, but during the opening singing of Celine Dion’s majestic “My Heart Will Go On” a Zamboni driven by a Mountie punch-drunk on Alberta Rye crashes through the Blue Leaf’s front four, fouls the center back, then tips over and explodes, killing the entire starting 11 along with 42 cheerleaders and a few dozen supporters in their classic robes and top hats.

100 people killed prior to the start of the game! Knowing that over 100,000 games of hockey are played in Canada-land per annum, and that clearly 100 is an UNDER estimate of the death toll of a game (hell, the game hadn’t started yet! The death toll may approach infinity!), we estimate with confidence 10 million Canadians die at hockey games every year.

This constant downward drag on the population explains Canada’s lack of presence as a world super power, and we predict complete depopulation of the entire Continent of Canada by Spring 2025, returning it to the control of the native Moose.

*Chef's kiss*
 
Man I tried with the CFL.

… So imagine you’re in the flannel field, watching your favorite hockeyiesta warm up for the big match, and the jump-puck is about to trigger, but during the opening singing of Celine Dion’s majestic “My Heart Will Go On” a Zamboni driven by a Mountie punch-drunk on Alberta Rye crashes through the Blue Leaf’s front four, fouls the center back, then tips over and explodes, killing the entire starting 11 along with 42 cheerleaders and a few dozen supporters in their classic robes and top hats.

100 people killed prior to the start of the game! Knowing that over 100,000 games of hockey are played in Canada-land per annum, and that clearly 100 is an UNDER estimate of the death toll of a game (hell, the game hadn’t started yet! The death toll may approach infinity!), we estimate with confidence 10 million Canadians die at hockey games every year.

This constant downward drag on the population explains Canada’s lack of presence as a world super power, and we predict complete depopulation of the entire Continent of Canada by Spring 2025, returning it to the control of the native Moose.
I mean that's just science.
 
If neurologists want to criticise how I'm assessing patients in the emergency department, they're welcome to come down to the ED in a timely manner and see the patients themselves. I forget how naive some of our specialist colleagues can be, sitting in their ivory towers and throwing stones from glass houses.
 
"The error and harm rates cited for ED visits, primary care patients, and hospitalized patients are very similar, even though emergency clinicians see any and all patients, unscheduled, under great time pressure, often in an overcrowded, chaotic environment with frequent distractions. As the AHRQ report acknowledges, “The ED is one of the most challenging clinical settings to practice medicine.” That diagnostic errors are not higher in emergency medicine is, in the words of the report, “a testament to the skill and capability of practicing emergency physicians.”
 
"The error and harm rates cited for ED visits, primary care patients, and hospitalized patients are very similar, even though emergency clinicians see any and all patients, unscheduled, under great time pressure, often in an overcrowded, chaotic environment with frequent distractions. As the AHRQ report acknowledges, “The ED is one of the most challenging clinical settings to practice medicine.” That diagnostic errors are not higher in emergency medicine is, in the words of the report, “a testament to the skill and capability of practicing emergency physicians.”
Another highlight:

ED overcrowding is not an emergency medicine problem. It is a system problem and requires a system-level solution.
 
What a reasonable editorial, one that looked at the AHRQ report objectively and extracted key insights from it, in stark contrast to the many emotional reactions from others with a narrow focus on one small aspect of the study. Well done!

Some notable mentions below.

"The AHRQ report provides direction toward targeted solutions, a key insight being that just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms, which makes the problem far more tractable. Most of these conditions belong to 3 disease categories—vascular events, infections, and cancer (the “big three”).6 These are top causes of disease and death across clinical settings, so they should be prime targets for interventions."

"The AHRQ report documents that investments of these types that have been made for diagnosis of myocardial infarction can yield important dividends,"

"The AHRQ report should serve as a call to action."
 
What a reasonable editorial, one that looked at the AHRQ report objectively and extracted key insights from it, in stark contrast to the many emotional reactions from others with a narrow focus on one small aspect of the study. Well done!

Some notable mentions below.

"The AHRQ report provides direction toward targeted solutions, a key insight being that just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms, which makes the problem far more tractable. Most of these conditions belong to 3 disease categories—vascular events, infections, and cancer (the “big three”).6 These are top causes of disease and death across clinical settings, so they should be prime targets for interventions."

"The AHRQ report documents that investments of these types that have been made for diagnosis of myocardial infarction can yield important dividends,"

"The AHRQ report should serve as a call to action."
Vascular events
Infection
Cancer.

Very finite small buckets with easy directed solutions.
 