Why Doctors So Often Get It Wrong.

QuikClot

Interesting article in the NYT online today:

Why Doctors So Often Get It Wrong


By DAVID LEONHARDT
Published: February 22, 2006
ATLANTA

ON a weekend day a few years ago, the parents of a 4-year-old boy from rural Georgia brought him to a children's hospital here in north Atlanta. The family had already been through a lot. Their son had been sick for months, with fevers that just would not go away.

The doctors on weekend duty ordered blood tests, which showed that the boy had leukemia. There were a few things about his condition that didn't add up, like the light brown spots on the skin, but the doctors still scheduled a strong course of chemotherapy to start on Monday afternoon. Time, after all, was their enemy.

John Bergsagel, a soft-spoken senior oncologist, remembers arriving at the hospital on Monday morning and having a pile of other cases to get through. He was also bothered by the skin spots, but he agreed that the blood test was clear enough. The boy had leukemia.

"Once you start down one of these clinical pathways," Dr. Bergsagel said, "it's very hard to step off."

What the doctors didn't know was that the boy had a rare form of the disease that chemotherapy does not cure. It makes the symptoms go away for a month or so, but then they return. Worst of all, each round of chemotherapy would bring a serious risk of death, since he was already so weak.

With all the tools available to modern medicine — the blood tests and M.R.I.'s and endoscopes — you might think that misdiagnosis has become a rare thing. But you would be wrong. Studies of autopsies have shown that doctors seriously misdiagnose fatal illnesses about 20 percent of the time. So millions of patients are being treated for the wrong disease.

As shocking as that is, the more astonishing fact may be that the rate has not really changed since the 1930's. "No improvement!" was how an article in the normally exclamation-free Journal of the American Medical Association summarized the situation.

http://www.nytimes.com/2006/02/22/business/22leonhardt.html?incamp=article_popular
 
How is it that this one article is the basis for "Why doctors often get it wrong"?

I realize doctors have one of the highest levels of responsibility to their patients, the public, each other, etc., etc., but it's called a doctor's practice, not a doctor's perfection. Some may not believe it, but doctors are *gasp* human and make human errors. We go through all that education in order to reduce that error through expertise, but there will still be times when EVERY doctor faces error in some way, shape, or form.

Please don't view doctors as the bad guy all the time.
 
I'm shocked they get it wrong this often with the level of defensive medicine that is practiced.

Then again, medicine is a tricky business.
 
After clicking on the link, the NYT wanted me to "subscribe" so I didn't get to look at the article. What are the "autopsy studies" they talk about? That can be very misleading and a whole other can of worms.

I will admit that I don't look to the NYT for my medical news. They are way too biased for me. Others may, though, and that's ok, but I take it lightly.
 
Interesting, but essentially a useless fluff piece. What's the solution, do the autopsy sooner? 🙄
 
megboo said:
I will admit that I don't look to the NYT for my medical news. They are way too biased for me. Others may, though, and that's ok, but I take it lightly.

That's not medical news, though; it's an editorial or opinion piece written by an individual.
 
KentW said:
Interesting, but essentially a useless fluff piece. What's the solution, do the autopsy sooner? 🙄
No. The solution, in part, entails eradicating the culture of martyrdom in medicine. Most people understand that physicians are human and that, from time to time, they are going to make mistakes. All they want is for them to admit when they do make mistakes, and to make strides to improve their practice and minimize those mistakes. There are methods to do this (look in Gawande's Complications; he devotes a section to the revolution in anesthesiology). The fact that such a simple idea as the diagnostic PDA had to come from a non-physician suggests to me that the desire for improvement is the exception rather than the rule in medicine. This is not to suggest that current physicians are willfully careless, just that they are comfortable with the level of inaccuracy present in their practice. There should not be a physician anywhere in the country who sees an article like this, shrugs, and says "we're only human."
 
megboo said:
How is it that this one article is the basis for "Why doctors often get it wrong"?

The article is based on a study. The study is interesting. No conclusions were drawn, either by me or by the author.

I realize doctors have one of the highest levels of responsibility to their patients, the public, each other, etc., etc., but it's called a doctor's practice, not a doctor's perfection. Some may not believe it, but doctors are *gasp* human and make human errors.

What was more striking to me is that the rate of error hasn't improved since the 30s. Better technology, better training, more money -- why isn't the rate of error falling?

We go through all that education in order to reduce that error through expertise, but there will still be times when EVERY doctor faces error in some way, shape, or form.

Please don't view doctors as the bad guy all the time.

You are choosing to view this article in an adversarial light (see also your complaint that the NYT is "biased"). It isn't a question of "doctors good" vs. "doctors bad." Rather, it's a question of how we can improve the practice of medicine by understanding and challenging its present limitations.
 
JimiThing said:
The fact that such a simple idea as the diagnostic PDA had to come from a non-physician suggests to me that the desire for improvement is the exception rather than the rule in medicine.

Not to digress, but what are you talking about? Are you referring to Personal Digital Assistants (PDAs)? If so, the growth of handheld computers in medicine has been almost entirely physician-driven, from the initial acquisition of the devices (typically paid for by physicians out of their own pockets), to the development of medical software to run on them. Most of the early medical software developers were physicians, even the founders of some of the larger companies. If anything, the growth of PDAs in medicine stands as testimony in favor of physicians' desire to improve the quality of care, quite contrary to your example.
 
QuikClot said:
The article is based on a study. The study is interesting. No conclusions were drawn, either by me or by the author.

What was more striking to me is that the rate of error hasn't improved since the 30s. Better technology, better training, more money -- why isn't the rate of error falling?

You are choosing to view this article in an adversarial light (see also your complaint that the NYT is "biased"). It isn't a question of "doctors good" vs. "doctors bad." Rather, it's a question of how we can improve the practice of medicine by understanding and challenging its present limitations.

I would support these statements if you could specifically cite studies done on autopsies covering a wide variety of diseases (not just a specific type of leukemia) and across disciplines that indicate we are failing to correct errors. A few studies on a few autopsies won't cut it for me. Then again, like I posted before, in order to read the OpEd link you posted, I had to subscribe, and I don't care for the NYT enough to do that. Maybe if it was JAMA.

People are living longer and healthier than they were in the 30's - obviously there are more doctors detecting illness faster and more accurately than back then.
 
This is all fluff because it doesn't take logic into account. Doctor misdiagnosis hasn't improved since the 30's? I think that's a load of bull, in part because our medical record keeping and everything else was much worse in the 30's. Can you imagine how many cases of pancreatic cancer or other cancers could NOT be diagnosed without our present tools? The spectrum of diseases we can diagnose somebody as having is much greater now, with only subtle differences separating potentially fatal diseases from benign ones in many cases. Example: do you really believe HIV, prion diseases, and other diseases were suddenly created in the latter half of the 20th century? We just didn't know these diseases existed. You would have to include ONLY the subset of diseases with similar criteria and definitions then and now, and then compare how well we're diagnosing. The problem is that with modern advances, we start to realize that MANY of the things we thought were one particular disease in the past actually weren't, but this is on the basis of molecular, genetic, or other evidence, not necessarily gross or histopathologic evidence alone. And we can't retrospectively look at patients from the 30's, dig up their bodies, and say "oh, based on our modern-day criteria, they actually had this."

Another good example is a "heart attack." Do you actually know what a heart attack is? It isn't defined by association with CAD, nor does it necessarily necessitate CABG or stenting or anything. In fact, today it's just any rise in cardiac-specific troponins. So Cheney's victim had a heart attack, but not because of CAD or atherosclerosis.

We do practice defensive medicine much more today. And part of the problem is that we have SO much stuff now with which to diagnose patients, and we often encounter "ancillary" findings that COULD have a bearing on the patient's real diagnosis but really don't. I do agree that this creates a problem. However, technology will only create finer differences between disease X and disease Y, and may often split what was previously disease Z into possibly X, possibly Y, depending on subtle molecular differences discovered in a research lab, with potentially different treatments.

There are a million things that can cause particular illnesses. I'm not surprised at all at the rate of misdiagnosis of fatal illnesses, and I don't believe it will change for another 100 years. In fact, I'd be surprised if it did. And if it did, all it would mean is that we'd be spending 1,000x or more on diagnosis for every patient. And you have no idea how many patients don't submit to certain tests or procedures, because *everything* has risks, often including death.

Even in medicine, hindsight is NOT 20/20. Our findings at autopsy are partly complicated by the fact that the patient is DEAD: many problems may be physiologic, and the physiology of death is not compatible with life. Autopsy provides further structural and histopathologic evidence of disease, but it can still be mistaken as to the actual disease itself.
 
KentW said:
Not to digress, but what are you talking about?

He's talking about the diagnostic software mentioned in the article.
 
megboo said:
I would support these statements if you could specifically cite studies done on autopsies on a wide variety of diseases (not just a specific type of leukemia) and across disciplines that indicate we are failing to correct errors. a few studies on a few autopsies won't cut it for me. Then again, like I posted before, in order to read the OpEd link you posted, I had to subscribe, and I don't care for the NYT enough to do that. Maybe if it was JAMA.

You don't have to subscribe for that article, just register. And you don't have to register to read that multiple studies are being cited, the studies are not limited to leukemia, and in all likelihood cover a lot more than "a few autopsies" if the Journal of the American Medical Association is taking notice.

People are living longer and healthier than they were in the 30's - obviously there are more doctors detecting illness faster and more accurately than back then.

Sorry, that doesn't follow. People could be living longer for many other reasons: improved nutrition, better treatments, etc. Ditto to the poster who said it was "not logical" to say that the rate of errors has not improved; it is contrary to the conventional wisdom, not illogical.
 
I think the standard, or specificity, of what is considered a correct diagnosis has probably risen over time.
 
QuikClot said:
You don't have to subscribe for that article, just register. And you don't have to register to read that multiple studies are being cited, the studies are not limited to leukemia, and in all likelihood cover a lot more than "a few autopsies" if the Journal of the American Medical Association is taking notice.

I prefer not to register at web sites I don't frequent. Plus I don't make it a habit to give out personal information when I don't have to. But I gave a fake name to look at it anyway.

QuikClot said:
Sorry, that doesn't follow. People could be living longer for many other reasons: improved nutrition, better treatments, etc. Ditto to the poster who said it was "not logical" to say that the rate of errors has not improved; it is contrary to the conventional wisdom, not illogical.

My final response to this thread is to really echo what TheOneTwo wrote. I take issue with the word "often" in the title of the thread. I know that doctors do get it wrong - medicine is not an exact science. In echoing TheOneTwo, there are so many more advances today that save lives vs. what was available in the 30's.

For example, at 19 I had thyroid cancer that wasn't detected until late stage. When they took the tumor out, it was golf-ball sized, in my neck. Perhaps in the 30's they would have taken the tumor, but the radiation therapy didn't exist, and probably the benefit of the medicine I take today wouldn't have been there. Many of the symptoms I had prior to detection - sluggishness, tiredness, weight gain - are all textbook symptoms today, but back then maybe not. I might have been labeled lazy and left to die - who knows? At least today there is a gamut of tests that can be run to be more precise, cutting down on ERROR.

I think it's silly to say that medical staff are not paid to come up with the right diagnosis but only to run lab tests - how would we ever know our condition? It was certainly clear to me that I was diagnosed with cancer. Of course they are paid to diagnose, and they are penalized if they don't do it right (malpractice). Sometimes there are symptoms which cannot be explained. It's only in the recent history of healthcare that things such as Chronic Fatigue Syndrome, ADHD, etc. were recognized as bona fide diagnoses.

The author of this OpEd is clearly biased in his opinion of doctors using lab tests to help with diagnosis, and what's even worse is that he does not provide a source for the "studies" he invokes to justify the "often" in his title.
Another example of irresponsible "reporting," and a slam against those doctors who DO get it right, who are the majority.
 
QuikClot said:
He's talking about the diagnostic software mentioned in the article.

Well, then he's confused. Isabel (the software mentioned in the article) does not even run on PDAs. Moreover, its creator is Dr. Joseph Britto, a pediatric intensivist in London. The concept of software that can perform differential diagnosis is far from new, and every program that I've seen to date was designed by physicians.
 
KentW said:
Well, then he's confused.

So what? He's not here to score cheap points . . . unlike you. Some of us want to engage with this material in a critical and reflective way, but you seem to want to play "gotcha." Whatever.

Moreover, its creator is Dr. Joseph Britto, a pediatric intensivist in London.

Find me where on that website Dr. Britto is called the software's "creator." You can't, and he isn't. According to your own site "During their time in hospital, the idea of a diagnostic tool slowly evolved during conversations between the Maudes and Dr Britto." i.e., they both deserve credit for coming up with the idea.
 
megboo said:
I prefer not to register at web sites I don't frequent. Plus I don't make it a habit to give out personal information when I don't have to. But I gave a fake name to look at it anyway.

Better not take any chances; you should cut your phone line and stay in your Minnesota hills compound. :laugh:

Seriously, who cares about your surfing habits? You implied the article wasn't available to you, and I pointed out that it was. End of story. You chose not to look at it because the NYT doesn't affirm your right to marry your first cousin or whatever . . . fine. You chose to lie about your name and read it . . . great. I don't really need to know.

My final response to this thread is to really echo what TheOneTwo wrote. I take issue with the word "often" in the title of the thread.

40% of the time is "often." Deal.

The author of this OpEd is clearly biased in his opinion of doctors using lab tests to help with diagnosis, and what's even worse is that he does not provide a source for the "studies" he invokes to justify the "often" in his title.
Another example of irresponsible "reporting," and a slam against those doctors who DO get it right, who are the majority.

Yeah, you're wrong. Here is the reference you wanted (one click away at the bottom of the article in question):

How Often Are Patients Misdiagnosed?

Published: February 21, 2006
The only sure way to study the extent of misdiagnosis is to compare autopsy results to a patient's final diagnosis. When researchers have done this, they have generally found a contradiction between the two in about 40 percent of cases. Roughly half of these misdiagnoses prevented the patient from getting treatment that could have made a difference.

A good summary of the research appeared in a 1998 article in The Journal of the American Medical Association, by George D. Lundberg, then the publication's editor. He said recently that it still reflected his views.

Low-Tech Autopsies in the Era of High-Tech Medicine by George D. Lundberg, JAMA, Oct. 14, 1998 (pdf) [http://nytimes.com/packages/pdf/business/22leonhardt.pdf]
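
As I read it, the two numbers actually line up: the "serious" 20 percent in the column looks like the consequential half of the 40 percent discrepancy rate. Back-of-the-envelope -- my arithmetic, not something stated in the article:

Code:
# My reading of how the two figures relate, not a calculation the article performs:
discrepancy_rate = 0.40  # autopsy contradicts the final clinical diagnosis
consequential = 0.50     # roughly half of those affected treatment
print(discrepancy_rate * consequential)  # 0.2 -> the "20 percent" serious rate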
 
QuikClot said:
you seem to want to play "gotcha." Whatever.

You seem to expect to get away with making inaccurate and uninformed statements. Whatever. 🙄
 
QuikClot said:
Find me where on that website Dr. Britto is called the software's "creator."

The full quote is: "the idea of a diagnostic tool slowly evolved during conversations between the Maudes and Dr Britto who gradually gathered the support of a number of high profile professionals from hospitals around the UK and aboard [sic] to start work on the Isabel system.

[emphasis mine] Dr Joseph Britto MD, conceived the structure of the system, namely the application of pattern recognition software to a tutored diagnosis taxonomy and knowledge database with added layers of heuristics."

If you'd rather believe that a couple of laypeople had more to contribute to Isabel than Dr. Britto and the other medical professionals who were involved in its development, go right ahead.
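
For the curious, the basic structure that quote describes -- pattern-matching a patient's findings against a weighted diagnosis taxonomy -- is simple enough to sketch. A toy illustration, with made-up diseases, findings, and weights (nothing here is Isabel's actual design or code):

Code:
# Toy findings-vs-taxonomy matcher. Every name and weight is invented
# for illustration; this is not how Isabel actually works internally.
TAXONOMY = {
    "common leukemia": {"fever": 1.0, "fatigue": 0.5, "abnormal CBC": 2.0},
    "rare leukemia variant": {"fever": 1.0, "abnormal CBC": 2.0,
                              "light brown skin spots": 3.0},
}

def rank_diagnoses(findings):
    """Sort candidate diagnoses by total weight of matched findings."""
    scores = {dx: sum(w for f, w in fw.items() if f in findings)
              for dx, fw in TAXONOMY.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Enter ALL the findings -- including the ones that don't fit the working
# diagnosis -- and the rare variant outranks the common one:
print(rank_diagnoses({"fever", "abnormal CBC", "light brown skin spots"}))

The hard part, of course, is the tutored taxonomy and the layers of heuristics on top of it; a real knowledge base is nothing like this toy.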
 
KentW said:
You seem to expect to get away with making inaccurate and uninformed statements. Whatever. 🙄

Sorry, that would be you. I didn't say anything inaccurate, unlike your mischaracterization of the software's creation. You, it would seem, cannot even be accurate in attacking someone else for inaccuracy.

What I expect is to have a conversation with grown-ups about the issues, rather than crossing paper swords with a virgin slumped over a computer desk in his parents' basement.

If you have something to say about the article, or the software, or medicine in general . . . please. If you just want to shake your finger at other posters, while making the same kind of mistakes yourself . . . all that proves is that you're an a$$hole. And we already knew that.
 
QuikClot said:
What I expect is to have a conversation with grown-ups about the issues, rather than crossing paper swords with a virgin slumped over a computer desk in his parents' basement.

:laugh: You're obviously a lot more invested in this stuff than I am.
 
KentW said:
If you'd rather believe that a couple of laypeople had more to contribute to Isabel than Dr. Britto and the other medical professionals who were involved in its development, go right ahead.

Geeze, you can't get anything straight. Find where I said that laypeople "had more to contribute." I said they both deserved credit. You are the one who tried to ignore the contribution of the "laypeople."

You used the genesis of this software to argue that doctors didn't need or receive outside help. Now you are using your own inflated concept of "medical professionals" to argue that the "laypeople" can't have contributed much to the development of the software. That's called circular reasoning. 😉
 
QuikClot said:
Better not take any chances; you should cut your phone line and stay in your Minnesota hills compound. :laugh:

I live in Illinois.


QuikClot said:
Seriously, who cares about your surfing habits? You implied the article wasn't avalible to you, and I pointed out that it was. End of story. You chose not to look at it, because the NYT doesn't affirm your right to marry your first cousin or whatever . . . fine. You chose to lie about your name and read it . . . great. I don't really need to know.

If you don't care, why do you even reply? Gosh, someone disagrees with you and you go and get all personal.

But really, it's a pain to have to filter junk mail my filter doesn't catch.

BTW - my husband WAS adopted - who knows, maybe he IS my first cousin!

😱 :laugh:

NOT.

QuikClot said:
40% of the time is "often." Deal.

40%? I thought the article said 20% of autopsies. Oh, I see what you are talking about - you mean the JAMA article, not the OpEd from NYT. You need to be more clear.


QuikClot said:
Yeah, you're wrong. Here is the reference you wanted (one click away at the bottom of the article in question):

Yeah, that link is a RELATED article, not a reference. English 101. Deal. :laugh:

As far as that article goes, the author discusses autopsy rates at public facilities that are, in his opinion, too low, with diagnoses not much more accurate than in 1923. He goes on to describe how a mandated HCFA reform raising the percentage of autopsies performed on Medicare beneficiaries would increase the total number of autopsies performed. He is concerned about high-tech tools at the mercy of human error, but if you read closely, his most recent reference is from 1998 - 8 years ago!

This article, too, is an OpEd piece from the JAMA editor - not a research study.

A quick search on PubMed turned up these and many more studies evaluating how well antemortem diagnoses hold up at postmortem autopsy:
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=8114341&dopt=Abstract
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=1855953&dopt=Abstract
http://www.ncbi.nlm.nih.gov/entrez/...t_uids=7890278&query_hl=6&itool=pubmed_docsum
http://www.ncbi.nlm.nih.gov/entrez/...t_uids=8119713&query_hl=6&itool=pubmed_docsum

It's really nice that you do worry whether doctors are doing the right thing, but be careful about picking an emotional piece and running with it.
 
QuikClot said:
Sorry, that would be you. I didn't say anything inaccurate, unlike your mischaracterization of the software's creation. You, it would seem, cannot even be accurate in attacking someone else for inaccuracy.

What I expect is to have a conversation with grown-ups about the issues, rather than crossing paper swords with a virgin slumped over a computer desk in his parents' basement.

If you have something to say about the article, or the software, or medicine in general . . . please. If you just want to shake your finger at other posters, while making the same kind of mistakes yourself . . . all that proves is that you're an a$$hole. And we already knew that.

And how is this acting like a grownup? 🙄
 
So, tell me, do you agree or disagree with this statement?

"The fact that such a simple idea as the diagnostic PDA had to come from a non-Physician suggests to me that the desire for improvement is the exception rather than the rule in medicine."

Just curious, since this is the original comment that sparked our exchange.

I suspect we actually agree with each other, no matter how it looks so far.
 
I disagree with the statement. Medicine does a lot more than most professions to institutionalize innovation and change.

Still, I feel that the poster is at least trying to engage with the article. Why are so many cases of misdiagnosis being found? Why is a system like Isabel still the exception rather than the rule? So, even though I disagree, I think the poster is really thinking about the issues raised, whereas some other posters seem to take this concern -- which comes from within the medical community, and is only being repeated for a wider public -- as if it were a personal attack, or some sort of radical critique of Western medicine, which I don't think it is.

It is often the case, and not just in medicine, that innovative ideas come from people trained in other disciplines. Function of having a fresh pair of eyes, I suppose. Kuhn wrote about this phenomenon in The Structure of Scientific Revolutions.

I appreciate your making an effort to lower the rhetorical temp. a notch. That makes you the bigger man in this exchange.
 
KentW said:
If anything, the growth of PDAs in medicine stands as testimony in favor of physicians' desire to improve the quality of care, quite contrary to your example.
I meant to refer specifically to diagnostic software, which, as you pointed out, has had quite a few physician developers. However, from the WSJ:

. . . for the most part, such DDS programs haven't been what the doctor ordered. Many physicians consider such programs too time-consuming or cumbersome, or not helpful enough to be worth the investment. Proponents of the software say the problem is also with the doctors themselves: Physicians, they say, prefer to rely on their own experience and training, and are reluctant to use computers in making judgments.

"DDS has remarkably little market share or presence," says Thomas Handler, a physician and research director at Gartner Inc., a technology consulting firm based in Stamford, Conn. Many agree with his assessment that such programs simply haven't proved to be more adept at diagnosis than physicians are on their own. Medicine, Dr. Handler says, "is really an art."

The result is that while estimates vary, the consensus is that no more than 2% of doctors in the U.S. use diagnostic-support software.
The first bolded section echoes the view I gathered from Complications and gets to the heart of what irks me about medical culture. While there are some physicians eagerly trying to improve upon the art and science of medicine, they do so in the face of detractors, who see such innovations as crutches or impediments to true practice. When confronted with their own fallibility, these detractors strike a near-immediate defensive posture. For example, rather than viewing diagnostic software as a useful tool which can assist them in making a diagnosis, they point to its limitations and claim that they can do better, an assertion that a few studies have undermined.
I know much of this is changing as the first generation raised with modern medicine takes over, but there still seems to be a sizeable contingent that focuses too heavily on tradition. Currently, the ratio would appear to be ~49:1.

KentW said:
Well, then he's confused. Isabel (the software mentioned in the article) does not even run on PDAs. Moreover, its creator is Dr. Joseph Britto, a pediatric intensivist in London. The concept of software that can perform differential diagnosis is far from new, and every program that I've seen to date was designed by physicians.
From the WSJ again: ". . . some DDS programs, including Isabel, can be used on PDAs as well as desktops."
Furthermore, if you listen to the interview from the NYT article (or go here), the idea for Isabel was Jason Maude's; he then recruited Dr. Britto to design the structure of the software.
 
Thank you for the clarification. However, I still take exception to the premise of your statement, that physicians are not interested in improving quality of care.

The lack of enthusiasm about DDx software is multifactorial. You can't just throw money or software at the problem. DDx programs are not our salvation, although they're a potentially useful tool in the appropriate circumstances. I agree that physicians as a group need to embrace computer technology (particularly EHR) on a larger scale, and I believe (without being able to quote studies off the top of my head to back me up, although they're out there) that doing so has the potential to improve quality of care.

Being an advocate for change can be frustrating. As an example, I've touted the benefits of a little Palm OS program called eDerm, which is designed to assist primary care clinicians in the diagnosis and management of skin lesions, to many of my colleagues who use PDAs. As far as I know, I'm still the only one using it.

IMO, the biggest thing that would improve quality of care is the ability to spend more time with our patients.
 
As I’m probably the only person on this thread who does autopsies (43 and counting), I’ll weigh in with my two cents.

First off, in response to TheOneTwo, most of the advances in diagnostics since the 1930's have been etiological, not presence/absence of disease. You mentioned prions, but Creutzfeldt and Jakob described the spongiform encephalopathy in the 1920's. When we find a serious discordance from the clinical diagnosis at autopsy, it's not a matter of deciding whether the patient had Morganella or Enterobacter in his blood. A more realistic situation is having a clinical diagnosis of respiratory distress, and then finding a raging peritonitis at autopsy. That is the kind of thing that has been going on since the inception of autopsies.

My unscientific recollection is that about one out of every three or four cases has some significant new finding: metastatic ovarian cancer, hepatocellular carcinoma, giant retroperitoneal bleed, multiple cavitary lung lesions, etc.

The biggest recent review of this phenomenon was published in JAMA in 2003: "Changes in Rates of Autopsy-Detected Diagnostic Errors Over Time: A Systematic Review" by Shojania et al. (JAMA, 289:2849-56). They performed a meta-analysis of 53 autopsy series from 1960 to 2002 and determined that the contemporary major-error rate in the US likely ranges from 8.4 to 24.4%. They also calculated a 19.4% relative decrease in major errors per decade.
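
To see how that per-decade figure compounds, here's a quick back-of-the-envelope; the 40% starting rate is my assumption for illustration, not a number from the paper:

Code:
# Rough illustration of a 19.4% relative reduction per decade. The 40%
# starting rate is assumed, not taken from Shojania et al.
rate = 0.40              # assumed major-discrepancy rate circa 1960
for decade in range(4):  # four decades: 1960 -> 2000
    rate *= (1 - 0.194)
print(round(rate, 3))    # ~0.169, inside their 8.4-24.4% contemporary range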

I’m sure we’re all aware of the pitfalls of meta-analyses, but the thrust of this article is probably pretty true: major diagnostic errors have decreased over time but are still very pervasive.
 
I read each of the abstracts you posted...

From #1: "The percentage of the correct clinical diagnosis was 88.4% of 2,858 cases with subarachnoid hemorrhage, 84.7% of 3,051 cases with cerebral hemorrhage and 80.3% of 3,602 cases with cerebral infarction. The clinical diagnosis was correct in 66.6% of 9,891 cases with myocardial infarction."

From #2: "The difference in incorrect clinical diagnoses between the two periods is statistically significant. With respect to infectious diseases, the concordance between clinical and autopsy diagnoses is even poorer."

From #3: "Even taking into account the biases that affect selection of patients for autopsy, the notable discrepancy found between clinical and autopsy diagnoses underlines the fact that autopsy, despite improvements in diagnostic techniques, maintains its fundamental importance in assessing the reliability of clinical diagnoses and furthermore shows the underestimation of the incidence of tumors in epidemiological studies based solely on death certificates."

From #4: "We conclude that the role of the autopsy has not diminished in spite of advanced diagnostic methods and it remains an effective tool in the assessment of medical care."
 
You might also consider the fact that autopsy rates have diminished significantly within the last several decades. Rather than being commonplace as they once were, we only perform them when we have doubts about diagnosis. Thus, there may be selection to do autopsies on cases that are specifically equivocal. Just a thought.
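
To put toy numbers on that selection effect (all of these are made up, just to show the direction of the bias):

Code:
# Invented numbers illustrating selection bias in autopsy series.
true_error_rate = 0.10        # assumed misdiagnosis rate among all deaths
autopsy_if_equivocal = 0.50   # equivocal (often misdiagnosed) cases get autopsied
autopsy_if_clear = 0.05       # clear-cut cases rarely do
observed = (true_error_rate * autopsy_if_equivocal) / (
    true_error_rate * autopsy_if_equivocal
    + (1 - true_error_rate) * autopsy_if_clear
)
print(round(observed, 2))  # ~0.53: a 10% problem looks like 50% in the autopsied sample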
 
Havarti666 said:
I read each of the abstracts you posted...

From #1: "The percentage of the correct clinical diagnosis was 88.4% of 2,858 cases with subarachnoid hemorrhage, 84.7% of 3,051 cases with cerebral hemorrhage and 80.3% of 3,602 cases with cerebral infarction. The clinical diagnosis was correct in 66.6% of 9,891 cases with myocardial infarction."

From #2: "The difference in incorrect clinical diagnoses between the two periods is statistically significant. With respect to infectious diseases, the concordance between clinical and autopsy diagnoses is even poorer."

From #3: "Even taking into account the biases that affect selection of patients for autopsy, the notable discrepancy found between clinical and autopsy diagnoses underlines the fact that autopsy, despite improvements in diagnostic techniques, maintains its fundamental importance in assessing the reliability of clinical diagnoses and furthermore shows the underestimation of the incidence of tumors in epidemiological studies based solely on death certificates."

From #4: "We conclude that the role of the autopsy has not diminished in spite of advanced diagnostic methods and it remains an effective tool in the assessment of medical care."


Yes, but all in all, these articles acknowledged that overall clinical diagnosis has improved - it's the autopsy rate that needs to improve.

I posted late at night so I wasn't able to comment on all the details - sorry about that!
 
chef_NU said:
You might also consider the fact that autopsy rates have diminished significantly within the last several decades. Rather than being commonplace as they once were, we only perform them when we have doubts about diagnosis. Thus, there may be selection to do autopsies on cases that are specifically equivocal. Just a thought.

I see that trend in the literature as well. The physicians doing research and/or writing opinion articles on it seem to be concerned with the diagnoses Medicare recipients receive and what the corresponding autopsy diagnoses turn out to be. Some would argue that Medicare recipients don't receive as high a standard of care as private-pay or insurance-pay patients, and that therefore there may be more cases of misdiagnosis. (I don't necessarily agree with that!)
 
chef_NU said:
You might also consider the fact that autopsy rates have diminished significantly within the last several decades. Rather than being commonplace as they once were, we only perform them when we have doubts about diagnosis.

Selection bias could definitely be a factor, but it's very difficult to gauge how much (if any) is present.

If autopsies were only performed when there were doubts about the diagnosis, I'd be a much happier guy. It's annoyingly common to get cases like an 89-year-old train wreck who has been living in a nursing home for years, is DNR/DNI, and is found expired one morning. Reason for autopsy? The spouse wants one. Why does the spouse want one? The world will never know.
 
I think it's great that we're getting the skinny from someone who actually does autopsies.

Tell me, did you look at this article, and what do you think of it?

Low-Tech Autopsies in the Era of High-Tech Medicine by George D. Lundberg, JAMA, Oct. 14, 1998 (pdf) [http://nytimes.com/packages/pdf/business/22leonhardt.pdf]

Sorry . . . the pdf won't cut and paste, so I can't copy out a section of the text.
 
What about litigation? Maybe physicians feel they can't afford to be wrong because they will be sued. Patients expect perfection (IMO), and when they don't receive it, some are quick to get a lawyer. The problem with diagnosis is that it is not always a simple, PDA-friendly algorithm (it can be, most of the time); sometimes it's a rare genetic disorder (especially in children) that is not often seen or accurately diagnosed the first time.

I think physicians need an environment where patients understand the risk. Healthcare is tricky and changing every day. We do not have all the answers, surgery does not always correct a lifetime of excess (smoking, ETOH, obesity), and you may die even with the correct diagnosis.

I have oversimplified the problem but I am on my way to work and just wanted to weigh in….
 
oldManDO2009 said:
What about litigation. Maybe physicians feel they can’t be wrong because they will be sued.

That is true. A couple of times I've had very nervous clinicians call and/or come see the autopsy because they were petrified that they had screwed up. I can think of one case where they did mess up and it killed the patient, but their mistake was a known complication of an invasive procedure that happens 1-2% of the time. Sometimes the family explicitly states that they want an autopsy because they feel that the doctors might have done something wrong. I tread gingerly in those.

One of the biggest benefits to autopsy is that it affirms that the family could not have done anything differently to save the decedent. Even if unexpected findings are uncovered at autopsy, they usually are things that no competent clinician could have been expected to find or cure.

And if we find that the clinicians really did drop the ball then hey, maybe the family deserves to be compensated. In the end I think autopsy serves to extinguish more litigation than it promotes.
 
oldManDO2009 said:
I think physicians need an environment where patients understand the risk.

A small number of physicians attract a highly disproportionate number of lawsuits. The number one factor cited in successful malpractice suits (and by that I mean suits settled out of court or decided in the plaintiff's favor at trial) is lack of adequate communication. Hence, if physicians need an environment where patients understand the risk, then we the physicians are going to have to create it. Nobody else is going to step in and do it for us.
 