Chiropractors can interpret films just as well as radiologists...


MacGyver
Joined Aug 9, 2001
http://www.ncbi.nlm.nih.gov/entrez/...ve&db=PubMed&list_uids=12221360&dopt=Abstract

I didn't want to believe it either. Tell me why this study is flawed.

STUDY DESIGN: A cross-sectional diagnostic study was conducted in two sessions.

OBJECTIVE: To determine and compare the reliability and validity of contraindications to chiropractic treatment (infections, malignancies, inflammatory spondylitis, and spondylolysis-listhesis) detected by chiropractors, chiropractic radiologists, and medical radiologists on plain lumbosacral radiographs.

SUMMARY OF BACKGROUND DATA: Plain radiography of the spine is an established part of chiropractic practice. Few studies have assessed the ability of chiropractors to read plain radiographs.

METHODS: Five chiropractors, three chiropractic radiologists and five medical radiologists read a set of 300 blinded lumbosacral radiographs, 50 of which showed an abnormality (prevalence, 16.7%), in two sessions. The results were expressed in terms of reliability (percentage and kappa) and validity (sensitivity and specificity).

RESULTS: The interobserver agreement in the first session showed generalized kappas of 0.44 for the chiropractors, 0.55 for the chiropractic radiologists, and 0.60 for the medical radiologists. The intraobserver agreement showed mean kappas of 0.58, 0.68, and 0.72, respectively. The difference between the chiropractic radiologists and medical radiologists was not significant. However, there was a difference between the chiropractors and the other professional groups. The mean sensitivity and specificity of the first round, respectively, was 0.86 and 0.88 for the chiropractors, 0.90 and 0.84 for the chiropractic radiologists, and 0.84 and 0.92 for the medical radiologists. No differences in the sensitivities were found between the professional groups. The medical radiologists were more specific than the others.

CONCLUSIONS: Small differences with little clinical relevance were found. All the professional groups could adequately detect contraindications to chiropractic treatment on radiographs.
For this indication, there is no reason to restrict interpretation of radiographs to medical radiologists. Good professional relationships between the professions are recommended to facilitate interprofessional consultation in case of doubt by the chiropractors.
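For anyone rusty on the statistic: kappa measures agreement corrected for chance, so the reported 0.44 vs 0.60 is a real but modest gap. A minimal sketch of Cohen's kappa for two raters, using made-up 0/1 labels purely for illustration (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same films."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed fraction of films the two raters labelled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, given each rater's own base rates
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels: 1 = contraindication present, 0 = normal
a = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
b = [1, 0, 0, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(a, b), 2))  # 8/10 raw agreement shrinks to kappa 0.58
```

Note that 80% raw agreement here collapses to a kappa of 0.58 once chance agreement is removed, which is why the abstract reports kappa rather than simple percentages.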

 
Can you draw the conclusions they did by only looking at the work of 3 chiro radiologists and 5 med radiologists? I would think the study is lacking in power.
 
I agree... it's absolutely impossible to generalize these results.

This study is totally dependent upon the expertise of a very select few participants. There's no way to assume that the abilities of these few chiropractors and "chiropractic radiologists" are in any way representative of other people in the field.

I haven't read the full article, but the abstract raises several other salient questions...

Were the physicians and chiropractors blinded with regard to patient identity and clinical presentation?

Were the participant groups (physicians/chiropractors) matched with regard to years of experience?

How did the training differ between the groups of study participants? Had any of the chiropractors or "chiropractic radiologists" completed any type of formalized radiological training?

How was diagnostic agreement defined? The abstract makes it sound as if the readers' only question was whether or not there was a contraindication to treatment.

What's the gold standard? If this study uses specificity and sensitivity, then surely even the most experienced radiologist has a false positive/false negative rate. How was this assessed?

I know that criticizing a study by the abstract is like judging a book by its cover, but this trial appears to have a number of problems. No matter how the above questions are answered, the bottom line is that the investigators absolutely cannot generalize the results based on the few study participants. As was mentioned earlier, this trial was inadequately powered and probably did not warrant publication.

doepug
 
I agree...the study appears to lack power.

It does appear that the study participants were blinded:
"300 blinded lumbosacral radiographs"

In addition, as was alluded to earlier, the participants were asked to look for four specific pathologies:
"infections, malignancies, inflammatory spondylitis, and spondylolysis-listhesis"
which would represent contraindications to manipulation by the chiropractor. So, the most one could generalize from this is that medical radiologists, chiropractic radiologists, and chiropractors are equivalent in recognizing whether any of these four pathologies are present. It does not allow one to say that these three groups are equivalent in all aspects of reading spine films. Also, these are only plain films, so one cannot comment on MRI or CT.
 
I agree with the above who note that the study lacked power, and that trends toward increased agreement and accuracy among radiologists did not reach significance in the study. In addition, when they ran a pooled analysis, even in this small study, general chiros were worse than chiropractic "radiologists" and real radiologists!

However, a much more important issue is why these radiographs are needed in the first place. Plain films can detect spondylolisthesis; however, they are not the best way to look for "infections, malignancies, inflammatory spondylitis". In fact, ruling those diagnoses out on the basis of plain films would be malpractice for a real doctor.

In other words, this study picks one test -- not nearly the best test -- for looking for pathology that is uncommon in the general population. And chiros have made a big fuss to Congress that so-called "subluxations" are a clinical diagnosis that doesn't have to be visible on films. So why do the films at all, when the pre-test probability of pathology is so low?

The lower specificity among chiros speaks volumes. This corresponds to identifying that an abnormality existed, but not correctly identifying it. Lumbosacral plain films are not the hardest study to interpret, and yes, one group of health care workers could probably be trained to pick out abnormalities. But who do you want looking at your family's imaging and making the call?

(BTW another issue is what the gold-standard interpretation was, and who made it.)
 
We had a chiropractor bring us an MRI for an over-read. Yes, apparently, he was attempting to read his own MRIs. He wanted to know what the tumor was anterior to the thoracic spine. It was the esophagus.

We refused to officially over-read it as did the other radiologists in the city.

Non-radiologists will make mistakes. There may be those who can honestly do an adequate job, but they will make mistakes. Hey, radiologists can expect to be sued 4 times in their careers. The difference is that the non-radiologists cannot defend themselves. They certainly cannot argue that they've provided "the standard of care" without radiology board certification. Lawyers will love that.
 
Originally posted by eddieberetta
I agree with the above who note that the study lacked power, and that trends toward increased agreement and accuracy among radiologists did not reach significance in the study.

....

The lower specificity among chiros speaks volumes. This corresponds to identifying that an abnormality existed, but not correctly identifying it. Lumbosacral plain films are not the hardest study to interpret, and yes, one group of health care workers could probably be trained to pick out abnormalities. But who do you want looking at your family's imaging and making the call?

I still agree that the study is far from conclusive (or even suggestive).

However, statistical tests measure differences between groups, not similarities. So, it is the fact that there was no statistical difference that allows the authors to claim that there is no difference in the abilities of these three groups to read films. That is why it is so important to know the power. If the study had very little chance of finding any difference, then the fact that they found none is much less meaningful. Had this study included hundreds of practitioners, had fantastic power, and found no statistical difference, then the argument for no real difference would be more compelling.
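One way to see how little power 50 abnormal films buy: treat each group's reported sensitivity as a single proportion over those 50 films and compute a rough Wald 95% confidence interval. This is a deliberate simplification that ignores pooling across the multiple readers, so take it as a back-of-the-envelope sketch, not a reanalysis:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Approximate 95% CI for a proportion (normal/Wald approximation)."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

# 50 abnormal films in the study; first-round mean sensitivities from the abstract
for group, sens in [("chiropractors", 0.86),
                    ("chiro radiologists", 0.90),
                    ("medical radiologists", 0.84)]:
    lo, hi = wald_ci(sens, 50)
    print(f"{group}: {sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The three intervals overlap almost completely (each is roughly ±0.08-0.10 wide), so "no significant difference" here is nearly guaranteed regardless of the groups' true abilities.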

Also, the low specificity means that the practitioners identified the film as abnormal when it was actually normal. So, they likely identified more abnormalities than there actually were.
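This can be made concrete with the abstract's own first-round figures for the chiropractors, treating the reported rates as exact (a hypothetical back-of-the-envelope calculation, not numbers from the paper itself):

```python
# Abstract's first-round figures for the chiropractors:
# 300 films, 50 truly abnormal, sensitivity 0.86, specificity 0.88.
abnormal, normal = 50, 250
sens, spec = 0.86, 0.88

true_pos = sens * abnormal        # 0.86 * 50 = 43 abnormal films correctly flagged
false_pos = (1 - spec) * normal   # 0.12 * 250 = 30 normal films flagged abnormal
ppv = true_pos / (true_pos + false_pos)
print(f"flagged abnormal: {true_pos + false_pos:.0f}, PPV: {ppv:.2f}")
```

So of roughly 73 films flagged as abnormal, only 43 actually were: with low prevalence, even a small specificity shortfall produces a large crop of false alarms.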
 
Originally posted by Brewster
Also, the low specificity means that the practitioners identified the film as abnormal when it was actually normal. So, they likely identified more abnormalities than there actually were.

Yep, you are correct. What I meant to say was that the chiros called a lot of films abnormal but were not correct in their estimation. It would also depend on whether the study was simply based on normal/abnormal calls or whether they factored in the correct diagnosis.

Not worth looking up. Spine series for chiropractic complaints are bogus anyway. As for chiros ordering MRIs? That's just wrong.

Cheers
 
I see that the study was done in the Netherlands... but what journal was responsible for peer-reviewing this study?
 