How do you tell if a systematic review (meta-analysis) is garbage or valid?


sgv

I don't know where to post this, and I assumed medical students would know more about clinical trials than dental students, so I'm venturing into this side of the forum.

Here's my question.

Is a systematic review or meta-analysis worthless if almost all of the studies it compares differ in the number of outcomes assessed, follow-up length, bias-assessment criteria, and protocols (for example, different stimuli used to assess pain)?

Could you help me figure out whether this paper, a meta-analysis using the standardized mean difference, is garbage because the studies that comprise it are almost all garbage?

http://jdr.sagepub.com/content/90/3/304.full.pdf+html

Also, on an unrelated note, is conducting a meta-analysis really easy and cheap to do? It seems all you need is a couple of statisticians familiar with the topic of the clinical trials involved.
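
For concreteness, here's a toy sketch of what a standardized-mean-difference meta-analysis actually computes. The study numbers are invented, and the DerSimonian-Laird random-effects pooling shown is one common method, not necessarily what this paper used:

# Toy standardized-mean-difference (SMD) meta-analysis.
# All study data below are invented for illustration.
import math

# (mean_treatment, mean_control, pooled_sd, n_treatment, n_control)
studies = [(2.1, 1.5, 1.2, 20, 20),
           (1.8, 1.6, 0.9, 15, 14),
           (2.5, 1.4, 1.5, 30, 28)]

effects, variances = [], []
for m1, m0, sd, n1, n0 in studies:
    d = (m1 - m0) / sd                                    # Cohen's d, one form of SMD
    v = (n1 + n0) / (n1 * n0) + d ** 2 / (2 * (n1 + n0))  # approximate variance of d
    effects.append(d)
    variances.append(v)

# Fixed-effect (inverse-variance) pooled estimate
w = [1 / v for v in variances]
fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)

# DerSimonian-Laird estimate of between-study variance tau^2
q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pool: weights flatten out as tau^2 grows
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled SMD = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI)")

The point of the SMD is that studies measuring the same construct on different scales can still be combined; it doesn't rescue studies that measured different things.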

The whole point of a meta-analysis is to take a bunch of studies and make actual sense of them together. It's a lot more powerful than a few individual studies. I'd say one is almost never worthless.
 
When would it be unreasonable to combine results from multiple sources that used different protocols and had different levels of bias? Is there some sort of algorithm? If they used one, I'd expect this meta-analysis to show it, but it's not there.
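
From what I can tell, the closest thing to a standard check is the heterogeneity analysis: Cochran's Q and the I^2 statistic quantify how much the study results disagree beyond what sampling error alone would explain. A minimal sketch with made-up effect sizes, assuming scipy:

# Cochran's Q and I^2: do the studies disagree more than chance allows?
# Effect estimates and variances are invented for illustration.
from scipy import stats

effects = [0.20, 0.85, 0.10, 0.60]    # per-study effect estimates
variances = [0.04, 0.06, 0.05, 0.08]  # per-study sampling variances

w = [1 / v for v in variances]
pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
df = len(effects) - 1
p = stats.chi2.sf(q, df)              # small p suggests real heterogeneity
i2 = max(0.0, (q - df) / q) * 100     # % of variation beyond chance

print(f"Q = {q:.2f}, p = {p:.3f}, I^2 = {i2:.0f}%")
# Rough convention: I^2 above ~50-75% means pooling may not be reasonable.

A meta-analysis that pools very different protocols should at least report something like this; if it's absent, that's a red flag.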
 
GIGO: garbage in, garbage out.

I'm not sure whether conducting a meta-analysis is "cheap," but combing through the literature and honestly assessing each paper, then doing the appropriate statistical analysis, then writing the entire thing up, and then getting it published is not really "easy."
 
No PRISMA criteria, no funnel plots, no care.
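
For anyone who hasn't drawn one: a funnel plot is just each study's effect plotted against its standard error, and asymmetry hints at missing (likely unpublished) studies. A minimal sketch with invented numbers, assuming matplotlib and numpy:

# Funnel plot: effect size on x, standard error on y (inverted, so the
# most precise studies sit at the top). Data are invented for illustration.
import matplotlib.pyplot as plt
import numpy as np

effects = np.array([0.30, 0.50, 0.45, 0.70, 0.90, 0.55])
ses = np.array([0.05, 0.10, 0.12, 0.20, 0.30, 0.15])

plt.scatter(effects, ses)
plt.gca().invert_yaxis()
plt.axvline(np.average(effects, weights=1 / ses ** 2), linestyle="--")
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Funnel plot: an empty lower corner hints at missing studies")
plt.show()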
The reason I find this article so sketchy is that the abstract says, "Of the 677 unique citations, 12 studies with high risk-of-bias were included." This raises the question: what was the total number of included studies, if 12 of them had a high risk of bias?

So I went to the results and looked at the study selection. It says, "...retrieved 677 unique sources...excluded 503 citations...pared remaining 174 citations to 15...four articles not previously found through electronic search was discovered in the references of the citations...4 did not meet inclusion criteria and 3 were previous reports of included studies...the remaining 12 reports were subject to detailed analysis."
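
Tracing that arithmetic: 677 - 503 = 174 citations screened; 174 pared down to 15, plus 4 found in reference lists = 19 full-text reports; 19 - 4 not meeting inclusion criteria - 3 duplicate reports = 12 included.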

That means 12 out of 12 total sources used had high risk of bias?!

Did they intentionally leave the total number of included studies out of the abstract because they didn't want to say explicitly that all of their sources had a high risk of bias?
 
I'd assume that means 12 out of the 677 they were initially pulling from. I don't think that reflects on the final 12.
 
Thanks, everyone.
 
Start with bad data, end with bad statistics about said data. "Almost never worthless" seems a bit extreme.
 
So I run the journal club for our residents and also teach the research course. Something I harp on again and again is that meta-analyses sound like they should be awesome, but they can be worthless if done poorly. And unlike a straightforward RCT, it's harder to pick the good ones out from the bad ones, especially just from the abstract. A few ways meta-analyses can go south...

1) Poor article selection- if they don't do a good job of picking articles for analysis, then it's garbage in, garbage out. All meta-analyses should tell you how they selected articles. If there isn't a good, exhaustive search, or the terms they used to comb PubMed aren't the best search terms, then start to be wary.

2) Comparing apples to oranges- it's tough, when you bring a bunch of articles together, to make sure that the endpoints you pool are actually supported by the combined data. For example, if one article had a primary endpoint of hospital discharge and stopped collecting data at that time, and a second article had a primary endpoint at 6-month follow-up, then a combined analysis looking at 6-month mortality is useless.

3) Bias- 5 small biased studies added together don't make one good big study. If the individual studies were done poorly, then all they do is poison the well for a meta-analysis (see the toy simulation after this list).

4) Overreaching conclusions- this happens in a lot of papers. Just because 5 articles that were underpowered on their own all suggested something, aggregating their data doesn't necessarily give you enough power to make a statistical conclusion. You may be able to make an anecdotal-series argument, but unless they dredged the original data tables into one sheet and re-analyzed, oftentimes the statistics section has some shady outcomes. Reading and understanding the statistical output is key to knowing whether the data are good or bad.
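
To make point 3 concrete, here's a toy simulation (all numbers invented): five small studies that share the same systematic bias pool into a tight confidence interval around the wrong answer.

# Five small studies, each overestimating a truly null effect by the same
# amount. Pooling makes the estimate precise, not valid.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.0   # the real effect is null
shared_bias = 0.5   # every study overestimates by the same amount
n = 30              # subjects per arm

effects, variances = [], []
for _ in range(5):
    treat = rng.normal(true_effect + shared_bias, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    effects.append(treat.mean() - ctrl.mean())
    variances.append(treat.var(ddof=1) / n + ctrl.var(ddof=1) / n)

w = 1 / np.array(variances)
pooled = np.sum(w * np.array(effects)) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled estimate = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI)")
# The interval lands near the shared bias (~0.5), far from the true
# value of zero: aggregation bought precision, not validity.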
 
I'm generally not on board with meta-analyses, and several professors I've worked with have warned against them (which is probably why I don't like them). One of the big issues is that they take publication bias and compound it, making it even worse. They also risk overlooking methodological flaws and limitations in the experiments they pull from, and they can combine data that really aren't quite the same and provide an interpretation that shouldn't be there. Really, if you want to look at a topic across several published papers, write a good review paper. Don't try to mush a bunch of data together. Examine each article individually to consider what it can tell us and what its limitations could be, and look at negative results the same way. Then try to interpret these results in light of what else we know on the subject.
 
I agree with you that starting with bad data ends with bad statistics about said data. Nice username.
Well, wouldn't that be a subjective qualm, depending on how consistent the methods and so on are across all the studies pooled in a meta-analysis? I agree that you can't make good data out of pooling 50 junk studies, but I thought meta-analyses were generally pretty selective and for the most part included only studies that were quite similar.
 
It's not that a meta-analysis can't be good. But even if the studies were very well designed, there's still strong publication bias, which pooling just further amplifies. And yeah, there's an element of subjectivity, but really, the only thing that is truly objective is the raw data set. Raw data generally aren't very useful to us without some kind of meaningful interpretation, though; that doesn't mean we can't assess how logical the interpretation is. Still, if you want to analyze the literature on a topic, a review paper seems the better route. That way you are just looking at what is out there and not creating a "new" data set from a bunch of separate ones. I know this may seem like an odd position given that many tout meta-analysis as the highest level of evidence, but it just seems too prone to compounded error. If you want a larger data set, run a new experiment with a larger sample. Yeah, it's difficult and expensive, but that's science for ya.
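
That said, the publication-bias worry is at least partially checkable. Egger's regression test looks for funnel-plot asymmetry by regressing each study's standardized effect on its precision; a nonzero intercept is the warning sign. A minimal sketch with invented numbers, assuming scipy 1.6+ (for intercept_stderr):

# Egger's regression test: an imperfect but common screen for
# publication bias. Study data are invented for illustration.
from scipy import stats

effects = [0.30, 0.55, 0.48, 0.80, 1.10]
ses = [0.08, 0.15, 0.12, 0.25, 0.35]

standardized = [e / s for e, s in zip(effects, ses)]  # effect / SE
precision = [1 / s for s in ses]                      # 1 / SE

res = stats.linregress(precision, standardized)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), len(effects) - 2)          # t-test on the intercept
print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")
# An intercept far from zero (small p) suggests small, imprecise studies
# report systematically different effects, i.e. possible publication bias.

It can't fix the bias, though; it can only flag it, and with a handful of studies it has very little power.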
 