Sorry for the sensationalist title, I just wanted to draw attention to a deserving topic that has been in the news recently.
After a few recent newsworthy scientific scandals (Climategate, the XMRV/chronic fatigue syndrome virus study, the MMR vaccine-autism study, etc.), folks have been looking closely at how these studies could go unchecked for so long.
In fact, the most recent issue of Science is dedicated to raising awareness of the overall lack of independent replication of studies and of the bias toward positive results.
Here's a great article in The New Yorker called "The Truth Wears Off." It discusses journals' propensity to select for positive findings and how this relates to various conflicts of interest. A couple of brief excerpts:
Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren't favorable. But it's becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
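Sterling's point is easy to demonstrate with a quick simulation. This is my own toy sketch in Python, not anything from the article: every experiment below has a true effect of exactly zero, yet filtering on significance produces a "literature" that is 100% positive findings.

```python
# Illustrative sketch (toy simulation, not from the article): run many
# two-sample experiments where the TRUE effect is zero, then "publish"
# only those that look significant. The published record then consists
# entirely of false positives, yet reads as uniformly successful.
import random
import statistics

random.seed(1)

def crude_t(n=20):
    """Crude t statistic for two groups drawn from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return diff / se

results = [crude_t() for _ in range(2000)]
published = [t for t in results if abs(t) > 2.0]  # roughly a p < 0.05 filter

print(f"studies run: {len(results)}, studies 'published': {len(published)}")
print(f"true-null studies reaching significance: {len(published) / len(results):.1%}")
```

Only about one in twenty null experiments clears the filter, but a reader who sees just the published ones has no way to know that.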
He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they're subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn't random at all but instead skewed heavily toward positive results.
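Both the funnel pattern and the skew Palmer noticed can be reproduced in a toy simulation. The numbers here are made up for illustration (a true effect of 0.2, an arbitrary publication threshold for small studies); none of it is Palmer's actual data.

```python
# Illustrative sketch (assumed numbers, not Palmer's data): a funnel of
# study estimates around a true effect of 0.2. Large studies cluster near
# the truth; small studies scatter widely. If small studies are published
# only when their estimate is big enough, the small-sample average is
# inflated, which is exactly the skew a funnel plot makes visible.
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2

def run_study(n):
    """Return (effect estimate, sample size) for a study of n observations."""
    est = statistics.mean(random.gauss(TRUE_EFFECT, 1) for _ in range(n))
    return est, n

studies = [run_study(random.choice([10, 10, 10, 40, 160])) for _ in range(3000)]

# Selective reporting: large studies always appear in print; small ones
# only make it when the estimate clears a threshold.
published = [(est, n) for est, n in studies if n >= 40 or est > 0.3]

small = [est for est, n in published if n == 10]
large = [est for est, n in published if n == 160]
print(f"mean published estimate, n = 10 : {statistics.mean(small):.2f}")  # inflated
print(f"mean published estimate, n = 160: {statistics.mean(large):.2f}")  # near 0.2
```

The large studies average close to the true 0.2, while the surviving small studies average well above it: the "funnel" loses its lower-left corner, and a naive average across the published literature overstates the effect.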
Anyway, just thought I'd share.
(... here's where I'm supposed to say "thoughts?")


