False data?


gstrub

We have a "journal club" for teaching how to critically review scientific literature. More than once, the old "I just don't believe this data," "how could the reviewers have accepted this," or "they probably picked this blot because it was the only one that worked nicely" gets dropped. So I was wondering: how pervasive do you think falsified data is within the scientific literature? How do journals evaluate data for possible fraud? I heard a story once about a guy using Photoshop to alter his images... I mean, really?? I understand it can get frustrating when nothing works, but seriously! Any thoughts?
 
I know some journals require you to submit high-resolution pictures so they can check whether there's any photoshopping.
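For a sense of what that kind of automated screening could look like, here is a minimal sketch, not any journal's actual pipeline: it flags exactly duplicated blocks within a single image, a crude sign of a copy-pasted band or lane. The filename, the block size, and the Pillow/numpy dependencies are all assumptions for illustration; real forensic tools use overlapping blocks and tolerate compression noise.

```python
# Toy duplicate-block screen for a figure image (assumes Pillow and numpy).
# Only catches verbatim repeats that line up on a fixed grid -- a
# simplification; real copy-move detection is far more sophisticated.
from collections import defaultdict

import numpy as np
from PIL import Image

def find_duplicate_blocks(path, block=16):
    """Return groups of (row, col) offsets whose pixel blocks match exactly."""
    img = np.asarray(Image.open(path).convert("L"))  # grayscale
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block]
            if tile.std() < 2:                 # skip flat background regions
                continue
            seen[tile.tobytes()].append((y, x))  # key on the raw bytes
    # Any key with more than one location is a verbatim repeat.
    return [locs for locs in seen.values() if len(locs) > 1]

if __name__ == "__main__":
    # "figure1_blot.png" is a hypothetical filename for illustration.
    for locs in find_duplicate_blocks("figure1_blot.png"):
        print("identical blocks at:", locs)
```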
 
I think it's more common than we can possibly imagine. There is just so much pressure to obtain results and publish that sometimes people act stupid.

I actually had an experience fairly early during grad school where a post doc told me (in front of another student, so I had a witness!) to lie and say that I had results of some spectra of my compounds when I hadn't taken the spectra. The other student and I just looked at each other in shock like, "Did we just hear what we thought we heard?" Not only did I not falsify the spectral data on my own compounds, but I even took spectra on all of the post doc's compounds that the post doc had supposedly "lost." Nothing ultimately happened because I had no concrete proof that the post doc had falsified the original spectra. But I have no doubt whatsoever that this person did falsify the spectra and would have submitted the false data to the journal, almost certainly undetected. It really made me aware of the potential for falsification and how easy it is to get away with it.
 
I heard a story once about a guy using Photoshop to alter his images... I mean, really??

Haha, there was one case recently (within the past few years) that made international news. A Korean scientist got a paper into what I think was Science (maybe Nature; regardless, a HIGH impact journal) about human cloning, and his results were reported all over the world. He was hailed as a hero in Korea, and then BAM, they found out it was all a scam and that he had photoshopped it all. Anyway, you are supposed to write the paper so that someone else can redo the experiment and get the same results, and if someone is trying to build off your work, they generally will redo your experiment.

However, realize that you don't publish failed experiments. Especially in industry: if you run 20 trials and 19 fail and 1 succeeds, you advertise your one success (it could be an anomaly, but hey, it was a successful trial).
 
Sometimes when I think about science I get very jaded... We need statistics to show that our results are significant; that's pretty shady in and of itself. I personally do binding assays, and working with kinetics isn't the most reproducible thing in the world...
 
So I was wondering: how pervasive do you think falsified data is within the scientific literature? How do journals evaluate data for possible fraud? ...
Several years ago, a few MIT students wrote a text-generating program that created real-sounding computer science papers. They submitted them to conferences, and one was accepted.
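For flavor, that generator (SCIgen) worked by randomly expanding a hand-written context-free grammar until only words remained. Here's a toy sketch of the mechanism; the four-rule grammar is made up for illustration, standing in for the real tool's much larger rule set.

```python
# Toy SCIgen-style generator: recursively expand a context-free grammar,
# picking a random production at each nonterminal.
import random

GRAMMAR = {
    "SENTENCE": [["We", "VERB", "that", "NOUN", "is", "ADJ", "."],
                 ["Our", "NOUN", "VERB", "ADJ", "results", "."]],
    "VERB": [["demonstrate"], ["argue"], ["confirm"]],
    "NOUN": [["the", "algorithm"], ["redundancy"], ["the", "framework"]],
    "ADJ": [["optimal"], ["robust"], ["scalable"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:            # terminal word: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("SENTENCE")))
# e.g. "We confirm that redundancy is scalable ." -- grammatical-ish nonsense
```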
 
That is the most awesome thing I have heard in a long time, hahaha!
 
Sometimes when I think about science I get very jaded... We need statistics to show that our results are significant; that's pretty shady in and of itself. I personally do binding assays, and working with kinetics isn't the most reproducible thing in the world...
How are statistics shady? They are a tool to differentiate one set of things from another in a quantifiable way. Like any tool, they can be abused, but how else could we go about things?

On stats reform, I point you to this article, which I thought was pretty good. The idea of estimating the likelihood of your findings pre- and post-study seems to me to be a good one. It either ties the study to good science or warns you that the study is junk.
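To make the pre-/post-study idea concrete, here's a small worked sketch (the numbers are illustrative, not from the article): treat the pre-study probability that the hypothesis is true as a prior, then update it with the test's power and false-positive rate via Bayes' rule.

```python
# Post-study probability that a hypothesis is true, given a "significant"
# result, from a pre-study prior plus the test's power and alpha.
def post_study_probability(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | significant result), via Bayes' rule."""
    true_pos = power * prior            # real effects correctly detected
    false_pos = alpha * (1 - prior)     # null effects called significant
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"pre-study P = {prior:.2f} -> post-study P = "
          f"{post_study_probability(prior):.2f}")
# A long-shot hypothesis (prior 0.01) is still probably false even after
# p < 0.05: the post-study probability comes out around 0.14.
```

This is the same arithmetic behind the Ioannidis PLoS paper cited later in the thread: when the field tests mostly long-shot hypotheses, most "significant" findings are false positives.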
 
Also, there are very few journals (almost none) that publish negative results. It makes you wonder how many experiments are needlessly repeated because other researchers didn't report their negative results.
 
Because statistics aren't always applicable to things like what seraph mentioned.

Every good statistics teacher/professor I've had has been quick to warn against relying on statistics in every situation.


As far as falsified data, do any of you remember high school?

Remember in chemistry labs where half the class would falsify data in order to not get a poor grade?

That happens a lot. I tend to read journals with a salt shaker handy.
 
As far as falsified data, do any of you remember high school? Remember in chemistry labs where half the class would falsify data in order to not get a poor grade? ...


That happens in college too...
 
This is also why double-blind experimentation exists. Experimenters have a natural tendency to want their hypothesis to be right (that's why they hypothesized it in the first place), so they will often take inconclusive evidence as supporting their point. I wouldn't say many people falsify data in the sense of making it up; I think it's more that people massage their data to make it come out right a lot more than they should.

But I don't really see the point in falsifying data. I understand it's a competitive environment, but so much of it is about building a reputation. If you make up data, you are putting a lot at risk.
 
But I don't really see the point in falsifying data. ... If you make up data, you are putting a lot at risk.

$$$
 
It's an interesting question that is receiving more and more attention. I recently heard a scientist and editor named John Ioannidis, speaking at my institution, make the point that most scientific publications are wrong. He was controversial, for sure, but he had a point. His arguments can be found in a PLoS paper:

http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pmed.0020124

Second, my professor visits with high-profile scientists when they come around to give talks, and my mentor and one of these scientists shared their view of the scientific literature. This visiting professor sorted the papers he read into three piles, "complete crap" and "crap," with just a handful going into a final category that he held in high regard. I think the guy has a point. A bit arrogant, yes. And it takes significant experience to know when to call bull****.
 
Also, there are very few journals (almost none) that publish negative results. It makes you wonder how many experiments are needlessly repeated because other researchers didn't report their negative results.

I've also found this to be true. A negative result can say quite a bit. It's been annoying because some of my work has yielded negative results, and therefore has been pushed aside as "dud experiments."

Sorta sucks when you put lots of time and energy into something and think through its implications, but no one cares because there's no " * " over one of the bar graphs.
 
...some of my work has yielded negative results, and therefore has been pushed aside as "dud experiments." ...

I don't know about your field, but in mine, if your experiment doesn't work out, you figure out why, and then you can move to the next step.
 
Is "there's no effect" a possible reason why?

One of the grad students in a primate behavior lab I worked in during undergrad got kicked out for falsifying data. Apparently our PI noticed that her data had somehow changed in the past week even though she had not actually been in the primate cage collecting. Stupid.
 
Hmm, I take it back. I suppose I can see how a number of things could be useless if they didn't work out (pharma drug testing, gene knockouts, etc.). My work just happens to be VERY quantitative, in that any information we get is useful even if it doesn't support the original hypothesis.
 
There is a journal out there called The Journal of Negative Results. Of course, it probably isn't the most prestigious of all places to publish, but it at least exists.
 
Omg, that's awesome! I could have had like 50 first author papers by now! 😛
 
I've also found this to be true. A negative result can say quite a bit. It's been annoying because some of my work has yielded negative results, and therefore has been pushed aside as "dud experiments." ..
There are some important negative results, and these do get published. Frequently there will be large multicenter clinical studies (I can't think of any at this time; ask Q, 'cause she'd know) where the drug examined has no effect. And that is something important, especially if the drug was expected to change practice guidelines or, like Vioxx, turns out to make outcomes worse.

Actually, hormone replacement therapy is another. Those sorts of things have got to get out, because they will change how people practice medicine. Or science, or what have you.

...Sorta sucks when you put lots of time and energy into something and think through its implications, but no one cares because there's no " * " over one of the bar graphs.
Yes, that sensation always blows. I think a lot of the papers we see, the ones where one minor effect is seen, and certain retrospectives, are ones where the authors put a lot of effort into the study but didn't get results; they're trying to salvage something for their efforts. And you can argue that the "no negative studies" policy is in reality a driving force for more weak studies.
 