More Social Science Fraud, in SCIENCE


DynamicDidactic

I remember hearing all about this study. My reconstructed and completely untrustworthy memory tells me that I was skeptical back then.

http://www.vox.com/2015/5/20/8630535/same-sex-marriage-study

http://retractionwatch.com/2015/05/...riage-after-colleague-admits-data-were-faked/

This one is particularly flagrant and brazen. Enjoy the details!

 
The data were posted publicly. He has a master's in statistics, and he did the falsification pretty elaborately.
 
I was under the impression that the raw data weren't made available. Researchers refusing to release their raw data has been a problem in the past. There was a recent-ish study that looked at problems with replication and researchers' refusal to provide data; it was pretty shocking. Requiring data sharing wouldn't eliminate all fraud, but it may help, or at least dissuade some of it, as well as keep poorly analyzed data out of journals.
 
The dataset is here: https://www.openicpsr.org/repoEntity/show/24342. You have to apply to access it, so I don't know what it looks like. I'm unsure how you could get the info in their report (test-retest reliability, response distributions, etc.) without accessing the raw data, though.
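To be concrete about why: test-retest reliability is computed from each respondent's answers across waves, so summary tables aren't enough. A minimal sketch of the point, with a hypothetical filename and column names, assuming the panel were a simple one-row-per-respondent CSV:

```python
# Minimal sketch (hypothetical file and column names): test-retest reliability
# is just the correlation between each respondent's wave-1 and wave-2 answers,
# something you can't recover from summary tables without the raw panel data.
import pandas as pd

df = pd.read_csv("lacour_panel.csv")  # hypothetical filename
r = df["feeling_therm_w1"].corr(df["feeling_therm_w2"])  # Pearson r across waves
print(f"Test-retest reliability: r = {r:.2f}")
```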
 
Just saw this yesterday. I'm of the opinion that, along with your submission to a journal, you should also be required to submit the data for the analyses in that article.

I do think this would help. However, even raw data is quite easy to fake. A lot of the papers that get caught have been fairly unsophisticated attempts, but if we're just talking spreadsheets, any statistician worth their salt could work backward from a desired result and run simulations until they obtain the desired effects with a realistic sample (see the rough sketch after this post). Furthermore, given the pretty much endless ways to muck around with data, I always wonder how often even two people analyzing the same raw data would arrive at the exact same answer. With journals pushing for briefer and briefer papers, not all of these details make it into the methods section, and substantial variability is possible. This goes double the more complex the analysis. T-test comparing two single-item measures? Sure, that we can do. Random effects model of post-processed ERP data that underwent a spatiotemporal PCA following ICA blink correction? I'm not convinced I could replicate my own analyses if I didn't save the scripts (which, while we're on the subject, should probably be included too).

I think more to the point, though, is: who is actually going to check these things? Journals have enough trouble getting qualified reviewers a lot of the time, and re-analyzing data can take hours upon hours (if not days).

I'm by no means disagreeing that this would help, and I fully agree that it needs to happen. However, I think it's important to recognize exactly how much we need to do if we are going to effectively check work. Requiring posting of datasets is a step in the right direction... but still a VERY small one in the grand scheme of things.
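For what it's worth, here's roughly what I mean by working backward from a desired result. Purely illustrative, not anyone's actual method: it assumes a simple two-group design, a target Cohen's d, and numpy/scipy on hand, and just re-simulates until the numbers come out "right."

```python
# Purely illustrative: fabricate a "realistic" two-group dataset by
# re-simulating until it shows the desired effect. Not anyone's actual method.
import numpy as np
from scipy import stats

target_d, n = 0.45, 120  # desired effect size and per-group sample size

for seed in range(10_000):
    rng = np.random.default_rng(seed)
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(target_d, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    if p < 0.05 and abs(d - target_d) < 0.02:
        break  # stop once the simulated sample matches the target effect

print(f"seed={seed}, t={t:.2f}, p={p:.4f}, d={d:.2f}")
# Dump `control` and `treatment` to a spreadsheet and it looks like raw data.
```

The point isn't the recipe; it's that a plausible-looking spreadsheet is a few lines of code, so posting data helps mainly with sloppy fraud and honest analytic errors, not determined fabrication.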
 

This is what Stapel did. He created entire data sets that yielded "accurate" results. All the data was just made up, not actually collected (frankly, that sounds like more work than actually running the study, massive ethical issues aside).
 
Interesting interview with the first author, in which he denies that the data were faked. http://www.nytimes.com/2015/05/30/science/michael-lacour-gay-marriage-science-study-retraction.html

And here's the first author's full rebuttal of any data issues (which has been put in galley proofs...?): https://www.dropbox.com/s/zqfcmlkzjuqe807/LaCour_Response_05-29-2015.pdf?dl=0
This guy is a real piece of work. More on his response:
http://nymag.com/scienceofus/2015/05/strangest-thing-about-lacours-response.html
http://www.latimes.com/science/sciencenow/la-sci-sn-retraction-response-20150529-story.html

He is going to be the poster child for unethical behavior in research.
 
This retractionwatch post has some interesting tweets, including some from a former collaborator on the study: http://retractionwatch.com/2015/05/...of-retracted-gay-canvassing-study/#more-28742
 
I actually like Buzzfeed's reporting a lot. They broke the story about Denny Hastert being indicted a few days ago.
 

Yeah, some of their actual (non-listicle) articles are quite good.
 
And apparently, LaCour may have faked data in a study that is currently under review (and possibly his other published one, depending on where that data came from): http://polisci.emory.edu/faculty/gjmart2/papers/lacour_2014_comment.pdf

(This link originally came from a Buzzfeed article, of all things. Who knew that Buzzfeed published stuff with actual paragraphs? ;) http://www.buzzfeed.com/virginiahughes/michael-lacour-apparently-faked-another-study-about-media-bi)

:wow: And to think I did not realize it could get worse for him.
 