More Psych Science Fraud

It seems as if he was caught a little while back, since he went from lecturer or grad student to working as a server. Not really a big deal. Let's fix the real problem rather than sensationalizing the petty stuff. We as a country tie promotions and material gain to results, and when that happens, there will always be those who cheat to prosper. This is especially true in a system that willingly trains many more people than there are jobs for upon completion, and where debt becomes an issue.
 
Can you imagine being one of his fellow grad students, or his PI, and seeing all your real, hard work just go down the drain... I would be so mad
 
Sad and very frustrating. Todd has put out so many excellent studies, and for this to happen is a blow to a distinguished career. All it takes is one person to ruin multiple careers. To me, those who engage in these activities are arrogant enough to think they will never get caught, but also lazy in a way, as it is easier to exaggerate effects than to run more subjects, reprocess, and rewrite the paper accurately. Sorry for the rant, but I'm tired of this BS that taints all of us.
 
(in response to LucidMind)

True... but it is a necessary evil to protect the field. I have significant concerns about the tendency to only publish "positive" results, which I think plays at least part of a role in these types of fabrications. These stories definitely drag down the field and many good people who didn't know about the data manipulation.
 
Hopefully it won't be held against Braver as he's the one who actually caught and reported it.

But, yes, one person's stupid behavior definitely impacts many others in this situation.
 
Makes me more bitter about the peer review process because it isn't catching this stuff, and really, isn't that its biggest purpose?
 
It sometimes feels more like a popularity contest.
 
The problem is that the peer review process relies on people and has little standardized procedure. It is hard to catch data manipulation unless someone does what Braver did and reviews all of the data in detail. Many supervisors would not bother to get that involved in reviewing a dissertation or paper; it is simply too time-consuming.
 
I firmly believe the peer review process is a very lazy, sloppy form of quality control for the scientific community, and I have felt that way for a long time. I don't think I'm alone in noticing that some of the best-known folks I've worked with over the years (ridiculous numbers of grants, publications in the best places, etc.) have unquestionably done the worst work. Truthfully, peer review is a review of the writing more so than the science. Sure, we can watch out for the big things (e.g., no control group or a poor one), point out confounds that may have been missed or alternative interpretations, etc. That says little about the quality of the science, though.

I'm convinced the things we ignore are the things that matter. I'm about to submit a manuscript that used an extremely well-known measure (cited > 2000 times) in a standard way. The distribution of the data is such that even statistics billed as "robust to departures" are pretty tough to justify using. Yet I've never seen anyone do anything else, and I doubt our sample is that unusual. So in those 2000 papers, either no one ever looked, or they did, figured they wouldn't be able to publish, and swept it under the rug. Similarly, nearly every study has SOME issue with it. Equipment breaks, an RA screws up a couple of sessions, physio data is noisy and has to get dropped, etc. Yet you'd never know that from most manuscripts (and in fact, if you are honest about such things, your papers get rejected). Either I've happened to find the only labs in the country that aren't run perfectly, or everyone else just covers it up.
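
Just to make the distributional point concrete, here is a minimal sketch (Python, made-up data; the groups and the measure are hypothetical, not from any actual study) of the kind of check that rarely shows up in a methods section:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical scores on the measure for two groups; heavily right-skewed on purpose.
group_a = rng.lognormal(mean=1.0, sigma=0.9, size=60)
group_b = rng.lognormal(mean=1.2, sigma=0.9, size=60)

# Look at the shape of the data instead of assuming it away.
print("skew A:", stats.skew(group_a), " skew B:", stats.skew(group_b))
print("Shapiro-Wilk p, A:", stats.shapiro(group_a).pvalue, " B:", stats.shapiro(group_b).pvalue)

# Compare the default parametric test with a rank-based alternative that
# does not lean on normality.
print("Welch t-test p:", stats.ttest_ind(group_a, group_b, equal_var=False).pvalue)
print("Mann-Whitney U p:", stats.mannwhitneyu(group_a, group_b).pvalue)

None of that is hard to run; the point is that almost nobody reports it.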

I genuinely believe we need a dramatic reform of the system and hope to contribute to that as I progress in my career. It's not an easy issue, though, because most of these things "can't" be caught unless you actually see the raw data (and oftentimes, not even then). With more advanced analyses (HLM, SEM, etc.), I'm always left wondering how many published papers rest on someone writing their syntax wrong. Reviewers aren't going to say "send me your data files and syntax so I can check your numbers," though - in part due to time constraints, and in part due to social factors.

I actually DON'T worry about cases like this guy. These are extreme, likely quite rare, and not deemed acceptable by the field. I worry much more about the faculty member writing a paper based on data collected by the RA who incorrectly explained a computer task to several participants, and a project coordinator who took it upon themselves to "impute" a couple of data points on an interview because they forgot to write them down, data that was then analyzed by the new grad student who forgot to include one of the lower-order terms in the interaction model, before being passed off to the post-doc who wrote the methods section not realizing that the computer task itself was programmed wrong by the investigator they borrowed it from and didn't work the way they thought it did. THAT is the stuff that scares me. Most PIs are not in a position to catch many of those errors (let alone the peer reviewers).
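
To pick on just one link in that chain, the "forgot the lower-order terms" mistake is easy to demonstrate with a toy example (Python again, simulated data, nothing to do with any real study):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
# Two predictors with non-zero means and NO true interaction between them.
df = pd.DataFrame({"x": rng.normal(loc=2, size=n), "z": rng.normal(loc=2, size=n)})
df["y"] = 0.8 * df["x"] + 0.8 * df["z"] + rng.normal(size=n)

full = smf.ols("y ~ x * z", data=df).fit()    # x + z + x:z -- the correct specification
broken = smf.ols("y ~ x : z", data=df).fit()  # product term only, main effects omitted

print(full.params["x:z"], full.pvalues["x:z"])      # close to zero, as it should be
print(broken.params["x:z"], broken.pvalues["x:z"])  # large and wildly "significant",
                                                    # because the product term is
                                                    # absorbing the omitted main effects

Nothing in a written-up results section would tip a reviewer off that the second model was the one that actually got run.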
 
Replication. Over time the dust settles.
 
But no journal wants to publish replication studies. Or you attempt to replicate a study, find contrasting results, and the person whose idea you are arguing against pans your manuscript during the review process and you get a rejection. 🙄

Ollie, agreed with everything you said. Perhaps it's you and I that have been in labs that aren't perfect.
 
Bingo, Ollie hit the nail on the head. Truly, the best way to fight such corruption is to begin publishing articles that replicate findings and articles that have no significant findings. However, that is a tough sell. Really, we are the tip of the iceberg. How many times do you think a drug company runs a study to get one significant result? I don't know... and neither does anyone not on their payroll. Think about that the next time you reach for a pill.
 
Yep, the current publishing system does NOT encourage replication (this is true in the hard sciences as well as psychology, hence the fear of getting "scooped").

I don't know how peer review would have caught this, though--there's no way for a peer reviewer to tell if the numbers in an article are "real." FWIW, I do know of one journal that requests your actual data set when you submit, although I imagine that this practice could also create IRB problems.
 
I agree that journals are unlikely to publish pure replication studies. An alternative way to go about it is to replicate a design or certain aspects of a study and add a novel element to it. I've done this myself twice. It's not ideal - I agree that pure replication should be encouraged - but it can circumvent some of the obstacles while adding to the literature.
 
Agreed--replication and extension tends to go over much better, although I also agree that journals really should be more supportive of pure replication studies.
 
Ironically, I read that one of the retracted studies was at least partially replicated by another research team, and the results were consistent with those of the retracted study. What's really odd to me is that it sounds like the grad student already had decent data and just committed fraud to make it better, which makes the whole thing all the more WTF? to me.
 
Again, this is why I have long stated that trying to publish empirical data/articles was probably one of the most unrewarding experiences I have ever had.

Once I got through the process (which by that point I had already concluded had a poor ROI given the work and time involved), I was left with an article that would make little impact, get buried under the thousands of other articles published on the general topic that year, and that very few people would actually read (it's true, folks). Ra-Ra!
 
In neuropsychology at least, I have found that the research does inform practice - particularly the psychometric studies examining score base rates among clinical populations. In addition, the studies do build on one another - over time you can see how neuropsychological constructs vary and covary among different populations, how they impact outcome, and how they tie in with neural circuitry. It's a very exciting field to be part of.

From a purely self-interested perspective, publishing is a real notch for your career. If you can do it with notable people, it ties you to them, which can be really helpful for career building. And while this is certainly a bit elitist, there is an underlying degree of respect that well-published neuropsychologists share which pure practitioners do not enjoy. Granted, not everyone wants or cares about this, and I do think the actual benefits may not be worth all the backscratching... but it has sometimes opened doors for me, and there are some financial benefits as well as lifestyle ones, such as academic freedom.
 
Actually, I was wrong about the journal I mentioned earlier--they request your full analysis output, but not your actual data set.
 
A lot of what I do is program development/intervention research, and to quote one of my old supervisors, "If you don't publish, it doesn't exist." Without publishing, people will have no idea that you tried something or what your program is like and won't be able to replicate or expand on that. So, that's one reason for publishing. I also got feedback on a recent non-intervention presentation along the lines of "Thank you for doing this research. We see this issue in clinical work frequently but there's very little acknowledgement of it in the literature." Fundamentally, research is a way to communicate ideas and results (although paywalls do hamper this, admittedly).

That said, I do agree it can be a very frustrating and very Sisyphean process and that many (most?) articles may never be cited.
 
I think, as usual, the interwebs are a little too gloom-and-doom. I agree that there are problems with the peer review process and also dishonesty/fudging in research, but there is no evidence that this is a widespread problem; more likely, only a small percentage of people are doing something really wrong.

I also agree that there are a lot of labs and prolific academics that are probably producing a lot of crap work due to data mining, massaging, and carelessness.

On the other hand, there is a movement to address these issues and deal with the areas that the peer review process cannot.

First of all, there is this website:
http://psychfiledrawer.org/
It is exactly what we need: one could replicate studies and submit them without worrying about journal editors' bias toward novel results.

Second, there is a movement to sniff out the data fudgers,
http://www.theatlantic.com/magazine/archive/2012/12/the-data-vigilante/309172/

And finally, it is now much easier to include full data sets and analyses online with your publication. As an example, the new APA open access journal requires your data.
http://www.apa.org/science/about/psa/2012/09/access-journal.aspx

I hope in the future these will become the standard for publication.
 
You've got to give those details, when you catch them, when you publish a paper.

Oh, I agree, and I think even most of the PIs in those labs would absolutely agree with this (often vociferously - these are some of the top researchers in their respective fields!). The key is "when you catch them," and we're working within a system that seems almost deliberately designed to encourage scientists to cover their eyes and ears so they AVOID catching mistakes. That said, given my career goals, I certainly don't think it's hopeless. It's frustrating, but I think we can change it.
 
I *really* can't see most IRBs being okay with people turning over actual data sets to journals. Even if you remove explicitly identifying information, there's always the argument that certain combinations of variables could be identifying. In many cases, I think that's a stretch, but it is an argument IRBs make, and so I can't see them agreeing to having full data sets go to journals.
 
The journal Decision Making requires your data set. My IRB said it was okay.
 
For every federal grant, you are required to note that you are willing to share your data with others. I really don't see why the IRB would care so long as the dataset is deidentified. If it is a small dataset, perhaps that could be an issue. But I have even heard of raw data sharing.

I think it is a good thing that there are journals with reviewers willing to take a look at the data (in case people did their analyses wrong). But it usually isn't going to solve the problem of falsified data.
 
Idk. I've seen IRBs have issues with people emailing deidentified data sets to collaborators, for example. I agree it's over the top in most cases, but it happens.
 
You have a particularly miserly IRB. But if there are issues, say with a small data set, variables like age and gender can be left out (if they aren't important to the analysis) to help mask identities.
 
It's a challenge. There are 18 HIPAA identifiers. The VA even counts the date of a visit as identifying, even in the absence of any other data.

Yeah, I don't think everyone realizes what a pain it is to actually de-identify a dataset "by the book." But if you do it correctly (which I am sure many don't, without their IRB's knowledge), then there is no problem.
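
For anyone who hasn't had to do it, here's a rough sketch of what "by the book" starts to look like (pandas, hypothetical column names, loosely following the Safe Harbor rules - not a substitute for your own IRB's requirements):

import pandas as pd

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=["name", "mrn", "email", "phone"])           # direct identifiers
    out["visit_year"] = pd.to_datetime(out.pop("visit_date")).dt.year  # keep only the year of dates
    out["zip3"] = out.pop("zip").astype(str).str[:3]                   # coarsen geography (with caveats for small areas)
    out.loc[out["age"] > 89, "age"] = 90                               # collapse ages over 89 into one 90+ bucket
    return out.sample(frac=1, random_state=0).reset_index(drop=True)   # break the link to collection order

And that still leaves the judgment calls about rare combinations of variables that could identify someone in a small sample.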
 
Saw this today...
http://www.psychologicalscience.org/index.php/publications/observer/obsonline/aps-journal-seeks-labs-to-participate-in-first-registered-replication-report-project.html
 
Bumping this up because I happened to look at Savine's LinkedIn and noticed that he's getting a 2nd bachelor's in nursing--where he works as an RA and has won research awards. Huh.

It also reminded me of the most baffling thing about this situation--that his research was legitimately good (some of the retracted research has been replicated) and yet he doctored and faked data anyway. It's just bizarre.

ETA: On the broader topic of fraud, I'd highly, highly recommend this fascinating article on Diederik Stapel and the social psych fraud he committed: http://www.nytimes.com/2013/04/28/m...cious-academic-fraud.html?pagewanted=all&_r=0

I find this quote to be a fascinating example of cognitive dissonance:

"And yet as part of a graduate seminar he taught on research ethics, Stapel would ask his students to dig back into their own research and look for things that might have been unethical. “They got back with terrible lapses,” he told me. “No informed consent, no debriefing of subjects, then of course in data analysis, looking only at some data and not all the data.” He didn’t see the same problems in his own work, he said, because there were no real data to contend with."
 
He should move to California. He may make more with a BSN than he ever would as a psychologist. Sad, really.
 
Shortcuts are tempting, especially when the system relies on self-oversight.
 
I always wonder, with the "publish or perish" mindset, how often data is actually manipulated. Like you said, it's probably extremely tempting. It's something we might never know.
 
In my experience, data is often mistreated, but not as blatantly as in the cases above. Again, anecdotally, this seems more likely in social psych than in treatment-outcome research (especially since the creation of clinicaltrials.gov).
 