A top Cornell food researcher has had 13 studies retracted. That’s a lot.
Sent from my SM-G950U using SDN mobile
"A 2012 survey of 2,000 psychologists found p-hacking tactics were commonplace. Fifty percent admitted to only reporting studies that panned out (ignoring data that was inconclusive). Around 20 percent admitted to stopping data collection after they got the result they were hoping for. Most of the respondents thought their actions were defensible. Many thought p-hacking was a way to find the real signal in all the noise."
No bueno.
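For anyone curious why the optional stopping mentioned in that survey matters, here's a quick simulation (purely illustrative, stdlib Python, all numbers made up): collect data in batches, peek at the p-value after each batch, and stop the moment it dips below .05. Even with no effect at all, the "significant" rate blows well past the nominal 5%.

```python
import math
import random

random.seed(1)

def p_value(xs):
    # Two-sided z-test of mu = 0 with known sigma = 1.
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def peeking_trial(batches=10, batch_size=10):
    # Null is true: every observation is pure N(0, 1) noise.
    xs = []
    for _ in range(batches):
        xs += [random.gauss(0, 1) for _ in range(batch_size)]
        if p_value(xs) < 0.05:
            return True   # "significant" -- stop collecting and write it up
    return False

def fixed_trial(n=100):
    # Same total sample size, but the stopping rule is fixed in advance.
    xs = [random.gauss(0, 1) for _ in range(n)]
    return p_value(xs) < 0.05

trials = 5000
fixed = sum(fixed_trial() for _ in range(trials)) / trials
peek = sum(peeking_trial() for _ in range(trials)) / trials
print(f"fixed-n false-positive rate: {fixed:.3f}")  # stays near the nominal .05
print(f"peek-and-stop rate:          {peek:.3f}")   # substantially higher
```

Ten peeks at the same growing dataset and the error rate roughly triples, which is exactly why "stopping data collection after they got the result they were hoping for" is a problem.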
Wow. I hear it happens but I am honestly shocked to hear that.

Unfortunately, I witnessed this kind of behavior as a grad student by other students (and tacitly sanctioned by the professor). The research assistant of the prof would plop down a big 'ole intercorrelation matrix, they'd note the 'significant' correlations, then come up with post hoc 'hypotheses' (really, post hoc explanations) and write the papers/posters as if they'd predicted the correlations from theory. It's one thing that really turned me off to academia as a career choice.

I used to joke about them 'Bonferroni-ing around' with the data (my phrase 🙂 ) and 'harvesting asterisks' (a phrase borrowed from Paul Meehl). I later studied quite a bit in the philosophy of science proper and it bothered me even more as time went on. It also made me inherently skeptical of the presumed 'sanctity' of empirical research findings from the literature.
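That intercorrelation-matrix fishing is easy to quantify. An illustrative stdlib-Python sketch (all data invented): generate 20 pure-noise variables for 50 "subjects" and count how many of the 190 pairwise correlations come up "significant" at p < .05.

```python
import itertools
import math
import random

random.seed(2)

n_subjects, n_vars = 50, 20
# Pure noise: every variable is independent N(0, 1), so any asterisk is spurious.
data = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_vars)]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def p_from_r(r, n):
    # Fisher z-transform: atanh(r) * sqrt(n - 3) is ~N(0, 1) under the null.
    z = math.atanh(r) * math.sqrt(n - 3)
    return math.erfc(abs(z) / math.sqrt(2))

pairs = list(itertools.combinations(range(n_vars), 2))
hits = [(i, j) for i, j in pairs
        if p_from_r(pearson_r(data[i], data[j]), n_subjects) < 0.05]

print(f"{len(pairs)} correlations tested, {len(hits)} 'significant' at p < .05")
# On average about 5% of the 190 tests (roughly 9-10) hit from noise alone.
```

Run it a few times with different seeds: there are always asterisks to harvest, which is why post hoc "hypotheses" spun from a big matrix look so convincing on paper.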
Wow. I hear it happens but I am honestly shocked to hear that.
I think that increasing dependence on the project-based funding model makes it more tempting to do shoddy if not fraudulent work.
This. x100000. Ideas have been tossed around for doing "lab based" funding, which I think would solve a lot of issues if it was done right, though I'm not convinced the present group would do so.
I will also say that while I am absolutely an advocate for improving what we do, I worry that sooooo much of the "compliance" stuff is misguided. Preregistration helps for clinical trials and small lab-based studies, but it has the potential to kill innovation in areas where it is just untenable (e.g., large epi studies that might generate 400-500 discrete papers). Open data assumes the data posted are accurate; my biggest concern is all the errors that go into collecting the data in the first place, which doesn't seem to be part of the conversation.

Mostly, my concern is who pays for all these things. I'm happy to pre-register my trials, post my datasets, and share my code. I cannot and will not dedicate my time to doing so unless you agree to reduce my output expectations accordingly, especially given I'm also now expected to draft and sign memos every time someone misses an item on a questionnaire (this is seriously a thing we have to do here). You want more done? Do it/pay for it yourself or STFU.
I agree somewhat. I actually like the idea of pre-registration for most things. But I also think post-hoc analyses are important for exploration and innovation. But, post-hoc analyses should be clearly labeled as such. Far too often someone just p-hacks a large dataset and writes up "positive" findings as if that's what they set out to do in the first place. The data is still important in some contexts, but it should just be taken in the context of the analysis, which should then guide future confirmatory studies.
You sound a lot like my old professor who I spoke with the other day. He is very jaded by academia and basically said he just wants to tune out, collect a paycheck, and do hobbies he enjoys. I'm not from a PhD program; I'm from a small PsyD program. What he basically said is that because the PsyD program doesn't make enough money for the school, the faculty are expected to do it all for less money. This obviously includes classes, research, chairing dissertations, publishing, etc. The chair of our department doesn't want to increase the number of students we accept due to the integrity of the program, but the university wants more money. I noticed a similar problem at the university associated with my internship. I never wanted to go into academia, but what I've learned in the past couple of years has made me pay attention to job satisfaction for academic psychologists.

Pre-registration certainly has a role to play - I've become quite jaded about the state of academia, so I hope I didn't come across as completely diminishing its importance. I do think it is the proverbial band-aid on the broken limb. Some random related thoughts:
1) Time (see above)
2) The absurdity of the idea that we CAN pre-specify everything we will do. This came out of clinical trials that have "duh" outcome measures with reasonably known distributions (e.g., dead vs. not dead). It works fairly well there. Simple social psych vignette studies with 1-2 outcomes...makes sense. The kind of stuff I and most of my colleagues are doing? I have no earthly idea what the spatial distribution of GPS contact with certain environmental features will look like, or what the distribution of neural connectivity indices on a novel MRI task will be. I can't specify every nuance of my analytic plan because we quite literally may have to invent new analyses. Currently pre-registration does an extremely poor job of addressing these things. So it's become a game of trying to write things vaguely to give myself the freedom to make well-informed decisions later.
3) We seem locked into pre-registration without acknowledging the multitude of other well-established means of ethically conducting these types of analysis. Machine learning is literally an entire field dedicated to it. Cross-validation techniques, etc. - tons of tools exist that we don't use. It won't work in all cases, but I think we're so focused on shoving everything into a clinical trials framework that we aren't considering other valid options.
4) Why is post-hoc analysis a badge of shame? It's a tool; it's not inherently evil. Yes, label it as post hoc. But when you are reviewing for some crummy IF=2 specialty journal, don't recommend rejecting a paper just because the authors openly said it was post hoc and noted the need for replication in the limitations section.
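The cross-validation point in 3) can be sketched in a few lines (illustrative stdlib Python, invented data): fish for the strongest of 30 noise predictors on a training half, then re-check that "finding" on a held-out half. The exploratory correlation will usually shrink toward zero out of sample, which is the whole safeguard.

```python
import math
import random

random.seed(3)

n, k = 200, 30
# Outcome and predictors are all independent noise: no real signal anywhere.
y = [random.gauss(0, 1) for _ in range(n)]
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def pearson_r(a, b):
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

half = n // 2
train_y, test_y = y[:half], y[half:]

# Exploratory phase: pick whichever predictor correlates best in the training half.
best = max(range(k), key=lambda j: abs(pearson_r(X[j][:half], train_y)))
r_train = pearson_r(X[best][:half], train_y)

# Confirmatory phase: the same predictor, re-checked on the held-out half.
r_test = pearson_r(X[best][half:], test_y)

print(f"best of {k} noise predictors: r = {r_train:+.2f} in train, "
      f"r = {r_test:+.2f} held out")
```

Fishing across 30 variables practically guarantees a respectable-looking training correlation; the holdout half is what exposes it as noise. Same logic as a replication study, just built into a single dataset.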
I don't think it's a badge of shame at all, I just think it should be explicitly stated.