Recent Big Psych Replication Problems

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

DynamicDidactic (Still Kickin'; 10+ Year Member; joined Jul 27, 2010; 1,814 messages; 1,525 reaction score)
Just wanted to throw out some bigger stuff that may influence what we know about human behavior:
Does smiling actually make you happier?

Ummm... how good are those RCTs for EBPs?

Ohhh, the stereotype threat...

I'm glad people are starting to take this seriously. I wish they'd just take it seriously in all fields, as these issues are not unique to psychological research. Look up the replicability of genetics research in psychiatry sometime.
 
BMJ had a big piece in which an author called an RCT for colorectal cancer "the worst RCT ever."
 
From the article:

A small number of ESTs (e.g., both Cognitive Processing Therapy and Prolonged Exposure for PTSD) scored consistently well across all or most metrics, whereas a larger number of ESTs— including a number classified as Strong (e.g., Behavioral Activation for Depression, Cognitive Remediation for Schizophrenia, Dialectical Behavior Therapy for Borderline Personality Disorder)— performed relatively poorly across most or all of our metrics of evidential value.

Well, that's upsetting about DBT but at least CPT and PE did well. My professional identity crisis is averted, at least for now.
 
Well, that's upsetting about DBT but at least CPT and PE did well. My professional identity crisis is averted, at least for now.
To be fair, this only looks at the select RCTs that Division 12 lists on its website, not all RCTs for a given treatment. That said, a lot of the statistical reporting in DBT trials isn't great (I say this as someone who is trained in DBT and researches DBT).
 
I am convinced that many many many studies are flawed.

The insurance companies pressured everyone to provide an Axis I diagnosis. Even the DSM says that adjustment disorders are the most common. So everyone started calling personality disorders and problems of living something else under Axis I. Then we throw that all together in RCTs and wonder why we don't get a response from the lady who is $5 away from homelessness and the narcissist who is hollow inside.
 
I am convinced that many many many studies are flawed.

The insurance companies pressured everyone to provide an Axis I diagnosis. Even the DSM says that adjustment disorders are the most common. So everyone started calling personality disorders and problems of living something else under Axis I. Then we throw that all together in RCTs and wonder why we don't get a response from the lady who is $5 away from homelessness and the narcissist who is hollow inside.

One bit of good news is that some of the 'big dogs' in the field have recognized that the 'protocol for syndrome' approach has many serious issues as a main approach to treatment planning and implementation. One recently published book that summarizes their position (which is mostly to re-focus efforts on ensuring actual 'competencies' in assessment/treatment and implementing transdiagnostic principles of behavior change in context) is Process-Based CBT--The Science and Core Competencies of Cognitive Behavioral Therapy.
 
I am convinced that many many many studies are flawed.

The insurance companies pressured everyone to provide an Axis I diagnosis. Even the DSM says that adjustment disorders are the most common. So everyone started calling personality disorders and problems of living something else under Axis I. Then we throw that all together in RCTs and wonder why we don't get a response from the lady who is $5 away from homelessness and the narcissist who is hollow inside.
I agree with what you say.

Additionally, I'd like to highlight that there is no such thing as a flawless study in psychology or health research. We are always balancing validity and reliability against the pragmatics of studying people.

Consequently, I often run into psychologists questioning the validity of all RCTs/evidence because of this and falling back on clinical judgment, intuition, or the dodo bird effect. My point is that while all studies are flawed, some are much more flawed than others. Similarly, some theories are much more flawed than others. While there is still a lot to clear up, we do have very strong evidence for certain psychological phenomena and certain psychological treatments.

So please, no more energy therapies, acupuncture, or repressed memories (many more I could add) {steps down from soap box}
 
From the article:

A small number of ESTs (e.g., both Cognitive Processing Therapy and Prolonged Exposure for PTSD) scored consistently well across all or most metrics, whereas a larger number of ESTs— including a number classified as Strong (e.g., Behavioral Activation for Depression, Cognitive Remediation for Schizophrenia, Dialectical Behavior Therapy for Borderline Personality Disorder)— performed relatively poorly across most or all of our metrics of evidential value.

Well, that's upsetting about DBT but at least CPT and PE did well. My professional identity crisis is averted, at least for now.

For those who choose to participate and to complete those protocols. I'm finding that the actual 'uptake' (i.e., number of patients who are willing/able to agree to participate in and to complete one of these protocols) is a fairly small percentage (and definitely a numerical minority) in an open-access mental health clinic within the VA system.
 
For those who choose to participate and to complete those protocols. I'm finding that the actual 'uptake' (i.e., number of patients who are willing/able to agree to participate in and to complete one of these protocols) is a fairly small percentage (and definitely a numerical minority) in an open-access mental health clinic within the VA system.
I am fairly certain that the majority of those studies are not done in VAs.

But your greater point isn't new: tertiary intervention is the end of the road. Public health approaches are much more effective.
 
I agree with what you say.

Additionally, I'd like to highlight that there is no such thing as a flawless study in psychology or health research. We are always balancing validity and reliability against the pragmatics of studying people.

Consequently, I often run into psychologists questioning the validity of all RCTs/evidence because of this and falling back on clinical judgment, intuition, or the dodo bird effect. My point is that while all studies are flawed, some are much more flawed than others. Similarly, some theories are much more flawed than others. While there is still a lot to clear up, we do have very strong evidence for certain psychological phenomena and certain psychological treatments.

So please, no more energy therapies, acupuncture, or repressed memories (many more I could add) {steps down from soap box}
So, we had a psychology department meeting yesterday where we carved out some special celebrity time for a presentation by a recently beknighted 'Whole Health Champion' social worker who was touting the just wonderful/powerful 'whole health' activities/therapies they were instituting and--I **** you not--emphasizing what she called 'drum circles' as a means of addressing every psychiatric and medical issue under the sun.

I had to stifle the impulse to ask her if these were 'evidence-based drum circles' or merely 'supportive drum circles.'

Meanwhile, if I employ evidence-based principles of behavior change such as motivational interviewing, behavioral activation, cognitive restructuring, assertiveness training, mindfulness skills--you name it--outside of a 'formal' 'EBT' recipe-based protocol, everyone in mental health administration all of a sudden is a hard-assed penny-pinching critic regarding what I'm doing and whether it's a 'waste of resources.'

Drum-circle therapy. 'Whole Health.'
 
Fun fact, I actually knew and lived in the same residence hall as Dr. Prasad there. He used to sing a very entertaining song about a hunter's unfortunate encounter with a bear in the woods. Haven't kept up with him, glad to see he's doing good work.
 
For those who choose to participate and to complete those protocols. I'm finding that the actual 'uptake' (i.e., number of patients who are willing/able to agree to participate in and to complete one of these protocols) is a fairly small percentage (and definitely a numerical minority) in an open-access mental health clinic within the VA system.

It's not as many as I'd like, but our VA clinic (OPMH) has a fair number of patients who agree to do PE or CPT.
 
It's not as many as I'd like, but our VA clinic (OPMH) has a fair number of patients who agree to do PE or CPT.

My success rate for PE therapy completion among survivors of sexual assault (non-VA) was light years better than getting Vets to engage in it within the VA system for any index trauma.
 
Yeah, I've often felt frustrated by how many VA patients turn down PTSD EBPs when a lot of the general public can't access them despite how much they'd probably benefit from them. Oh well.

Btw, that "smile study" article critique was interesting to me because I frequently cite it when teaching opposite action in DBT. Guess I'll have to rework what I say.

Also, if you read the source paper about EBP studies, it suggests that PE and CPT are way more effective than EMDR. Not surprising, of course, but it's nice to have further evidence.
 
Btw, that "smile study" article critique was interesting to me because I frequently cite it when teaching opposite action in DBT. Guess I'll have to rework what I say.
You and I both. All I can do is... smile. Maybe some willing hands.
 
I had to stifle the impulse to ask her if these were 'evidence-based drum circles' or merely 'supportive drum circles.'
I would have asked. I am not fun around clinicians who practice quackery.
everyone in mental health administration all of a sudden is a hard-assed penny-pinching critic regarding what I'm doing and whether it's a 'waste of resources.'
Can you elaborate? What waste are they talking about?
 
Meanwhile, if I employ evidence-based principles of behavior change such as motivational interviewing, behavioral activation, cognitive restructuring, assertiveness training, mindfulness skills--you name it--outside of a 'formal' 'EBT' recipe-based protocol, everyone in mental health administration all of a sudden is a hard-assed penny-pinching critic regarding what I'm doing and whether it's a 'waste of resources.'

Drum-circle therapy. 'Whole Health.'

This seems backwards. MH admin at my facility were beginning to get nasty with clinicians who were not doing this and were continuing to do generic supportive therapy with goals that were not SMART before I left.
 
This seems backwards. MH admin at my facility were beginning to get nasty with clinicians who were not doing this and were continuing to do generic supportive therapy with goals that were not SMART before I left.

The pendulum is swinging back towards "we don't care what therapy you actually do as long as access is good" IMO. Granted, it also depends on your local administration.
 
I had to save my questions. My main one had to do with whether there was an (even theoretical) maximum number of clients we can be expected to have in our caseload and still manage to complete all the requirements that they are laying out, especially in relation to formal measurement of treatment outcomes via questionnaires (which, of course, I agree with, but like everything else takes extra time), processing all the clinical reminders, authoring 'mental health suite treatment plans' (which they just said they were going to become more critical about and require a higher level of detail), etc., etc.

I currently have well over 100 clients in my psychotherapy caseload and am only doing four 8-hr shifts in the post-deployment clinic per week (the fifth day is TBI clinic / internship / meetings). At our facility, they have us do six appointments per 8-hr shift (with one of those usually a 90-min intake). I have an intake clinic and a separate 'intensive' clinic (for high-risk suicide flags and EBP protocols mostly, with whom I can meet weekly, approx 6 slots/wk). So this leaves about 15 weekly slots for 'standard' clinic cases, which--at this point--is 100+ patients and steadily growing. We have no social worker mental health case management services.

The arithmetic doesn't work, but admin keeps doubling down on the schema that the provider is always at fault. So, there is no maximum caseload--even in principle. A couple of the other providers in CBOCs have it a bit worse than me, but the distribution of cases is somewhat uneven across the department. Meanwhile, mental health admin types (social workers) who don't, and never really have, done any real therapy in their lives prattle on about protocols and 'evidence-based' therapies and consider anything outside a protocol to be 'not evidence-based.' Tsk tsk. It's frustrating.
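For what it's worth, the scheduling arithmetic in the post above checks out on the back of an envelope (the numbers are taken straight from the post; this is only an illustration):

```python
# Back-of-envelope check of the caseload arithmetic described above,
# using the numbers from the post: ~15 'standard' weekly slots, 100+ patients.
standard_slots_per_week = 15
caseload = 100

# Average gap between appointments if every patient cycles through the slots:
weeks_between_visits = caseload / standard_slots_per_week
print(round(weeks_between_visits, 1))  # about 6.7 weeks per patient

# Slots that would be needed just to see everyone every 4 weeks:
slots_needed_for_monthly = caseload / 4
print(slots_needed_for_monthly)
```

Even "monthly" follow-up would take 25 weekly slots, well beyond the roughly 15 available, which is the sense in which the arithmetic doesn't work.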
 
This seems backwards. MH admin at my facility were beginning to get nasty with clinicians who were not doing this and were continuing to do generic supportive therapy with goals that were not SMART before I left.

There's a bizarre disconnect. It goes back to what I've always said...VA isn't a healthcare organization. VA is a public-relations (PR) organization masquerading as a health care organization. What really matters is perception of 'quality' of care (and 'quality' is a fuzzy/mutating concept). Right now, one of the main PR thrusts is 'Whole Health.' The absolutely jaw-dropping, revolutionary, and entirely new (rolls eyes) idea that physical health can influence psychological health (& vice versa) and--get ready for this one--actual behavior can influence both.

So, anything (even sitting in a damn circle beating on drums)--even empty rituals--that is LABELED 'Whole Health' gets administration all hyped up, and they shove it down everyone's throats (including the veterans).

Same deal with the weekly (or more often) 'raising awareness' walkathons that they do around the quad. Instead of ensuring that the veterans on my caseload have access to actual prescribing providers to actually refill their antidepressant medications, we have admin staff push rituals/activities organized around 'raising awareness' for suicide prevention. The notion that actually getting depressed vets access to antidepressant medication refills JUST MIGHT be more effective in reducing suicide risk than walks (that vets don't even participate in--just staff who have nothing better to do) to 'raise awareness' about suicide being a problem for veterans is totally lost on administration. Seriously, at this point, who works at VA who isn't 'aware' of suicide as an issue facing veterans?
 
VA isn't a healthcare organization. VA is a public-relations (PR) organization masquerading as a health care organization.
I love that line. At the same time, this may actually be an advantage:

I feel like I can also say that most medical systems (even the non-profits) are profit-generating institutions masquerading as health care organizations.

These are places that don't even try to roll out EBPs and provide training. Do whatever you've got to do, as long as it doesn't get us sued, allows us to bill someone, and increases our status. At least the VA tries to address the public's concerns, sometimes. It's evident in the empirical data: VAs generally do a better job than other healthcare systems.
 
I love that line. At the same time, this may actually be an advantage:

I feel like I can also say that most medical systems (even the non-profits) are profit-generating institutions masquerading as health care organizations.

These are places that don't even try to roll out EBPs and provide training. Do whatever you've got to do, as long as it doesn't get us sued, allows us to bill someone, and increases our status. At least the VA tries to address the public's concerns, sometimes. It's evident in the empirical data: VAs generally do a better job than other healthcare systems.
Well-said. The organizational psychopathology underpinning all of this is definitely not limited to VA.
 
I had to save my questions. My main one had to do with whether there was an (even theoretical) maximum number of clients we can be expected to have in our caseload and still manage to complete all the requirements that they are laying out, especially in relation to formal measurement of treatment outcomes via questionnaires (which, of course, I agree with, but like everything else takes extra time), processing all the clinical reminders, authoring 'mental health suite treatment plans' (which they just said they were going to become more critical about and require a higher level of detail), etc., etc.

Good Lord, why don't they fix that travesty of a program already? Treatment plans are not that Fing difficult, and they don't need to be very long. In fact, the shorter and simpler, the better. IME, that MH suite thing encouraged people to do crappy treatment plans or not do them at all. I was the latter, at times...
 
There's a bizarre disconnect. It goes back to what I've always said...VA isn't a healthcare organization. VA is a public-relations (PR) organization masquerading as a health care organization.

While I think that's a bit overstated...I have often told people (in multiple settings, since the gobment has its nose in a lot of QA) that "the government" will never truly care about quality of...anything. Only metrics that can show a semblance of it.
 
The phrase I've heard that I like is that the VA isn't clinically driven; it's politically driven.
 
The phrase I've heard that I like is that the VA isn't clinically driven; it's politically driven.

After becoming pretty darn bored in PCMHI, I applied for an additional "Suicide Prevention Coordinator" position at our main hospital (I was working in a CBOC in the suburbs). This was 3.5 years ago. During the interview, I was talking about some ideas and thoughts I was having about what I could do with/in the position and the service, citing the literature and the current national concerns about veteran suicide. I was interrupted and told..."Oh, well, the coordinators don't really see patients at all."

So...yea.
 
Agree with one of the early posts above that there are deeper issues at play. Just going to leave this here: https://www.liebertpub.com/doi/full/10.1089/ees.2016.0223

I worry a lot that these things tend to turn into "Just pre-register your study and everything will be okay!" I am extraordinarily doubtful this will accomplish much of anything. The system is much more broken than that. Quantity > quality. Peer review is an assessment of the presentation of the work...not the work itself.

No peer reviewer will spend anywhere near the amount of time needed to provide an actual detailed review of the work. Stupid things...I recently caught a bug in the R code of a colleague who is one of our strong advocates for open science and quality work. R calculates sums of squares differently than other stats software in its base ANOVA function, so you have to jump through some hoops to get an ANOVA that is interpretable in the same way as the ANOVA we are all thinking of. This is the crap that terrifies me. It's a non-obvious error that I would guess impacts hundreds of studies by anyone naively trusting that a major stats package's "ANOVA" function does what all the other software packages do. It doesn't, but it's not clear in the documentation, and you would have no way to know that unless you spent 5-6 hours digging super deep into the documentation, trying things in multiple programs, and eventually getting pissed off and doing the calculations by hand...which I did, because I'm a co-author and the author is a friend; I sure as hell wouldn't have as a reviewer. This was ANOVA, so it's about the simplest stats technique anyone does these days. I'm probably more stats-savvy than most reviewers in our field. How many of these issues are out there for the actually complicated approaches?
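The sum-of-squares trap described above is easy to demonstrate. Base R's anova(aov(...)) reports sequential (Type I) sums of squares, so with unbalanced data the SS credited to a factor depends on the order in which terms enter the model; SPSS and SAS default to Type III, which does not (in R, car::Anova(fit, type = 3) with sum-to-zero contrasts gets you comparable output). A minimal sketch in Python rather than R, with made-up data, fitting OLS by hand so nothing beyond the standard library is needed:

```python
# Sketch of the Type I (sequential) sums-of-squares issue: with UNBALANCED
# data, the SS credited to factor B depends on whether B enters the model
# before or after factor A. All data below are invented for illustration.

def ols_rss(X, y):
    """Residual sum of squares from an OLS fit (normal equations solved by
    Gaussian elimination with partial pivoting; assumes full-rank X)."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(XtX[r][c]))
        XtX[c], XtX[piv] = XtX[piv], XtX[c]
        Xty[c], Xty[piv] = Xty[piv], Xty[c]
        for r in range(c + 1, p):
            f = XtX[r][c] / XtX[c][c]
            for k in range(c, p):
                XtX[r][k] -= f * XtX[c][k]
            Xty[r] -= f * Xty[c]
    beta = [0.0] * p
    for c in reversed(range(p)):
        beta[c] = (Xty[c] - sum(XtX[c][k] * beta[k] for k in range(c + 1, p))) / XtX[c][c]
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2 for i in range(n))

# Unbalanced 2x2 design: factors A and B dummy-coded 0/1, outcome y.
A = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1]
B = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0]
y = [3.1, 2.9, 4.2, 4.0, 5.1, 4.8, 3.0, 6.2, 5.9, 6.1, 6.0, 5.0]

ones = [[1.0] for _ in y]                   # intercept-only model
XA   = [[1.0, a] for a in A]                # y ~ A
XB   = [[1.0, b] for b in B]                # y ~ B
XAB  = [[1.0, a, b] for a, b in zip(A, B)]  # y ~ A + B

# Sequential (Type I) SS for B, under the two possible term orders:
ssB_after_A = ols_rss(XA, y) - ols_rss(XAB, y)   # y ~ A + B: B adjusted for A
ssB_first   = ols_rss(ones, y) - ols_rss(XB, y)  # y ~ B + A: B ignoring A

# With unbalanced cells the two disagree, so "the SS for B" is ambiguous
# unless you know which type of SS your software reports.
print(round(ssB_after_A, 3), round(ssB_first, 3))
```

Balanced designs make the two orders agree, which is part of why the discrepancy so often goes unnoticed until someone analyzes messy real-world data.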

Across the board, this is a societal problem. Maybe a capitalism problem? You certainly see the same thing in the business world. I certainly see it in the tech industry right now. For every 1 company making a major leap forward, I see 99 companies who seem to exist just to re-purpose other people's code to produce shoddy half-finished products they can then sell to administrators in other businesses who are wowed by the GUI and ignore the complete lack of substance. This is why I remain in academia despite my issues above.
 
I think a big part of it is an 'ideological possession' problem within the context of escalating polarization between ideological camps. And I think it's also a problem of politicians/administrators lacking a strong grasp of or an appreciation for the actual philosophy of science underpinning the in vogue term 'evidence-based X.'

And the most pernicious push I've experienced over my career is the absolute all-out warfare against individualized clinical decision-making in context, in favor of an ever-increasing body of one-size-fits-all rules, policies, procedures, and 'thou shalts'---many of which are even logically contradictory.

But, hey, Orwell taught us that 'doublethink' is actually an asset in an inner party member.
 
it's also a problem of politicians/administrators lacking a strong grasp of or an appreciation for the actual philosophy of science underpinning the in vogue term 'evidence-based X.'
Not just politicians/administrators, but many (likely most) of the providers.
 
@Ollie123 Maybe a bit too pessimistic?

No peer reviewer will spend anywhere near the amount of time needed to provide an actual detailed review of the work. Stupid things...I recently caught a bug in the R code of a colleague who is one of our strong advocates for open science and quality work. R calculates sums of squares differently than other stats software in its base ANOVA function, so you have to jump through some hoops to get an ANOVA that is interpretable in the same way as the ANOVA we are all thinking of. This is the crap that terrifies me. It's a non-obvious error that I would guess impacts hundreds of studies by anyone naively trusting that a major stats package's "ANOVA" function does what all the other software packages do. It doesn't, but it's not clear in the documentation, and you would have no way to know that unless you spent 5-6 hours digging super deep into the documentation, trying things in multiple programs, and eventually getting pissed off and doing the calculations by hand...which I did, because I'm a co-author and the author is a friend; I sure as hell wouldn't have as a reviewer. This was ANOVA, so it's about the simplest stats technique anyone does these days. I'm probably more stats-savvy than most reviewers in our field. How many of these issues are out there for the actually complicated approaches?
I think some reviewers do fabulous jobs at reviewing manuscripts. You are right, no one re-runs the analysis or interviews coworkers about recruitment practices or other unrealistic endeavors.

However, no single study is conclusive. The article I talked about and @cara susanna posted is exactly looking at the problems you highlight. Reporting of statistics is not flawless. We can get better at assessing the quality of statistics and methodologies as time goes on.

Across the board, this is a societal problem. Maybe a capitalism problem? You certainly see the same thing in the business world. I certainly see it in the tech industry right now. For every 1 company making a major leap forward, I see 99 companies who seem to exist just to re-purpose other people's code to produce shoddy half-finished products they can then sell to administrators in other businesses who are wowed by the GUI and ignore the complete lack of substance. This is why I remain in academia despite my issues above.
I think it is important to take into consideration that anything that motivates people can have beneficial and harmful effects (the DBT therapist inside me is talking). The perverse incentives found in the current world of research that have led to inaccurate/overblown/fabricated findings are also the incentives that have driven a great deal of quality research. Of course it can get better. Primarily, I think the majority of the work needs to be addressed within academia itself. Quantity of publications, the status of journals, and media attention have superseded the quality of the work being done. Hiring committees, P&T committees, and advisors/mentors could change this fairly quickly.
 
From the article:

A small number of ESTs (e.g., both Cognitive Processing Therapy and Prolonged Exposure for PTSD) scored consistently well across all or most metrics, whereas a larger number of ESTs— including a number classified as Strong (e.g., Behavioral Activation for Depression, Cognitive Remediation for Schizophrenia, Dialectical Behavior Therapy for Borderline Personality Disorder)— performed relatively poorly across most or all of our metrics of evidential value.

Well, that's upsetting about DBT but at least CPT and PE did well. My professional identity crisis is averted, at least for now.
As a colleague/friend of the authors, I can't tell you how much it pained some of them to write this criticism of EST/EBP lol.
 
@Ollie123 Maybe a bit too pessimistic?


I think some reviewers do fabulous jobs at reviewing manuscripts. You are right, no one re-runs the analysis or interviews coworkers about recruitment practices or other unrealistic endeavors.

However, no single study is conclusive. The article I talked about and @cara susanna posted is exactly looking at the problems you highlight. Reporting of statistics is not flawless. We can get better at assessing the quality of statistics and methodologies as time goes on.


I think it is important to take into consideration that anything that motivates people can have beneficial and harmful effects (the DBT therapist inside me is talking). The perverse incentives found in the current world of research that have led to inaccurate/overblown/fabricated findings are also the incentives that have driven a great deal of quality research. Of course it can get better. Primarily, I think the majority of the work needs to be addressed within academia itself. Quantity of publications, the status of journals, and media attention have superseded the quality of the work being done. Hiring committees, P&T committees, and advisors/mentors could change this fairly quickly.

Isn't this what the clinical science programs are supposed to be doing (as of the past 15 years or so)...the dissemination, implementation, and education/modeling aspects of EBT out in the clinical world somewhere? I don't know where such jobs actually are (VACO programs and some MIRECCs, maybe), but certainly most of these graduates end up in traditional, 100% academic roles at major universities. And... the programs brag about that outcome. That has never made much sense to me.
 
As a colleague/friend of the authors, I can't tell you how much it pained some of them to write this criticism of EST/EBP lol.

No need for them to feel guilty! This is important work. I also feel like they did a good job saying "this doesn't mean we should all start doing Moonbeam Therapy"
 
Oh, I fully agree with almost everything that you posted. And in re-reading that, I do come across a bit more as a downer than I intended. There are definitely ways to improve things. However, my biggest concern links to what you said: "We can get better at assessing the quality of statistics and methodologies as time goes on." We will undoubtedly get better at those things. My question is...how much will it help the actual problem? It seems a bit like the proverbial "band-aid on the broken limb." I don't see much (if any) effort to address the actual causes of questionable science. It's not going to be fixed with new requirements for CONSORT diagrams.

I think a lot of it boils down to people trying to do too much in too little time. Not having time to stop and think things through. RCT methodology is not rocket science. Yes, we can always make improvements and refinements. There are certain subfields (e.g., psychoanalysis) where methodology routinely falls short of 6th-grade science fair standards, but that isn't the norm in the field. Science is somewhat self-correcting in this regard, and that is largely working as planned. However, there is still an elephant-juggling polka-dot dinosaur in the room that I think is mostly being ignored right now.

This may be somewhat setting dependent, but I don't know that hiring committees, P&T committees, etc. could change this quickly. Soft money is a major barrier on the medical side...we've created a system where not just long-term success, but even continued employment, is (in some ways) contingent on study outcomes. There isn't an easy path out of that short of massive overhauls to AMC structure and NIH funding programs. On the university side, research infrastructure and staffing are major barriers. Yet I see 10x the effort dedicated to "open science initiatives," research compliance requirements, and whatnot than I do to resolving those problems. Our focus is wrong.

Keep in mind, I say this all as an insider. I'm faculty in a top psychiatry department, spend 85% of my time on research and have multiple active grants. So this isn't "Science is bunk, come to my voodoo therapy clinic." Just frustration with what I view as very misguided efforts to address important problems.
 
This may be somewhat setting dependent, but I don't know that hiring committees, P&T committees, etc. could change this quickly. Soft money is a major barrier on the medical side...we've created a system where not just long-term success, but even continued employment, is (in some ways) contingent on study outcomes. There isn't an easy path out of that short of massive overhauls to AMC structure and NIH funding programs. On the university side, research infrastructure and staffing are major barriers. Yet I see 10x the effort dedicated to "open science initiatives," research compliance requirements, and whatnot than I do to resolving those problems. Our focus is wrong.

Keep in mind, I say this all as an insider. I'm faculty in a top psychiatry department, spend 85% of my time on research and have multiple active grants. So this isn't "Science is bunk, come to my voodoo therapy clinic." Just frustration with what I view as very misguided efforts to address important problems.

Agreed with this. Maybe it is the jaded cynic in me. However, the business/money side of things always seems to infect idealized versions of science and healthcare, turning something that was intended to be good into a perverted game. Not just in these two fields, either: I feel like there is more money and success in BS than there is in actually addressing problems and fixing systems.
 
@Ollie123 Maybe a bit too pessimistic?


I think some reviewers do fabulous jobs at reviewing manuscripts. You are right, no one re-runs the analysis or interviews coworkers about recruitment practices or other unrealistic endeavors.

However, no single study is conclusive. The article I talked about, which @cara susanna posted, looks at exactly the problems you highlight. Reporting of statistics is not flawless. We can get better at assessing the quality of statistics and methodologies as time goes on.


I think it is important to take into consideration that anything that motivates people can have both beneficial and harmful effects (the DBT therapist inside me is talking). The perverse incentives in the current world of research that have led to inaccurate/overblown/fabricated findings are also the incentives that have driven a great deal of quality research. Of course it can get better. Primarily, I think the bulk of that work needs to happen within academia itself. Quantity of publications, the status of journals, and media attention have superseded the quality of the work being done. Hiring committees, P&T committees, and advisors/mentors could change this fairly quickly.
Agreed. In a philosophy of science seminar I used to teach interns, I was very fond of this quote by Charles Sanders Peirce:

'There is one thing even more vital to science than intelligent methods and that is, the sincere desire to find out the truth, whatever it may be.'

I always loved that quote, and always emphasized the 'sincere,' and the 'whatever it may be' parts.
 
  • Like
Reactions: 2 users
Oh I fully agree with almost everything that you posted. And in re-reading that, I do come across a bit more as a downer than I intended. There are definitely ways to improve things. However, my biggest concern relates to what you said: "We can get better at assessing the quality of statistics and methodologies as time goes on." We will undoubtedly get better at those things. My question is...how much will it help the actual problem? It seems a bit like the proverbial "bandaid on the broken limb." I don't see much (if any) effort to address the actual causes of questionable science. It's not going to be fixed with new requirements for CONSORT diagrams.

I think a lot of it boils down to people trying to do too much in too little time. Not having time to stop and think things through. RCT methodology is not rocket science. Yes, we can always make improvements and refinements. There are certain subfields (e.g. psychoanalysis) where methodology routinely falls short of 6th grade science fair standards, but that isn't the norm in the field. Science is somewhat self-correcting in this regard and that is largely working as planned. However, there is still an elephant juggling polka-dot dinosaur in the room that I think is mostly being ignored right now.

This may be somewhat setting dependent, but I don't know that hiring committees, P&T committees, etc. could change this quickly. Soft money is a major barrier on the medical side...we've created a system where not just long-term success, but even continued employment is (in some ways) contingent on study outcomes. There isn't an easy path out of that short of massive overhauls to AMC structure and NIH funding programs. On the university side - research infrastructure and staffing are major barriers. Yet I see 10x the effort dedicated to "open science initiatives", research compliance requirements and whatnot than I do to resolving those problems. Our focus is wrong.

Keep in mind, I say this all as an insider. I'm faculty in a top psychiatry department, spend 85% of my time on research and have multiple active grants. So this isn't "Science is bunk, come to my voodoo therapy clinic." Just frustration with what I view as very misguided efforts to address important problems.

If I'm understanding correctly, the "juggling polka-dot dinosaur" in your metaphor is a reference to the perverse incentives that are set up by the soft-money funding structures of NIH that most AMC faculty positions are dependent on. Specifically, the need to be perceived as "fundable" in order to move up the AMC hierarchy (and often be paid at all), which creates an incentive to increase number of publications, which is in turn dependent on "publishable findings" that avoid the file-drawer problem, which then leads to sloppy/lazy (e.g., overlooking statistical errors) or even intentionally poor science (e.g., fishing).

As a current intern considering whether to pursue a more research focused vs. more clinical/education focused career, I'm curious what you see as a solution? How do we do more than "put a bandaid on a broken limb"?

Frankly, your description, and my own experience (very involved in a premier AMC working directly with numerous jr/sr faculty) makes the prospect of a research career pretty unappealing. I'm hopeful, because I do love research, and at the same time I'm at a point in my life where I'm not sure it's worth the risk of downstream consequences for myself or my family.
 
If I'm understanding correctly, the "juggling polka-dot dinosaur" in your metaphor is a reference to the perverse incentives that are set up by the soft-money funding structures of NIH that most AMC faculty positions are dependent on. Specifically, the need to be perceived as "fundable" in order to move up the AMC hierarchy (and often be paid at all), which creates an incentive to increase number of publications, which is in turn dependent on "publishable findings" that avoid the file-drawer problem, which then leads to sloppy/lazy (e.g., overlooking statistical errors) or even intentionally poor science (e.g., fishing).

As a current intern considering whether to pursue a more research focused vs. more clinical/education focused career, I'm curious what you see as a solution? How do we do more than "put a bandaid on a broken limb"?

Frankly, your description, and my own experience (very involved in a premier AMC working directly with numerous jr/sr faculty) makes the prospect of a research career pretty unappealing. I'm hopeful, because I do love research, and at the same time I'm at a point in my life where I'm not sure it's worth the risk of downstream consequences for myself or my family.

Just FYI, I started out research (had a research-focused fellowship) and decided to go clinical because the revolving door of pursuing funding just sounded so exhausting. It was like, okay, here's how you get your CDA. Then during your CDA, here's how you get your R01... etc etc. At some point I decided that I just wanted stability.
 
  • Like
Reactions: 1 users
If I'm understanding correctly, the "juggling polka-dot dinosaur" in your metaphor is a reference to the perverse incentives that are set up by the soft-money funding structures of NIH that most AMC faculty positions are dependent on. Specifically, the need to be perceived as "fundable" in order to move up the AMC hierarchy (and often be paid at all), which creates an incentive to increase number of publications, which is in turn dependent on "publishable findings" that avoid the file-drawer problem, which then leads to sloppy/lazy (e.g., overlooking statistical errors) or even intentionally poor science (e.g., fishing).

As a current intern considering whether to pursue a more research focused vs. more clinical/education focused career, I'm curious what you see as a solution? How do we do more than "put a bandaid on a broken limb"?

Frankly, your description, and my own experience (very involved in a premier AMC working directly with numerous jr/sr faculty) makes the prospect of a research career pretty unappealing. I'm hopeful, because I do love research, and at the same time I'm at a point in my life where I'm not sure it's worth the risk of downstream consequences for myself or my family.

Just FYI, I started out research (had a research-focused fellowship) and decided to go clinical because the revolving door of pursuing funding just sounded so exhausting. It was like, okay, here's how you get your CDA. Then during your CDA, here's how you get your R01... etc etc. At some point I decided that I just wanted stability.

You have to already be "fundable" (i.e., track record of funding) to even be hired as tenure-track faculty at any research oriented university. Some R2s too.

My opinion is that the decision to choose a primarily academic path in psychology is largely a function of personality once you have crossed the minimal threshold barrier, which as I mentioned above, is already very high. The hoop jumping, for most people, has to stop somewhere. Personally, I was not sooo in love with psychological science that I wanted to depend on that machine to support my family for 30 years. I mean, my wife works too, but we have had 3 children, and I was certainly raised to be pragmatic enough to think about how to ensure/protect (within reason) that my family is stable and supported without having to work non-stop or stress out about the sustainability of the income stream. Hence, some of the appeal of VA employment for many psychologists, right? Especially early-career.
 
Last edited:
  • Like
Reactions: 1 user
I think you captured the essence of my concerns. My colleagues in psych departments seem to have their own versions (e.g. the counterpoint to being reliant on grants is that it can be hard to find outstanding help without grant money), but at least on the AMC side I think soft money coupled with the overarching structure of the current NIH system is the "juggling polka-dotted dinosaur."

My personal solution so far has been to strike a balance. As best I can tell at this stage of my career, I'm reasonably good at grant writing. This helps since (for the most part) as long as funding is there, people in AMCs will tend to leave you alone. We'll see if that continues once I push to the next level since I'm about to start working on my first R01. I'm not as productive as some of my colleagues. Being obsessively detail-oriented and quality-focused, with a penchant for insanely complicated studies and a strong desire to dabble and explore new areas versus building a clear programmatic line of work, is...problematic. I chase interesting ideas and scientific rigor more than professional success. So far it's worked out, though at some personal expense (I work...a lot). That said, the pay is pretty solid for someone early career with even greater long-term potential if you "make it". The job can be downright fun at times, which isn't something many people can say about their work. Being a clinician makes things significantly less scary. Many places will find a way to keep you around if you are willing to see more patients. I'm not sure I would want to do this if I was a neuroscientist doing what I do.

Bigger picture, we need to completely overhaul a lot of infrastructure. We probably need to downsize medical schools significantly - which I say even knowing it could cost me my job. We need to find a better balance where medical schools are contributing to their own research missions versus just agreeing to warehouse faculty during the time they have grants. There needs to be institutional commitment in that regard. Relatedly, we need to completely overhaul the NIH system. At one point, there was discussion circulating of funding "labs" versus "projects". While it has its own problems, I think the idea has merit. Project-based funding has always seemed inherently problematic to me. It's easy to set up the system, but it creates many of those perverse incentives. We need to pull back on a lot of our quantity metrics...both from within (tenure review) and without (funding considerations). Peer review needs a massive overhaul. We need to create systems where the research quality is legitimately reviewed. Compliance should not mean hiring a nurse to go around and yell at research coordinators who cross things out "wrong"...but that's often what it is. If you want to up the quality, have someone actually checking the things that matter. Make sure the analysis was done correctly - for every 1 instance of p-hacking, I will virtually guarantee you there are 100 instances of well-meaning people legitimately effing up their complicated statistical model in ways that our current system would never catch.

I'll leave it at that. We could create a whole other post (or forum?) just on this topic.
 
  • Like
Reactions: 3 users
You have to already be "fundable" to even be hired at any research oriented university. Some R2s too.

My impression is this doesn't really stop after hiring. E.g., in order to move from assistant to associate you have to demonstrate your work is continuing to be funded, you're continuing to publish, etc.

My opinion is that this is largely a function of personality once you have crossed the minimal threshold barrier, which as I mentioned above, is already pretty high. You think Paul Meehl could have been a tenure-track psychology professor at a place like Minnesota today? Even in the 70s? No way.

I get the impression that there's a certain level of devotion to curiosity, uncertainty tolerance, and faith in the funding system necessary to commit to a research career.

The hoop jumping, for most people, even high-achieving peeps, usually has to stop somewhere. I was not so in love with science that I wanted to depend on that machine to support my family. I mean, my wife works too, but we have had 3 children, and I was certainly raised to be pragmatic enough to think about how to ensure/protect (within reason) that your family is stable and supported without having to work non-stop or stress out about the sustainability of the income stream.

I think you and I are very similar. I appreciate you sharing your thoughts, and I also wish I/we were missing something.

Hence, some of the appeal of VA employment for many psychologists, right? Especially early-career.

You say that like there is any real likelihood of going from a VA clinical position to a research position later on in a career. My impression is that doesn't really happen, but maybe I'm wrong?
 
You say that like there is any real likelihood of going from a VA clinical position to a research position later on in a career. My impression is that doesn't really happen, but maybe I'm wrong?

Within the VA system it can happen, certainly.... so long as you have the requisite background. VA Central Office (VACO) has opportunity for psychologists to move into training and research oriented paths. However, they are competitive, no doubt. If employed at a solid VA that allows you to continue some strong scientist-practitioner work, I am sure you can move to any number of R3 academic institutions. Maybe some AMC affiliated programs or sub-programs (there are quite a few of the latter...I don't know what the pay is like). From VA Staff Psychologist to tenure track faculty at "University of any state?" No.

The "VA early career" comment was meant to convey that you start there and then move on to...something else. Clinical...or kinda clinically related. There are other things you can do besides be a professor or see patients all day.

I, personally, am not motivated or disciplined enough to build my own private-practice business empire, or do R1 type work. 15-20 years ago, I probably was. Priorities change. I think I may have sent you a PM about a month ago?
 
Last edited:
You say that like there is any real likelihood of going from a VA clinical position to a research position later on in a career. My impression is that doesn't really happen, but maybe I'm wrong?

I did this once upon a time, and I know of at least 2 others who did as well. Only 1 is still in that research position full time a few years later (by choice).
 
I’m commenting on something I saw a bit farther back, but per the PE research, I thought the attrition rate for Prolonged Exposure therapy was said to be ~20%. Are VA folks seeing higher rates in practice, or are you saying that many vets refuse to even start PE?
 
I’m commenting on something I saw a bit farther back, but per the PE research, I thought the attrition rate for Prolonged Exposure therapy was said to be ~20%. Are VA folks seeing higher rates in practice, or are you saying that many vets refuse to even start PE?

Not sure about others, but I struggle to get Vets to agree to start PE when presented with the requirements. I have also had a few that dropped out previously.
 
  • Like
Reactions: 1 user
I’m commenting on something I saw a bit farther back, but per the PE research, I thought the attrition rate for Prolonged Exposure therapy was said to be ~20%. Are VA folks seeing higher rates in practice, or are you saying that many vets refuse to even start PE?
In my experience, there is a lot of discussion among vets that PE is "dangerous." I heard that from more than a handful of Vets.
 
  • Like
Reactions: 1 user
I’m commenting on something I saw a bit farther back, but per the PE research, I thought the attrition rate for Prolonged Exposure therapy was said to be ~20%. Are VA folks seeing higher rates in practice, or are you saying that many vets refuse to even start PE?

Is the ~20% attrition rate VA-specific? I honestly haven't looked into it, but I can imagine there are multiple factors that would contribute to attrition and/or poorer outcomes (e.g., questionable/inaccurate PTSD diagnoses, incentive to not improve, the experiences Sanman and WisNeuro mentioned).
 