New study questions efficacy of evidence-based therapies for PTSD in veteran and military population

cara susanna:
Hi all,

A colleague showed me an article about a new paper that questions the efficacy of EBPs for PTSD like PE and CPT. I was wondering what you all thought of it.

Here's the article discussing the paper - VA, DoD recommended PTSD therapies don’t help many military patients, review finds

Here is the paper itself - Psychotherapy for Military-Related PTSD

Wait, you're saying that a VA sample doesn't show the same treatment effects that other populations do? I wonder why that is...

Seriously, I want to see meta-analyses that look at treatment effects in other countries that do not share our incentivized "it pays to be sick" model in the VA. I'd imagine you might find a very different effect when your research subjects are not paid to stay symptomatic.
 
It actually doesn't even say that. The Military Times piece is a hack job of an article. The original piece actually shows that PE and CPT showed significant improvement over waitlist and treatment as usual. It also stated that 49-70% showed meaningful improvement in symptoms. It just didn't improve to the point of removing the diagnosis (in a population with secondary gain out the wazoo if you lose that diagnosis). So they proved it isn't a perfect treatment and have no better suggestions for treatment. Useful all-around info, huh.
 
Add to that the hint of general treatment equivalence across other non-PTSD-focused therapies... and we're off to the races.
 
So many thoughts rush to my mind:
1. The review was published 5 years ago, but the MT article was written today.
2. The review is narrative rather than a meta-analysis.
3. The review throws shade at psychology:
CPT and prolonged exposure, the 2 most widely used first-line (ie, recommended) therapies, show large within-group (pretreatment to posttreatment) effect sizes. However, effect sizes, which are more commonly used in psychology literature than in medical literature, reflect mean outcomes and do not adequately capture heterogeneity in patient outcomes; between one-third and one-half of patients receiving CPT or prolonged exposure did not demonstrate clinically meaningful symptom change (when this outcome was reported).
To put it another way, 1/2 to 2/3 do demonstrate a clinically significant symptom change (a toy sketch at the end of this list illustrates the point).

4. The MT article, quoting one of the authors, provides a confusing statement that contradicts the review:
Still, just 31 percent to 50 percent of patients actually achieved what Steenkamp would call “a clinical success.”

“We found that a third to half the patients respond well [to CBT or PE]. Of course that’s the same way as saying two-thirds to half don’t respond in a way that we would consider successful,” Steenkamp said.
Can't tell what they mean by clinical success since it's the reverse of what the review states. Or more precisely, here is the abstract quote:
Forty-nine percent to 70% of participants receiving CPT and prolonged exposure attained clinically meaningful symptom improvement (defined as a 10- to 12-point decrease in interviewer-assessed or self-reported symptoms)
Pretty damn good if you ask me. And there's no quantitative comparative evidence indicating an alternative treatment would work better.

5. More shade thrown:
Approximately two-thirds of patients receiving CPT or prolonged exposure retained their diagnosis posttreatment. Mean PTSD scores have tended to remain at or above diagnostic thresholds after treatment, and the 2 studies reporting remission rates suggest that symptom remission is relatively uncommon.
In a vacuum, this appears to be a poor result. However, very few treatments (mostly for phobias and panic) are powerful enough - without long-term drawbacks (whether pharmacological or psychological) - to have a large percentage of people no longer meet criteria for the disorder and remain disorder-free long-term. The authors are setting the bar too high for currently available treatments. Moreover, treating a mental disorder is not like treating an infection. Still, the review provides no evidence that alternative treatments would be any better.

6. They keep mentioning in the review that response rates are similar across trauma-focused treatments and non-trauma-focused treatments, yet this is all based on a narrative interpretation.
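(Re: point 3, here is a toy sketch of how a large mean within-group effect size can coexist with a big fraction of patients showing no clinically meaningful change. All numbers are made up for illustration, not taken from the review.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre/post symptom scores; purely illustrative, not review data.
pre = rng.normal(65, 10, size=200)          # pretreatment severity
post = pre - rng.normal(11, 12, size=200)   # mean drop ~11 points, high variance

# Within-group (pre-post) Cohen's d, pooling the pre and post SDs.
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
d = (pre.mean() - post.mean()) / pooled_sd

# "Clinically meaningful improvement" per the review's definition:
# a 10- to 12-point decrease (using 10 here).
meaningful = (pre - post) >= 10

print(f"within-group d = {d:.2f}")                    # large by convention (~0.8)
print(f"meaningful change: {meaningful.mean():.0%}")  # yet only ~half improve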
 
"Two trauma-focused therapies, cognitive processing therapy (CPT) and prolonged exposure, have been the most frequently studied psychotherapies for military-related PTSD. Five RCTs of CPT (that included 481 patients) and 4 RCTs of prolonged exposure (that included 402 patients) met inclusion criteria. Focusing on intent-to-treat outcomes, within-group posttreatment effect sizes for CPT and prolonged exposure were large (Cohen d range, 0.78-1.10). CPT and prolonged exposure also outperformed waitlist and treatment-as-usual control conditions. Forty-nine percent to 70% of participants receiving CPT and prolonged exposure attained clinically meaningful symptom improvement (defined as a 10- to 12-point decrease in interviewer-assessed or self-reported symptoms). However, mean posttreatment scores for CPT and prolonged exposure remained at or above clinical criteria for PTSD, and approximately two-thirds of patients receiving CPT or prolonged exposure retained their PTSD diagnosis after treatment (range, 60%-72%). CPT and prolonged exposure were marginally superior compared with non–trauma-focused psychotherapy comparison conditions. "

These actually seem like pretty good outcomes, especially considering that they used intent-to-treat analyses, and the VA often unintentionally reinforces dropping out and retaining a diagnosis.
 
I can kind of respect the broader point that I hope the original authors were trying to make. Even our best treatments for most things ain't all that great. I do think a part of that is the nature of mental disorders and we are unlikely to ever find a "cure" in the same manner as, say...antibiotics. At the same time, I also think many of us have rose-colored glasses on when it comes to what we do. I think we are complacent. We're too happy to accept relatively poor recovery rates as "good enough" and too quick to place the blame on patient compliance or a specific therapist. We call it a win if we do something that beats our version of "placebo," but studies stacking things up against active comparisons are few and far between. Few people are even trying to drive up that success rate anymore. Our framing is also all wrong. For most disorders we are very much in an NNT framework, but that is not how psychologists are usually thinking, planning, or functioning. We create unrealistic expectations for patients and other providers as a result.

None of this is specific to PTSD. None of it is specific to the VA. None of it is specific to psychotherapy. I mostly do addiction work and my thoughts on the matter definitely formed there...where outcomes are (typically) far worse. My views on psychopharmacology are not any better (and arguably worse)...and I do as much pharmacology work these days as I do behavioral work. If the point is that our treatment development work isn't done...I am 100% on board. If it is "CPT and PE don't have 100% cure rates so it's OK to do rebirthing therapy," then obviously that is insane. If my memory of the recent numbers is correct...the majority of women diagnosed with breast cancer now have a nearly 100% five-year survival rate. We do rigorous screening, catch most of it in early stages and the treatments at those stages are relatively curative. Yet I don't see oncologists saying "problem solved." We shouldn't either.
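(Since the NNT framing came up: a minimal sketch of the arithmetic. The response rates below are hypothetical placeholders, not figures from the review or any trial.)

```python
# Number needed to treat = 1 / absolute difference in response rates.
def nnt(treatment_response_rate: float, control_response_rate: float) -> float:
    arr = treatment_response_rate - control_response_rate  # absolute risk reduction
    if arr <= 0:
        raise ValueError("NNT is only defined when treatment outperforms control")
    return 1.0 / arr

# e.g., a hypothetical 55% response to an active therapy vs 30% under TAU:
print(nnt(0.55, 0.30))  # 4.0 -> treat ~4 patients to get 1 additional responder
```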
 
I agree with all of that, but it leads to an interesting question about how to address improving those treatments. It raises the larger question: if differences don't generally exist across treatment types, why continue to pump energy and resources into them as if they do? Or should we invest in other aspects of psychotherapy? I don't see the narrative of 'unique pills for unique needs' changing in our treatment language, and that seems unfortunate.
 
Don't want to sidetrack things too much, but do want to respond since this is an issue I spend a lot of time thinking about:
- Failing to find differences is only part of the issue. There are differences for some things (the Dodo died quite a long time ago). The question is how we move past that. Developing a new treatment from the ground up that does about as well as CPT isn't the way to do it. Let's start by looking at CPT and think through what might be missing. I think making things more modular is one direction to go. For all its flaws, RDoC has the right idea. Our diagnostic system is trash. We need to scrap it and rebuild it from the ground up. That won't happen overnight.
- For SUDs in particular, I think we do an abysmal job addressing all contributing factors. We can mitigate W/D symptoms, but that isn't enough alone. We can address craving, but without alternative reinforcers in your life you are fighting an uphill battle. Those alternative reinforcers may not be great if your social circle is primarily a bunch of ongoing drug users. No one treatment is doing a great job addressing all these things. Most are doing a mediocre-to-passable job at 1 or 2.
- Related to modular-ness, studies combining state-of-the-art meds and therapy need to become the norm rather than the exception. I would wager that 90% of even our most recent literature is testing psychotherapy with "established med regimen/usual care med regimen/no meds" or the converse. Let's throw the kitchen sink at people and see how high we can push that recovery rate. Screw RCTs, I just want to know how well we can do if we do EVERYTHING we can. And I don't know. That is a problem. Other fields operate this way. Primarily high-risk ones where you can't get away with wait-list controls or the kinda crap we do. We can work that way too, we just don't. There is even still plenty of room for testing things against weak comparators, it just can't be where we stop.
- We do a trash job of addressing environments and systemic influences on mental illness. We can say we're clinicians and delegate this out to others or throw our arms up in the air and say it's too hard, but I don't think either of those is the right choice.
- We do need to start thinking about population health here. Even if we can't improve our treatment outcomes, we can shift our treatment model to better align with what we actually do. This may be the most painful one and I don't think it will make me friends here - we love our 45-60 minute appts and cozy offices. We love building deep relationships with patients and believe no smartphone or computer can replace that. What is the minimum necessary dose of psychotherapy? Can I make behavioral activation into something "bite-size" I can talk someone through in 1-2 15-minute appts and then give them an app? Can I do it in primary care clinic space or some similar office setting? If we can't bolster efficacy, we need to at least find ways to bolster reach. Admittedly my clinical work is only a relatively small portion of my salary, but my stance has always been that if a computer can do it better than I can...I'll happily go find something else to do. Or I'll shift my focus to finding ways to make the computer better. Too many people seem to view it as a threat.

A million more thoughts on the matter if you want to PM. Definitely interested to hear what you meant by "invest in other aspects of psychotherapy" though.
 
I don't think it's a sidetrack at all because it seems directly relevant to the question of efficacy of ESTs and what should constitute EBP. I agree with most of your points save for the dodo bird issue.

With regard to that point, I have not found that to bear out in the literature. The research on shared outcomes across treatments is fairly robust without moving into fringe therapies that I would hardly consider treatment (even considering research from multiple 'camps'). Sure, CPT works (I'm a fan of CPT personally - a big fan because I find the approach to jibe with me and with clients), but so do other therapies, so saying "what is missing from CPT" assumes that we have any idea of why that therapy works or why, from a dismantling perspective, a given treatment (CPT) should become the prescribed intervention we start with. It puts the cart before the horse. I don't disagree about the diagnostic system being garbage in any way - but diagnosis aside (since this issue/pattern of treatment equality presents across diagnostic formulations and symptom sets), the issue still remains 'how do you pick which therapy is best'. Why start with CPT, ACT, or PE for trauma treatment? Or person-centered? Heck, WET works with hardly any intervention from a clinician, with equal efficacy. Each assumes different things (sometimes/often contradictory). Behavioral activation doesn't even always follow the same principles it lays out to explain itself (I was just reading a recent study of BA yesterday). If we don't know the mechanism we are targeting, perhaps it's a bit too early to focus on what we need to do to fix the car.

Heck, I'm not even sure that our decision to focus on symptoms is the best way to assess treatment outcomes, but it is the easiest and so it's the most common - it also fits well within an RCT framework. This is a related point because it underscores why it's easier to select a therapy if you say 'the thing that matters is symptoms' - that may be more of an open debate than the assumption suggests. Honestly, I'm a fan of the whole 'prohibited treatments' Lilienfeld argues for much more so than the 'prescribed treatments'. It fits the state of our science a lot better in my eyes. Given all of this, having research identify mechanisms that reliably work and which we can explain/target seems critical. We simply don't have that, so putting specific TXs as standouts seems odd to me because it assumes we know something about those treatments that we don't (i.e., why they work / that they work differently than others). We spend a lot of time, money, and energy investing in training specific modalities, requiring them, and making sure that they roll out. Given what I said above (and the associated dropout rates, particularly for trauma treatments), I wonder if that's the best use of those massive resources.
 
I think we are mostly in agreement. RE: dodo bird...I agree that if we are talking about comparing well-established therapies, most work about the same. There are exceptions (e.g. psychoanalysis for specific phobias). Mostly I just don't like the dodo bird because I feel it gets used to justify doing things when no one actually put together a halfway-decent RCT (mainly analysis vs whatever). Certainly if we are talking things like "CPT vs PE" it applies.

And I want to be clear that I'm throwing these things out as "ideas" for new things we can try...not as definitive solutions. I've built my career on understanding mechanisms (seriously - the first section in my biosketch is "Improving our understanding of mechanisms for behavioral and pharmacological interventions") for exactly the reasons you lay out, so needless to say I'm all for what you propose. I don't have any strong basis for saying we should use CPT or any other therapy as a starting point versus looking at mechanisms versus anything else. I think we should try all of them. Anything different from "I'm going to write my own treatment manual that is kinda-like CBT but totally not so I can put my name on it" is a step up. I think adding modules to existing therapies has the <potential> to move us in the right direction. It may not. Let's just try some new stuff and see what sticks.
 
I would get this tattooed on my arm today if I could. lol
 
It raises the larger question: if differences don't generally exist across treatment types
The research on shared outcomes across treatments is fairly robust
I disagree on this point. I feel like I keep hearing this from psychologists and it drives me nuts. For some disorders, very broad disorders like depression, yes a lot of things work. Or, doing anything will make you less depressed. For more specific disorders that are more severe (e.g., eating disorders, BPD with recent suicidality), we have strong evidence that certain treatments do not work as well as others.
 
Sure, CPT works (I'm a fan of CPT personally - a big fan because I find the approach to jibe with me and with clients), but so do other therapies, so saying "what is missing from CPT" assumes that we have any idea of why that therapy works or why, from a dismantling perspective, a given treatment (CPT) should become the prescribed intervention we start with. It puts the cart before the horse.
Again, I have to disagree. The reason to use CPT or PE is b/c we have a theory (experiential avoidance leads to problems) for them that is better than other theories (eye movements reprogram memories or unconscious libido). The reason to use these as first-line treatments is b/c they have sound, scientifically supported theories AND empirical evidence saying they work better than controls (apparently, WL and TAU).

Of course that doesn't mean we should stop trying to improve on the treatments and developing new ones that may be more efficacious.
 
Given all of this, having research identify mechanisms that reliably work and which we can explain/target seems critical. We simply don't have that, so putting specific TXs as standouts seems odd to me because it assumes we know something about those treatments that we don't (i.e., why they work / that they work differently than others).
While we don't have this for the majority of disorders, we do indeed have this research for some disorders. The one that most comes to mind is actually exposure, fear, and learning, with research from people like Barlow and Craske. For example, we used to think that exposure to feared stimuli needed to be experienced until the SUDS start coming down. Now, we have evidence that a decrease in SUDS is not critical but, instead, the goal is to experience high SUDS levels without engaging in problematic behaviors (e.g., avoidance).

Similarly, we know some treatments do more than treat symptoms (e.g., DBT; Bedics et al., 2012).

This stuff takes time and we know a lot more now than we knew even 20 years ago. I think we are much better at treatment now than 100 years ago or 60 years ago, or 30 years ago (e.g., we have less harmful treatments, we have some better treatments, and we have more disorder-specific treatments). I have had faculty tell me that in the 80s the idea of cults and satanic rituals as the cause of many disorders was not a fringe idea in clinical practice. While we still have a lot of pseudoscience, we seem to be much better.
 
If my memory of the recent numbers is correct...the majority of women diagnosed with breast cancer now have a nearly 100% five-year survival rate. We do rigorous screening, catch most of it in early stages and the treatments at those stages are relatively curative.
Off-topic, but not so much... Although it is true that early stage breast cancer has good five-year survival rates, about 10% of women are diagnosed stage IV de novo (initial diagnosis), which is incurable, and about 20% of women diagnosed at earlier stages will eventually develop stage IV BC. Additionally, we are, for unknown reasons, seeing a concerning increase in young women (<40 years old) diagnosed with stage IV BC, with many/most of those dying within five years of dx. Digging into the BC survival data presents a much less rosy picture than is commonly assumed.
 
Oh absolutely. Sorry if I wasn't clear, and I hope it goes without saying, but I'm not trying to paint a rosy picture of breast cancer as a solved problem. I have mostly trained in cancer centers and have a cancer center appointment right now, so I used it as an example because I know that isn't the case. When you dig into the data we aren't as good at treating a lot of things. The difference is that I think other fields (oncology among them) are doing a better job of honestly assessing the state of their treatments and taking more active steps to resolve it. If psychologists were in charge, I fear we would have been too focused on developing new names for roughly the same surgical techniques in lieu of the modern chemo cocktails or immunotherapy breakthroughs.

There are lots of other reasons why. People dying is more obvious than people living crappy, miserable lives for decades on end, for one. Cancer gets (oodles) more money, for another. We can't blame it all on those though.

Either way, my point was certainly not intended to come across as "breast cancer is a solved problem." More so, "Look at these great numbers oncologists could use to advertise how effective they are...even our most-hyped, heavily-p-hacked, 'will never replicate' numbers won't come close...why are we resting on our laurels more than they are?"
 
I disagree on this point. I feel like I keep hearing this from psychologists and it drives me nuts. For some disorders, very broad disorders like depression, yes a lot of things work. Or, doing anything will make you less depressed. For more specific disorders that are more severe (e.g., eating disorders, BPD with recent suicidality), we have strong evidence that certain treatments do not work as well as others.
I can't speak extensively to the specifics of BPD or eating disorders (they're not my area of focus beyond their overlap with the broad domain of therapy effectiveness/process), much less with recent suicidality as a specific subsection of work for BPD. Given that, I don't know how much truly comparative work has been done between modalities even within those, but the reason this is said so often is because it is the rule rather than the exception. In general, if you line up different treatments you get equal results on outcome measures. There can be exceptions, but they are the exception, not the rule. The way this doesn't work is by invoking moonbeam treatments (running between trees, analytic perspectives, other far-out-there stuff, etc.) or comparing complex TX needs with basic TX (e.g., wrap-around services with multiple types of services vs traditional 1x/week therapy). For the purposes of the topic in the thread (e.g., PTSD treatments), there just isn't research to support meaningful differences between modalities. And this is true of most disorders and most client needs. Take for instance research on BPD outcomes - a recent meta-analysis comparing DBT to psychodynamic therapy found nearly equal effect sizes (DBT slightly less favored, although the .10 difference in ES doesn't suggest a meaningful difference to me). This also all circles back to what is being measured and the timeline you are measuring it for.

Again, I have to disagree. The reason to use CPT or PE is b/c we have a theory (experiential avoidance leads to problems) for them that is better than other theories (eye movements reprogram memories or unconscious libido). The reason to use these as first-line treatments is b/c they have sound, scientifically supported theories AND empirical evidence saying they work better than controls (apparently, WL and TAU).

Of course that doesn't mean we should stop trying to improve on the treatments and developing new ones that may be more efficacious.
You can't have competing mechanisms that produce the same result and tell me that the mechanism should be targeted as a specific ingredient. I've yet to see studies substantially supporting a specific theory in terms of identifying a single-mechanism explanation for anything (we could get into EMDR, but largely it focuses on exposure anyway so I'm not sure that makes the best case - plus there are all the issues with research surrounding treatments like that). We are a long way off from being able to substantiate any claim that X leads to Y in terms of treatment outcomes (I say this as someone conducting an effectiveness study on PE and CPT). Having a theory of change is different from having robust evidence for that theory which cannot be interpreted six other ways (see my comment above about behavioral activation research not lining up with theory).
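(One footnote on the .10 effect-size difference mentioned above: a quick sketch of why a gap that small is usually well inside the noise of typical trial sizes. The formula is the standard large-sample approximation for the standard error of Cohen's d; the group sizes are assumptions for illustration.)

```python
import math

# Approximate standard error of Cohen's d for two groups of size n1 and n2:
# SE(d) ~= sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
def se_cohens_d(d: float, n1: int, n2: int) -> float:
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

# With a hypothetical 50 patients per arm and d around 0.8:
print(round(se_cohens_d(0.8, 50, 50), 2))  # ~0.21, i.e., twice the .10 gap
```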
 
@Justanothergrad
Unfortunately, it’s late and I won’t be able to respond in detail till Friday. In the meantime, I will say that there is very good evidence indicating some treatments are better than others. I am happy to provide empirical evidence for my assertions in detail later. For now, I can easily speak to the DBT meta you mentioned.

Those are not head-to-head trials they are examining. So, comparing average effect sizes across different trials of different treatments can be tricky. DBT was not designed as a treatment for BPD but instead as a treatment for multi-problem, difficult-to-treat, suicidal individuals. It is in those types of studies that you see the superiority of DBT. No one would expect better outcomes in BPD alone. So, of course the effect sizes would even out if those psychodynamic trials aren’t treating suicidal individuals.

A better study is Linehan et al., 2006, which is a tightly controlled, high-internal-validity trial. It examines DBT vs non-behavioral community experts (basically meaning psychodynamic practitioners who are considered suicide experts). After a year of Tx and a year of follow-up, DBT had 50% fewer suicide attempts. Talk about a significant real-world, clinical effect. Who would you want your family member to see if they were suicidal and had BPD?

Of course it would be great to replicate the study but that’s a costly enterprise.

Another study recently examined DBT vs a manualized supportive therapy for teens at high risk for suicide. DBT was superior in self-injury reductions at end of Tx, but there was no difference at follow-up.

The meta you cite is simply answering a very different question, which obfuscates the superior efficacy of DBT in treating very serious and life threatening problems.

And as you ask, well how does it do that (mechanisms)?
It appears (based on other research as well) that DBT is effective for suicidal behavior b/c it relies on expert suicide practitioners who keep their clients out of inpatient units.
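(A side note on the "50% fewer attempts" figure: a relative reduction like that translates into very different absolute effects depending on the base rate. The rates below are hypothetical, not data from Linehan et al., 2006.)

```python
# Same 50% relative reduction, very different absolute impact at different base rates.
for control_rate in (0.40, 0.10):       # hypothetical attempt rates in the control arm
    treated_rate = control_rate * 0.5   # 50% relative reduction
    arr = control_rate - treated_rate   # absolute risk reduction
    print(f"control {control_rate:.0%} -> treated {treated_rate:.0%}, NNT ~ {1 / arr:.0f}")
```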
 
As I said, I'm not a DBT expert and entering into an extensive debate on that isn't something I'm willing to do because the extensive literature surrounding that treatment is outside my expertise. That said, evidence for a long list of treatments that outperform other treatments is far from strong. D12 has their EST standards and, time and time again, evidence shows that those treatments perform roughly equally to others in most cases (heck, a transference-based treatment is on the list and described on the EST website as having performed equally to DBT in one study; Transference-Focused Therapy for Borderline Personality Disorder | Society of Clinical Psychology). Are there exceptions? Sure, I'm certain of it, and I'm all in favor of supporting those in cases where they robustly exist. The problem is that the robustness of evidence is, at the very least, thin. Here is the real issue to me: for every study showing superiority of a given treatment, there are just as many showing other treatments being superior or equal to that treatment. Instead, the rule is that for 95% of problems and 95% of clients, treatments produce roughly equal effect. Or, at the very least, that there are a variety of treatments using very different assumptions/mechanisms of action and these produce equal results in most cases. The need to identify more treatments that do the same thing has led to a trend in 'designer CBT', rather than actual innovation in therapy process/understanding.

Along with that, we have not identified actual mechanisms of change associated with recovery to predict outcomes (see BA research, see ruminative thought research, etc.; see Kazdin et al., 2007). If the assumption is that X theory makes people better (CPT, DBT, etc.) in a manner that is different than treatment Y or Z, then surely we should have some reliable indicator of why that is if this is such a robust and clearly indicated finding specific to the things being done (i.e., what aspect of treatment unique to that therapy is causing improvement). Mechanistically, if the component is not unique, then neither should its impact on outcomes be, and, as such, neither should any one treatment stand out.

Of note here, I don't weigh publications/evidence from the people/lab who benefit from a specific treatment/theory/approach as strong comparative evidence - I've seen those same people selectively approach research far too often (not in reference to Linehan specifically). Replications from independent labs and researchers are extremely important to me for any sort of 'X demonstrates superiority to Y' argument. All of this still generally relies on only one metric of measurement for client outcomes (symptom reduction). There are also measurement implications to this focus.
 
One thing I want to point out too with DBT is feasibility. The way it's delivered in the studies is far different from how it's delivered in "the real world."

I guess that's also one thing the study in the OP was also arguing--the treatment works on paper, but in real life it often doesn't play out the same way (no disagreement there). I was astonished by that figure about how many VAs are actually providing PTSD EBPs.

Also, thanks for your comments, everyone. I also felt like the research article's narrative wasn't really supported by the data, and that the Military Times article REALLY misinterpreted the research.
 
Yeah, I agree re: efficacy vs. effectiveness. This is true of most therapy research, sadly. Between the exclusion criteria (excluding suicidal patients in PTSD trials, for instance) and the strict controls on how therapy is done, it doesn't match how those same therapies are delivered in the real world, in my experience, save for a few select settings.
 
You can't have competing mechanisms that produce the same result and tell me that the mechanism should be targeted as a specific ingredient.

Perhaps a minor point in the grand scheme of this discussion (or I am missing your point), but sure you could. Our diagnosis system is flawed and our associated outcome measures are also flawed. All paths can lead to Rome and even take a not-statistically-different amount of time to get there. I think this is especially true when we are at poor-to-average recovery numbers and not curative ones. Just because there are two competing approaches that produce a 20% decrease in symptoms doesn't mean their purported mechanisms are wrong and that you couldn't get a 30% reduction if you take an active ingredient from each of them. That was essentially my earlier point. Of course, it doesn't mean you necessarily would see that either, but it is certainly logically possible. We just don't know because studies attempting it are ungodly rare.

Part of the issue is also that mechanism studies are still comparatively rare and the fact that many are even more abstract than our outcomes (e.g. "experiential avoidance" is a damn tough construct to assess) creates even more potential for these measures to be flawed.

That said, if you were referring specifically to PTSD treatments it may well be the case. I try to keep tabs on the literature, but it's definitely not my main focus.
 
Agreed all around. The flow of my point goes something like this -

W treatment "causes" Y outcome
Similarly, X treatment also "causes" Y outcome
Lo and behold, Y treatment also "causes" Y outcome in the same way
We make a new treatment and yet again, Z treatment gets us there too
If we give more of X, Y, or Z as part of X/Y/Z treatment then we get somewhat of a dose-contingent response
But if we combine X and Y ingredients together we don't necessarily get an X+Y response


Saying we should only do W or X treatment seems to ignore that all roads lead to Rome, as you said, or that we don't know causal mechanisms. I'm not against people using any of those evidence-based approaches (quite the opposite, actually), but making restrictions like "you SHOULD use X and NOT Y" seems a bit more than problematic given what we know we don't know. I actually don't think any of the mechanisms are wrong - I think simplifying them to a single mechanism likely is.


On a related note to the OP, I (literally as I typed this) just got an email from a VA PTSD clinic where I am running effectiveness studies about the JAMA article by Steenkamp (2020) about ESTs and the broader discussion from VHA leadership about whether it's a good idea to offer alternatives to these ESTs.

I'll highlight a bit below as it relates:
In contrast to prior investigations, the more recent trials of PTSD treatment have had a greater emphasis on combat exposure instead of sexual trauma, used active comparison groups, and examined active duty personnel treated in garrison, rather than only veterans.

...

In all these trials, active treatments (PE, CPT, PCT, sertraline, and transcendental meditation) were not significantly different in all direct comparisons of clinician-administered primary PTSD outcomes. Neither PE nor CPT (individual or group) demonstrated clear superiority over non–trauma-focused PCT, a finding consistent with prior trials in civilians and veterans. PE plus placebo, sertraline hydrochloride, and PE plus sertraline hydrochloride were comparably efficacious,[5] and transcendental meditation was found to be noninferior to PE.[6] Individually administered CPT significantly outperformed group-delivered CPT.[3] Although outcomes were statistically comparable across the disparate treatments, notable differences were observed in treatment dropout, particularly for PCT vs other conditions, with individuals receiving PCT demonstrating less dropout. With the exception of massed PE, rates of treatment noncompletion for trauma-focused therapies and transcendental meditation ranged from 25% to 48%, compared with 12% dropout for those receiving individual or group PCT.[1-6] Massed PE, likely because it could deliver a full dose of treatment during a shorter time period, showed comparable rates of noncompletion as PCT (14%).

Overall, these new findings suggest that first-line psychotherapies do not effectively manage military-related PTSD in large proportions of patients and do not outperform non–trauma-focused interventions.
 

I was wondering how many of the studies this article examined were combat vs. military sexual trauma, as we know that PE and CPT work very well for the latter population. I feel like the article could have explained that better. As I've mentioned elsewhere on SDN, I work primarily with MST, so that's important for me to know.
 
The treatments cited from STRONG STAR are largely for combat-related trauma.
 
I f'in peeked but will come back to this tomorrow.

@Justanothergrad that D12 list is poorly assembled:
LINK

Not all evidence is equivalent. If we look at lazy studies, we get lazy results.

@cara susanna while D&I are an important issue, that is separate from efficacy. I KNOW that there are bad/fake/poor versions of every treatment out there (especially DBT). I can call myself an ACT therapist if I wanted to without any real training or oversight. Important issue, relevant to the topic, but I am arguing that we do have evidence that some treatments are better than others. We also have lots of bad metas and poor RCTs.

Will come back tomorrow!!!
 
Really enjoying this discussion! More generally addressing the overall picture, the Dodo bird argument's purpose is not to encourage psychologists to use whatever interventions they can come up with willy-nilly (i.e., the "woo-woo" stuff discussed in here), but simply to make the point that adhering to a strong theoretical framework and having the common characteristics across therapies (alliance, emotional expression, etc.) help people even more than a specific intervention/ingredient does. Practitioners who are using the Dodo bird argument to come up with their own interventions that they use indiscriminately (or failing to use any grounded theory) are missing the point completely.
At the same time, for some disorders, we have treatments that are reasonably effective over "treatment as usual," which is helpful in context of treating those particular disorders, but there are issues relating to the translation of those heavily-controlled environments to actual practice (as @Justanothergrad mentioned). I think we can make use of both, but there's still more to it.

Psychology is a relatively new field from a very narrow cultural viewpoint, so I think it makes sense that we're having some bumps. There's room for improvement when so many of our concepts are ambiguous and hard to study (e.g., the unconscious, trauma, experiential avoidance, etc. - we're trying to operationalize as many concepts as we can), combined with the sometimes non-linear progress experienced in therapy and client factors that contribute to outcomes, like motivation, insight, openness, etc. Our field has tried to tackle a HUGE cluster of issues that are both internal and external in cause and address them all via "therapy." We're definitely not perfect, but my hope is that we'll continue to better understand all factors involved, because multiple viewpoints and approaches are needed. I think we will also continue to operationalize further concepts in our field that will help expand our treatment options, but there will always be limitations to boxing certain concepts in.

Being familiar with the literature of therapist mastery, I think we should also be looking more deeply at the traits/characteristics of effective therapists (Skovholt & Jennings qualitative studies) and furthering that line of research (branch over into quantitative if we can more clearly agree on and expand upon the qualities as a field). I think there’s more here to examine. Therapist factors are estimated to affect only ~10% of client outcomes, but I wonder if we might be underestimating the importance of particular therapist factors like interpersonal skill (or emotional intelligence, depending on how you frame it), etc. And the nebulous idea of goodness-of-fit between therapist and client, which is important in practice, but hard to pin down (is it a common factor or is it based in part on the characteristics of the therapist AND client characteristics?).

I think common factors, EBPs for specific disorders, client characteristics, therapist traits/characteristics, and the actual components used from session to session are all pieces of the treatment puzzle that we could be looking at to get a sense of why and how therapy works. I don't have easy answers to specifics, though. And given how long my list is of what goes into therapy, it makes sense to me why we are struggling to pin down exactly which mechanisms work in isolation--because there are so many aspects/factors that work in tandem.
 
Thoughts on this thread:

First, there's nothing so practical as good theory -- Second, here's a link to an article on issues of statistical power in the treatment outcome literature.

I agree with others that RDoC seems to be a step in the right direction, although almost anything seems like a step in the right direction relative to the DSM.

Personally, the interventions I implement are most commonly grounded in basic behavioral science (e.g., ACT, BA, exposure-based therapies). While placebo-y and common factors-y type stuff is also definitely at play, I have reason to believe that the behavior change I achieve via psychotherapy can be attributed, at least in part, to the behavioral principles I implement (e.g., exposure, reinforcement, extinction). I am confident that my use of ESTs renders me a "better" (i.e., more effective, efficacious, and/or efficient) therapist than if I relied solely on placebo effects and common factors.

Also, if a particular intervention fails to yield behavior change, then I can reflect on which mechanisms of that intervention were ineffective, why they were ineffective, what that might mean about my conceptualization, and what subsequent steps I need to take. Without a strong theoretical rationale to justify why I have selected a particular EST, I would just be randomly cycling through "EST for x" one after the other.
 
I'm not sure I follow, are you equating common factors variance and placebo effects?
 
I'm not sure I follow, are you equating common factors variance and placebo effects?

Kind of... I meant that some patients will show signs of improvement because of the "non-specific" aspects of an intervention (e.g., common factors, placebo), while others will not. I view psychotherapy at the doctoral level as being designed to better meet the needs of patients who do not benefit from the non-specific aspects alone.

Despite the presence of methodological issues (e.g., inadequate power to detect small differences between TAU and novel interventions) within a lot of the treatment outcomes literature, most of the interventions I implement are derived from basic behavioral science (e.g., single case design). Because of this, I feel more confident in understanding “why” a particular intervention (e.g., differential delivery of positive reinforcement contingent on patient engagement in meaningful events) should yield a particular result (e.g., improved mood). If my intervention fails to achieve its intended result, then I either failed to adequately implement the intervention (e.g., maybe the "reinforcement" I provided didn't actually function as a reinforcer) or my conceptualization was off (e.g., maybe the patient is experiencing cognitive fusion that would be better addressed by an acceptance/exposure-based intervention). In the case of the former, I can tweak my intervention, and in the case of the latter, I can re-conceptualize the case (and also my intervention) -- IMO, this whole process of therapist trial-and-error learning builds on psychologists' expertise in psychological assessment, which sets us apart from other MH providers.

However, with that being said, as an individual clinician, I don’t really have a great way of knowing which aspects of my interventions are carrying the most weight. So, who knows? Maybe my use of ESTs is no more effective than if I were to provide everyone with some vague form of generic psychotherapy, so long as I maintained a decent frame and alliance. Based on theory and my understanding of the empirical literature though, I don't believe this to be the case. Even if I wanted to empirically test this hypothesis, I would need a large sample to detect the relatively small difference that's likely to exist between two active treatment conditions (e.g., EST vs. non-EST; this point is discussed in the Cuijpers article that I linked in my previous post).
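To put a number on that, here's a quick back-of-the-envelope power calculation (my own sketch, using the small between-treatment difference of d = 0.24 mentioned above and conventional alpha/power values):

```python
# Sketch: participants needed per arm to detect a small between-treatment
# difference of d = 0.24 at alpha = .05 with 80% power (two-sided).
# Illustrative only; the effect size is the one cited above, not mine.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.24, alpha=0.05,
                                        power=0.80, alternative='two-sided')
print(f"n per arm: {n_per_arm:.0f}")  # ~273 per arm, so ~546 total
```

That's far larger than the typical psychotherapy RCT, which is the point of the Cuijpers piece.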

Also, maybe my conflation of common factors and placebo effects into a single “non-specific” effects variable is overly simplistic? I'm not trying to dismiss the importance of these variables; rather, I'm implying that ESTs should be effective for either a larger audience than that reached by non-ESTs or, at the very least, for those individuals who do not adequately respond to non-ESTs.

I’m happy to see so much interest in this discussion — This is a topic I’m not particularly well read on but am very interested in.
 
Let us, for the moment, discuss the superiority of some treatments over others in relation to efficacy. Effectiveness is obviously important but prior to debating effectiveness (external validity), we need to establish efficacy (internal validity). So, do some treatments work better than others (or, do some treatments have stronger efficacy than others)?

Add to that the hint of general treatment equivalence across other non-PTSD focused therapies
It begs the larger question, if differences don't generally exist across treatment types
The research on shared outcomes across treatments is fairly robust without moving into fringe therapies that I would hardly consider treatment
In general, if you line up different treatments you get equal results on outcome measures. There can be exceptions but this is the exception, not the rule. The way this doesn't work is by invoking moonbeam treatments (running between trees, analytic perspectives, other far out there stuff, etc.) or comparing complex TX needs with basic TX (e.g., wrap around services with multiple types of services vs traditional 1/week therapy).
That said, evidence for a mass list of treatments that outperform other treatments is far from strong. D12 has their EST standards and, time and time again, evidence shows that those treatments perform roughly equal to others in most cases
Are there exceptions? Sure, I'm certain of it and I'm all in favor of supporting those in cases where they robustly exist. The problem is that the robustness of evidence is, at the very least, thin. Here is the real issue to me: For every study showing superiority of a given treatment, there are just as many showing other treatments being superior or equal to that treatment. Instead, the rule is that for 95% of problems and 95% of clients, treatments produce roughly equal effect.
Those quotes, which I do not believe I am taking out of context, sound like the Dodo Bird argument (with the caveat that we are excluding more fringe treatments [we won't specify fringe-ness for this convo]).

The arguments against the Dodo Bird have already been articulated better than I can (check out the Lilienfeld article):
www.abct.org/docs/PastIssue/37n4.pdf

If the Dodo Bird is too loaded a term, let us use this quote from Wampold (cited in the above link):
[treatments] that are intended to be therapeutic, are delivered by competent therapists, have a cogent psychological rationale, and contain therapeutic actions that lead to healthy and helpful changes in the patient’s life.
These are the therapies we are arguing over. Agreed?

I will make it even simpler, let us talk about the most common orientations:
Psychodynamic/analytic
Humanistic (and similar)
CBT
Biological/Medication (I think it would be a shame to exclude a distinctive and common treatment from this conversation. It helps inform our questions and has a lot of research).

I think, at the end of the day, WE CAN say that some are superior to others. I'll start by discussing the flawed logic that underlies the Dodo Bird argument or, as you first stated it, "general treatment equivalence."

The biggest error made in comparing treatments is the use of overly broad evaluations. Lilienfeld discusses this as the choice between
(a) a main effects hypothesis or (b) an interactional hypothesis
The main-effect hypothesis is the idea that if I took every disorder and combined them, I would see no difference between treatments. I agree with this statement. HOWEVER, this is a poor question. It is like asking, does rain help plants grow? A little rain doesn't, too much rain (flooding) is harmful, but the right amount does help. If I examined the question too broadly, I would find no effect of rain on plant growth. This is the main-effect argument, and something of a straw man (since I do not believe this is what people actually mean, though it is used to support their a priori beliefs). In the health arena, it is like asking whether antibiotics are good for every possible ailment. This is the Shedler argument for the efficacy of psychodynamic treatments (LINK). Simply speaking, collapsing across disorders to compare efficacy is a flawed methodology. The common/non-specific factors perspective is similarly flawed (will discuss later, maybe).

The real question: is there an interaction between diagnosis (or a more specific problem) and treatment?
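As a toy illustration of why the main-effect framing misleads (a hypothetical sketch, not data from any study): imagine two disorders and two treatments, where each treatment only works for one disorder. Collapsed across disorders, the treatments look equivalent; the interaction shows otherwise.

```python
# Hypothetical main-effect vs. interaction demo: each treatment helps
# exactly one disorder (d = 0.8 in the matching cell, d = 0 otherwise).
import numpy as np

rng = np.random.default_rng(2)
n = 5000  # patients per disorder-by-treatment cell

def improvement(treatment_matches_disorder):
    mean = 0.8 if treatment_matches_disorder else 0.0
    return rng.normal(mean, 1.0, n)

cells = {("disorder_A", "tx_1"): improvement(True),
         ("disorder_A", "tx_2"): improvement(False),
         ("disorder_B", "tx_1"): improvement(False),
         ("disorder_B", "tx_2"): improvement(True)}

# Main effect: collapse across disorders -- both treatments look equal (~0.4)
for tx in ("tx_1", "tx_2"):
    pooled = np.concatenate([v for (dis, t), v in cells.items() if t == tx])
    print(f"{tx} pooled mean improvement: {pooled.mean():.2f}")

# Interaction: the cell means reveal the true treatment-by-disorder pattern
for (dis, t), v in cells.items():
    print(f"{dis} x {t}: mean improvement = {v.mean():.2f}")
```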

More coming...
 
As I mentioned before, let us start with the clinical problem most often cited as showing a lack of a treatment-by-disorder effect (i.e., a lack of differences in treatment efficacy): depression. I teach my students and tell my friends/family that dozens of treatments are - broadly speaking - equally efficacious in treating depression. This is primarily due to the heterogeneity of the disorder. Now, let us address your second point, about mechanisms.
If we don't know the mechanism we are targeting, perhaps its a bit too early to focus on what we need to do to fix the car.
having research identify mechanisms that reliably work and which we can explain/target seems critical. We simply don't have that, so putting specific TXs as standouts seems odd to me because it assumes we know something about those treatments that we don't (i.e., why they work / that they work differently than others).
I'm yet to see studies substantially supporting a specific theory in terms of identifying a single mechanism explanation for anything
Along with that, we have not identified actual mechanisms of change associated with recovery to predict outcomes

There are more but this is enough quoting

Each of our 4 orientations theorizes different etiologies/pathologies and, in turn, different therapeutic mechanisms. I will simply/broadly highlight those mechanisms (let us not nitpick; this is meant to be broad):

Psychodynamic/analytic: unresolved unconscious conflicts; resolved through a relationship with therapist (perhaps the one I understand least?)
Humanistic (and similar): lack of self-actualization; unconditional positive regard
CBT: learning/cognitions create maladaptive behaviors; new learning and new thoughts
Biological/Medication: biological malfunction; alter biology

What I mean by heterogeneity of depression is that depression is caused by a **** TON (damn SDN trying to censor me) of reasons. Numerous pathways to the disorder, numerous pathologies. We know that CT, BT, psychodynamic therapy, meds, and exercise all have relatively similar efficacy. The problem is that we use these treatments as panaceas. Thus, our relatively small effect sizes (~d = .24; see the Cuijpers article posted above). Here is the crux of the argument:

People get depressed for a myriad of reasons: daily hassles, existential concerns, social isolation, depressant substances (e.g., certain meds, alcohol), sedentary lifestyles, learned behaviors, maladaptive thoughts, traumatic events, seasonality (I could go on for a long time). So it is true that, on average (in very broad examinations), treatments will all show similar outcomes. This is due to the dilution of the therapeutic "signal" among the heterogeneous etiologies ("noise"). For example, on average, exercise is as good as most other treatments. However, if I were working with an ultra-marathoner with depression (which I have), the idea that this person is depressed b/c of a sedentary lifestyle is asinine. Having this person exercise more will not produce a therapeutic effect (from the exercise itself; any apparent effect can be attributed to other factors [expectations, time, common factors, etc.]). So comparing treatments in a sample of people with heterogeneous pathologies leading to depression will obfuscate the effect of any individual treatment. More specifically, if I run a trial of exercise for depression but my sample mixes etiologies (e.g., people whose depression is mainly existential, plus sedentary folks, plus those who consume too many depressants), the overall effect of exercise will be reduced. This is an example of how some treatments are indeed better than others (theoretically) if we could parse out the etiological differences (I don't think anyone is doing this kind of work). Simply speaking, exercise is great for depression, but not for people who are depressed and already exercise a lot. That subset likely needs a different treatment, chosen after understanding the cause of their depression (maybe social, maybe existential).
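Here's a toy simulation of that dilution (my own sketch; the subgroup share and effect sizes are made up): a treatment with a strong effect (d = 0.8) that only works for the 30% of the sample who are sedentary produces a full-sample effect right around the small overall d noted above.

```python
# Dilution sketch: a hypothetical treatment that helps only the sedentary
# subgroup (d = 0.8 there, d = 0 for everyone else). Mixing etiologies
# shrinks the observed full-sample effect toward the small overall d.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sedentary = rng.random(n) < 0.30                     # 30% sedentary subgroup

control = rng.normal(0.0, 1.0, n)                    # symptom change, no tx
treated = rng.normal(0.0, 1.0, n) + 0.8 * sedentary  # benefit only if sedentary

def cohens_d(a, b):
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

print(f"d, full mixed sample:  {cohens_d(treated, control):.2f}")  # ~0.23
print(f"d, sedentary subgroup: {cohens_d(treated[sedentary], control[sedentary]):.2f}")  # ~0.80
```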

Now let us look at a less heterogeneous disorder: anorexia. We can safely conclude there isn't a single pathology for anorexia, but it is definitely less heterogeneous (or more homogeneous) than depression (theoretically). Since there are fewer potential pathologies, we should see fewer effective treatments (among those that rely on the theorized pathologies). Lo and behold, there is no medication (or other biological intervention) that will change a person's eating behavior (this is not to say there is no biological component to anorexia, only that any biological component is superseded by an alternative pathology). What works best? Family Therapy (a term often used broadly for any treatment incorporating families but, in this case, a specific treatment for youths with AN).
LINK 1
LINK 2

For adults with AN, nothing works great. But for bulimia, we do have some evidence that certain treatments are worse. For example, Lilienfeld discusses Poulsen et al. (2014), an RCT that stacked the odds toward a psychoanalytic treatment. Yet the results fell heavily in favor of CBT.

The research on etiologies for disorders is poor for a few reasons. Mainly, we cannot do RCTs on what causes anorexia (or any disorder); that would be blatantly unethical. Furthermore, these need to be longitudinal studies, which are very costly. We often must infer etiological causality. But we can definitely rule out some theories (e.g., no support for the chemical imbalance/serotonin theory of depression; the 5-HT transporter gene theory has been debunked; Collaborative meta-analysis finds no evidence of a strong interaction between stress and 5-HTTLPR genotype contributing to the development of depression).

You further point out that any treatment difference is an anomaly rather than the rule. So far, I have outlined why this isn't true for youth AN and BN. Similarly, I already highlighted why it isn't true for multi-problem suicidal individuals (youth and adult). For depression, the apparent equivalence is a consequence of heterogeneous etiologies. Let us move on to (hopefully our last set of disorders) anxiety.

Still more, have I lost everybody now?
 
For anxiety, we have done a better job of parsing out the types of anxiety (e.g., phobias, panic, OCD, somatic illnesses), which (theoretically) narrows the putative etiologies for those disorders (though there is a lot of comorbidity among the anxiety disorders).

For a long time, we have had strong evidence that CBT with interoceptive exposure is superior to medication for panic (e.g., Gould et al., 1995; Otto et al., 2001).

Do we need to argue for using psychodynamic or humanistic treatments for specific phobia? We have strong evidence that non-exposure-based active treatments are inferior to CBTs with exposure (https://labs.la.utexas.edu/telch/files/2015/02/Psychological-Approaches-In.pdf). I have never found a direct comparison of the two, and there are far more studies of CBT than of psychodynamic therapy.

Psychodynamic for OCD? Again, seemingly no one is even trying to do that (An update on the efficacy of psychological therapies in the treatment of obsessive-compulsive disorder in adults).

CBT vs. psychodynamic for GAD and social anxiety: they seem to perform about evenly. More importantly, do we even have comparative (or well-controlled) trials of humanistic therapies for these disorders?

That was a short one. One final post, I promise. Then I will f off for a while. Spent half the day doing this.
 
Here are my limitations: I am OBVIOUSLY BIASED. I definitely had confirmation bias and cherry-picked studies. Can't argue that. But there are some major take-aways:
  • Etiologies and mechanisms are vastly understudied (in comparison to Tx outcomes).
  • Evidence must be carefully examined; not all RCTs are equivalent.
  • Theories of treatment outcome are insufficient; we need well-controlled studies. Lacking alternative trials, we need to rely on the available literature (unless we have strong counter-evidence).
  • There are definitely some treatments that do better than others when maximizing internal validity (e.g., do you want psychodynamic therapy for your child with AN, or non-behavioral community providers for your family member with suicidality/BPD?).
  • To say there are no differences (or only marginal differences) goes beyond the data, especially when we amp up the severity of the disorder and reduce the heterogeneity of etiology/pathology.
  • Is there a lot of equivalence? Yes. Is there some superiority? Yes! Is it 95% vs. 5%? I don't think so (or anywhere close to that lopsided).
  • Effectiveness & common factors are an argument for another day (we have not a single controlled trial of the common factors theory; it is all correlational: The Role of Common Factors in Psychotherapy Outcomes. - PubMed - NCBI. You would think the champions of this would run a single RCT to prove their point).
So, what do we do?

First, psychotherapy is a health field that is only about 100 years old and has been mired in Freudian thinking for much of that time. Do we think the first 100 years of other health treatments were better? Trepanning and bloodletting were around for thousands of years. Did surgeons throw in the towel and just accept infection/death as a common outcome? Who needs anesthesia! Dropping the ball now--saying the treatments are all equal, or that the placebo effect is enough--is counter-productive to future advances. To better understand this, we need more research, more science.

And, my final point: empirical evidence can easily be biased. As time marches on, we develop better ways of doing science. For example, the DIV 12 list was good for the '90s. Now we have more novel methods to address these issues (e.g., Tolin, McKay, Forman, Klonsky, & Thombs, 2015; the Grading of Recommendations, Assessment, Development, and Evaluations [GRADE] approach [Guyatt et al., 2008]).

When we compare treatments, let's look at the scientific basis of the treatment (not simply the empirical outcomes). Acupuncture, Thought Field Therapy, and other similar treatments have evidence of efficacy, but the ideas of meridians and Qi are not scientifically validated concepts (let's not forget unconscious conflicts or eye movements). Sure, we may currently be limited in our scientific understanding, but to eschew all of it is also ineffective. Let's not use non-science, pseudoscience, or untested therapies until the science progresses. Some treatments work better b/c we can at least say they are rooted in science; others b/c we have little or no empirical evidence for an alternative. Finally, some have both the scientific theory and the empirical backing (the good stuff). We don't stop just b/c those Tx are slightly better than others; we keep looking for more knowledge.

Finally, debating in this manner is highly ineffective. We know people tend to take up stronger positions once presented with counter-evidence. Frankly, the previous paragraph is the argument that I believe holds the most sway.

Alright, I am off for a while. Happy to read any response (not today).

Damn it, just spent 30 minutes editing and fixing typos. For realz, now!
 
You articulate the points against it well. I don't think it fully summarizes or accurately represents the full scope of the literature - I'm sure mine doesn't either. Like you said, this is complex. Also, hats off: you spent way more time than I have or will on any response on here; I limit my time invested to around 5-10 minutes. There is certainly nuance, and I am not a hard-core common factors man - but I do see treatment equivalence more often than I see discrepancies for most presentations/people. That isn't to say that everyone needs the same thing or that there aren't differences across disorders - there are, AND there is a lot of client-level variability that gets washed over in the whole EBT/EST debate. Heck, what we target, what timeframe is measured, etc. To be clear, I actually fall somewhere in the middle of the road on this whole debate.

- Agreed about mechanisms and etiology. We basically don't know anything about (1) why things happen diagnostically or (2) what makes treatments work. That's part of my hesitance about calling any specific thing the 'go-to'. It restricts us as a science when we can't reliably explain either of those two things. People are probably far more complex than single, simple factors, and it's harder to loosen restrictions on action than to avoid restricting in the first place.

- The argument that common factors proponents should do an RCT cuts both ways (it also misses the question of whether all change should be measured by symptom report, but that's another issue about whether RCTs are the gold standard). We have good measures of working alliance, and the same RCTs supporting specific factors include those measures in their studies. It's the difference between ANOVA and ANCOVA: no one includes the covariate because no one wants to see the results (I say that half cynically). On that note, I saw this exact topic come up and get dismissed at the last PTSD conference I attended (they had the covariate and mentioned it, then excluded it entirely) while also saying "everyone knows working alliance is important to change and outcomes." If we know something is a factor in therapy, it seems silly not to address it.
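A minimal simulated sketch of that ANOVA/ANCOVA point (all names and numbers are hypothetical, not from any trial): when alliance differs across arms and drives outcome, the apparent 'specific' treatment effect shrinks once alliance enters as a covariate.

```python
# Hypothetical ANOVA-vs-ANCOVA demo: outcome depends on treatment AND
# working alliance, and alliance happens to be higher in the EST arm.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
treatment = rng.integers(0, 2, n)                  # 0 = TAU, 1 = EST
alliance = rng.normal(0, 1, n) + 0.5 * treatment   # alliance differs by arm
outcome = 0.3 * treatment + 0.6 * alliance + rng.normal(0, 1, n)
df = pd.DataFrame({"treatment": treatment, "alliance": alliance,
                   "outcome": outcome})

anova  = smf.ols("outcome ~ C(treatment)", data=df).fit()             # covariate omitted
ancova = smf.ols("outcome ~ C(treatment) + alliance", data=df).fit()  # alliance partialled out

print(f"Treatment effect, ANOVA:  {anova.params['C(treatment)[T.1]']:.2f}")   # ~0.6
print(f"Treatment effect, ANCOVA: {ancova.params['C(treatment)[T.1]']:.2f}")  # ~0.3
```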

- Generally, RCTs are vastly underpowered to find any of their claimed between-treatment results, which doesn't make the above point any easier to address. It poses great problems. There are also a variety of other methodological issues with RCTs that I disagree with conceptually (e.g., efficacy vs. effectiveness, restricted targets for specific types of problems, campyness, etc.).

- Personal pet peeve: symptoms are measured badly during RCTs. It drives me utterly nuts. That's entirely an aside, but... UGH.

- I wouldn't classify moonbeam therapy or acupuncture as therapy at all. I think it's interesting that, despite how many times I say this, it continues to be a strawman I'm faced with, as if my point isn't evidence-based. You don't need to tell me those don't work. I also don't classify 'eat ice cream therapy' as a valid comparison for any RCT.

- There are issues in RCTs that get glossed over (dropout rates, for instance). If you can achieve a 1.0 ES with a 40% dropout rate, or a 0.5 ES on the same measure over the same timeframe with a 10% dropout rate, which do you pick? Do you roll the dice as a clinician on a 40% chance of dropout? That becomes a critical issue in the debate over 'what is better' (obviously a made-up example). When we treat individuals, assuming they behave like the group average has flaws.
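Running those made-up numbers as a crude expected effect per client who starts treatment (assuming, strongly, that dropouts gain nothing):

```python
# Crude intent-to-treat arithmetic for the hypothetical example above:
# weight each completer effect size by the probability of completing.
treatments = {
    "high dropout, big effect":  {"dropout": 0.40, "completer_es": 1.0},
    "low dropout, small effect": {"dropout": 0.10, "completer_es": 0.5},
}
for name, t in treatments.items():
    expected_es = (1 - t["dropout"]) * t["completer_es"]  # dropouts gain 0
    print(f"{name}: expected ES per starting client = {expected_es:.2f}")
# 0.60 vs. 0.45 -- the 2x completer advantage shrinks once dropout is priced in
```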

Anyway, off to the D12 midwinter meeting. Have a good weekend.
 