New Meta on Behavioral Activation


DynamicDidactic

For anyone interested:

Apparently, BA does not beat out active controls. I am sure the common factors folks will love this. However, I caution that camp not to conflate studies of depression and anxiety with more severe problems.

Interesting. I can't open up the link right now. Anyone know how they operationalized 'active control'?

"Active control" covers the conditions in the included studies in which participants received a treatment, such as CBT, CT, or mindfulness; "inactive control" is waitlist or placebo.
 
"Active control" covers the conditions in the included studies in which participants received a treatment, such as CBT, CT, or mindfulness; "inactive control" is waitlist or placebo.

I haven't had a chance to pull the article yet, but CBT and CT for depression both include behavioral activation, so it makes sense to me that they would perform similarly well to standalone BA.

In fact, if "BA + extras" (e.g., CBT, CT) performs similarly well to standalone BA, then, to me, that supports the notion that BA is one of the (if not 'the') active ingredients in those other treatment packages, which IIRC is consistent with previous dismantling studies. Otherwise, shouldn't those more robust packages outperform standalone BA?
 
"Active control" covers the conditions in the included studies in which participants received a treatment, such as CBT, CT, or mindfulness; "inactive control" is waitlist or placebo.

That may be what they are claiming, but that doesn't seem to hold true when you look at some of the studies they included as having an "active control":

Delgadillo et al. (2017) focused on BA delivered by therapists vs. CBT self-help in a sample with depression comorbid with SUD.

Hopko et al. (2011) studied BA vs. problem-solving therapy in a breast cancer sample.

Jacobson et al. (1996) compared BA vs. what seems like CBT but without restructuring of schemas and core beliefs.

Jahoda et al. (2017) compared BA with a guided self-help treatment for adults with ID.

Kanter et al. (2015) compared BA with TAU in a Latino population.

McIndoo (2016) compared four-session mindfulness and BA treatments with a waitlist control. I'm not sure how this constitutes an active control.

McNamara & Horan (1986) compared CT vs. BA vs. combined CT & BA vs. a "high demand control," which seemed to be basically Rogerian therapy, in a sample of university counseling center patients.

Moradveisi et al. (2013) compared BA vs. antidepressant TAU.

Myhre et al. (2018) compared "standard care," which consisted of "mandatory components such as milieu therapy and regular sessions with a psychiatrist, doctor or psychologist. The frequency of the sessions was individualised, but typically occurred every other day," vs. SC plus BA in an inpatient sample.

Richards et al. (2016) compared CBT to BA in an adult sample. I'm not sure how this study was included in the meta when the comparison is CBT vs. BA.

Snarski et al. (2011) compared BA to TAU in a geriatric inpatient sample, all of whom had mild to moderate cognitive impairment.


I'm not saying that this is necessarily a bad meta, but there's so much heterogeneity in the samples and what constitutes "active control" that considerable hedging is necessary when interpreting the findings.
 
Agreed.
 
I am sure the common factors folks will love this.

That's because they are frequently terrible at science. Did a very quick scan of the table, and it included the following comparators: 4 cognitive therapy; 1 problem-solving therapy; 1 mindfulness; 1 CBT; 1 combined CT and BA; 1 medication; 2 CBT-based guided self-help; 2 TAU; and 1 brief psychodynamic therapy.

This tells us little about common factors and is mostly confirmation of the early dismantling studies. Yet it will be used to justify all manner of craziness that is not anywhere on the list of comparators.
 
Ugh, the common factors stuff has so many methodological limitations. Who'd have thought that when you collapse a lot of only slightly overlapping things together, it would all even out? Huh, imagine that?
 
As someone who's done a lot of systematic reviews and metas (just had one accepted today, in fact), I question how this got published. The study selection and interpretation are a mess. Also, this reminds me of a lot of issues with acceptance and commitment therapy studies that adherents actively overlook, because they are pretty much one step away from building temples to Steven Hayes (and I really like ACT, but the widespread doctrine that Hayes is right about everything and that ACT is infallible and the only treatment that works is troubling).
 
That's because they are frequently terrible at science. Did a very quick scan of the table, and it included the following comparators: 4 cognitive therapy; 1 problem-solving therapy; 1 mindfulness; 1 CBT; 1 combined CT and BA; 1 medication; 2 CBT-based guided self-help; 2 TAU; and 1 brief psychodynamic therapy.

This tells us little about common factors and is mostly confirmation of the early dismantling studies. Yet it will be used to justify all manner of craziness that is not anywhere on the list of comparators.
Yes, it would be better to compare BA to non-CB treatments.
 
As someone who's done a lot of systematic reviews and metas (just had one accepted today, in fact), I question how this got published. The study selection and interpretation are a mess.
This is a pre-print and I am not sure if it has been accepted anywhere. But Cuijpers is on the paper, and my Google Scholar alert always goes off for his name.
 
Ugh, the common factors stuff has so many methodological limitations. Who'd have thought that when you collapse a lot of only slightly overlapping things together, it would all even out? Huh, imagine that?
And this is why the common factors stuff can be so obtuse. Obviously, certain basic concepts like empathy are likely going to be helpful across groups, problems, and contexts, but it's so reductionist to act like these are the only or primary factors, or that there is no substantial variance in these domains. So much research across disciplines is moving towards matching patients to treatments instead of one-size-fits-all approaches.

As someone who's done a lot of systematic reviews and metas (just had one accepted today, in fact), I question how this got published. The study selection and interpretation are a mess.

Even a cursory glance pokes many holes in this meta and the discussion doesn't really do a great job of exploring or even acknowledging them. I'm not necessarily one for demolishing your own study in the discussion, but when the flaws are so glaring, you need more substance than what they have offered.
 
And this is why the common factors stuff can be so obtuse. Obviously, certain basic concepts like empathy are likely going to be helpful across groups, problems, and contexts,
To be fair, we have never even looked, in a controlled and systematic manner, at the effect of the common/non-specific factors.

Aside from correlational evidence, we do not have any evidence with strong internal validity to indicate that any common/non-specific factor leads to a therapeutic effect. Of course, it would be awkward to do an RCT where we control the level of validation or empathy. I bet if we did, certain interventions (e.g., BA) would have a therapeutic effect independent of the common factor.
 
I'm no meta-analysis expert, but I was thinking that this one seems pretty poor quality. Glad that people with more knowledge are voicing that same opinion.
 
This is a pre-print and I am not sure if it has been accepted anywhere. But Cuijpers is on the paper, and my Google Scholar alert always goes off for his name.
Tbh, this is why I’m wary of pre-prints. Seriously methodologically flawed studies (like that rat cell phone study) get out there and into the public consciousness despite huge flaws that drastically limit their accuracy. Peer review is far from perfect, but at least it catches some of this.
 
Tbh, this is why I’m wary of pre-prints. Seriously methodologically flawed studies (like that rat cell phone study) get out there and into the public consciousness despite huge flaws that drastically limit their accuracy. Peer review is far from perfect, but at least it catches some of this.
My lab collaboratively goes over the manuscripts our mentor has been asked to review for journals. I can't imagine this one getting published, at least not the way it's currently written.
 
Sadly, I am 100% confident this will still be published, if it wasn't already. It's substantive enough in terms of the number of articles, and it's very easy to shop things around until you get friendly reviewers. JAMA Psychiatry? Probably not. Hopefully not. Some IF=2 journal? Sure. Might take a few tries, but sure. I was looking at that in between patients and admittedly didn't look at their discussion section; if properly framed, I wouldn't necessarily take issue with it. Framed as a non-inferiority analysis there could be some merit, though I haven't the foggiest idea how to power that in a meta. Were I reviewing, I would still ask that they tighten the exclusion criteria (i.e., maybe only include the 6 CT/CBT comparisons or at least test these separately).
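
For what it's worth, the non-inferiority decision rule itself is simple, even if powering it in a meta isn't. A minimal sketch with entirely made-up numbers (the margin, pooled difference, and standard error below are hypothetical, not from the paper):

```python
# Non-inferiority logic, sketched with hypothetical numbers: BA is declared
# non-inferior if the 95% CI for the pooled BA-minus-comparator effect
# stays above a pre-specified margin favoring the comparator.
margin = -0.24        # hypothetical non-inferiority margin (SMD units)
pooled_diff = -0.05   # hypothetical pooled difference in effect size
se = 0.08             # hypothetical standard error of that difference

lower_bound = pooled_diff - 1.96 * se   # lower end of the 95% CI: about -0.21
print("non-inferior:", lower_bound > margin)  # True: CI excludes the margin
```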

By and large I think science has moved too far into the "dump everything into the model and let <amorphous statistical magic we don't understand> tell us the answer" direction. I am certain this will grow worse with the current ML push, and I expect massive problems in a number of fields to start emerging. I think a major piece missing from traditional statistics education is helping people better understand what stats can and cannot do and when it isn't appropriate to use them. Give me any data set and I can always make you p values. Most of the time I can even make you statistically valid ones. Doesn't mean it's a good idea.
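
To make that last point concrete, a toy simulation (nothing to do with this meta): run enough comparisons on pure noise and "statistically valid" p values under .05 fall out at roughly the nominal rate.

```python
# Toy demo: repeated t tests on pure noise still yield p < .05 about 5% of
# the time. Every p value is "statistically valid"; none reflects a real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_per_group = 100, 30

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)  # both groups drawn from the same distribution
    b = rng.normal(size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05

print(f"{false_positives} of {n_tests} null comparisons reached p < .05")  # expect ~5
```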
 
Cuijpers already did a meta on BA showing that it is effective and not different from other cognitive interventions.

This adds anxiety into the fold (not a situation where I use BA) and compares BA to other active controls. So, my interest is in how it compares to those active controls, not in a repeat of the Cuijpers meta.

Hopefully, the final version will be a little different.
 
By and large I think science has moved too far into the "dump everything into the model and let <amorphous statistical magic we don't understand> tell us the answer" direction.

Yeah, I am beginning to despair of getting the point across to colleagues (at a pretty fancy academic place with a millionty NIMH research dollars) that statistical analysis can be very useful but does not actually do your reasoning for you. They will nod gravely and say yes, of course, but this condition achieved significance and that one did not, so clearly one effect is "real" and the other is just an illusion.

Total divorce between rhetoric and how people actually seem to draw conclusions.

Also, I kind of want to scream every time I hear someone say that a 95% confidence interval means there is a 95% chance the interval contains the mean, or dismiss an effect because "the intervals overlap so it's not real."
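
On the overlap fallacy specifically, a toy example with made-up means and standard errors: two 95% CIs can overlap while the difference between the means is still significant, because the test uses the standard error of the difference, not the individual interval half-widths.

```python
# Toy numbers showing that overlapping 95% CIs do not imply a
# non-significant difference between two means.
import numpy as np
from scipy import stats

m1, m2 = 10.0, 11.3     # hypothetical group means
se1, se2 = 0.40, 0.40   # hypothetical standard errors

z = 1.96
ci1 = (m1 - z * se1, m1 + z * se1)      # (9.22, 10.78)
ci2 = (m2 - z * se2, m2 + z * se2)      # (10.52, 12.08)
print("CIs overlap:", ci1[1] > ci2[0])  # True

# The proper test uses the SE of the *difference*, which is smaller than
# the sum of the two half-widths:
se_diff = np.sqrt(se1**2 + se2**2)      # ~0.57
z_stat = (m2 - m1) / se_diff            # ~2.30
p = 2 * stats.norm.sf(abs(z_stat))      # ~0.02
print(f"z = {z_stat:.2f}, p = {p:.3f}") # significant despite the overlap
```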
 
Also, I kind of want to scream every time I hear someone say that a 95% confidence interval means there is a 95% chance the interval contains the mean, or dismiss an effect because "the intervals overlap so it's not real."

I am a broken record at Q&As for research presentations: "The p values are meaningless; what were the effect sizes?" Or the "this result is trending towards significance" line. Well, OK, but by that same logic, everything close to and just under .05 is also "trending away from significance." Unfortunately, competence in research methodology and statistical literacy is already shameful in the advanced practice provider community, and the educational trend appears to be toward even fewer opportunities to gain experience in these areas.
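
And on effect sizes vs. p values, a toy calculation (numbers made up): hold the effect size fixed and the p value is driven entirely by sample size.

```python
# Same standardized effect, different sample sizes: only the p value changes.
import numpy as np
from scipy import stats

d = 0.30  # a fixed, modest Cohen's d (made up for illustration)
for n in (20, 100, 400):                # per-group sample sizes
    t = d * np.sqrt(n / 2)              # two-sample t statistic, equal groups
    p = 2 * stats.t.sf(t, df=2 * n - 2) # two-tailed p value
    print(f"n per group = {n:4d}: t = {t:.2f}, p = {p:.4f}")
# n =  20: p ~ .35; n = 100: p ~ .04; n = 400: p < .0001 -- identical effect in every row.
```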
 