I'm going to guess you know what the words "guide" and "clinical practice" mean, and that you're familiar with how verbs modify nouns. But for the record, "guide" means inform clinical practice. You don't reinvent the wheel with every patient. It turns out we have studies where approximately the same set of interventions works for a large group of people, but individual differences necessitate that clinicians remain in control of what will work best for their patients. I actually don't think we disagree on this point. By no means have I claimed that CBT works for all patients for all time. But I would claim that, on average, CBT will work better than dolphin therapy (mainly due to the access issue, 🙂).
Being better than dolphin therapy makes for a poor comparison. To be picked systematically, CBT must be better than other active psychological therapies. Otherwise, why would I pick it over the alternatives? I don't see a rationale to pick CBT because it's better than getting stung by bees, for instance. Nor a reason that CBT should be picked over not going to therapy at all, or doing therapy without any active components. That's a good proof of concept, but it doesn't shore up why CBT should be relied on, or answer the questions of 'what is it' and 'when is it enough'. Thus, the wording I said I was unclear about remains unclear.
While the good humor sounds logical and clear, it is not. The issue is one of replicability and clinical efficacy, and it becomes a question not just of EST vs. EBP but also of efficacy versus effectiveness. When you tell me to use CBT to "guide clinical practice", I do not know what that means. Do you mean (1) the inclusion of homework as a basis for exposure-based learning; (2) a particular and correct balance of the cognitive or behavioral components, which operate on different supposed pathways; or (3) specific content I must include (e.g., could it be exposure discussion, exposure hierarchies, or exposure principles, and are all three 'equally CBT')? How much does it matter if I choose CBT version 1 or CBT version 2, if both largely reduce symptoms that are nomologically related even if they aren't the same (e.g., the patterns of anxiety/depression/PTSD symptom decline in treatment for any of the three)? It sounds good, but it's not clear because it's not specific. The reason it's not specific is that CBT means so many different things, and all of them are correct.

In using phrases like 'the clinician stays in control' you are implying a more EBP approach, but one which relies more widely on adoption. In turn, such EBP approaches focus on effectiveness research, which assumes that there is variability and that manuals won't be followed (versus efficacy research, which is conducted under lab-based ideal conditions, like an RCT). The balance and nature of worksheets, the specific phrases (e.g., which distortion lists are used), etc., don't seem to matter as much — and that's a good thing, since clinicians follow no set order in selecting them, based on the folks I know. The worksheets were given to them by a local, regional, or specialty-based treatment source (e.g., a division or organization), they made them themselves, they found them online for free, a friend made them, or they work in a setting where they are handed a list (VA, DoD, etc.).
All of these factors make it unclear to me what you mean, in concrete and practical terms, when you say people should use CBT to guide their practice.
Also, I agree that a lack of access to dolphins is likely the largest issue at play in this thread.
Disagree here. First, not all CBT interventions are created equal. Second, dismantling studies have shown that some work better than others.
Here is one dismantling-study example for CBT for panic disorder, which supports that interoceptive exposure and face-to-face settings are associated with the largest treatment gains. You can critique the study designs all you want, but the effect sizes are still quite large. Larger than supportive therapy. Larger than cognitive restructuring. Larger than PMR. This also matches my clinical experience treating panic disorder with CBT (something I regularly did in primary care). Does this mean these interventions will succeed with absolute certainty? Again, no. Any good psychologist knows that statistics are probabilistic and do not necessarily reflect reality for everyone. That's as true for common-factors meta-analyses as it is for RCTs of CBT protocols, as Cuijpers has pointed out.
Versions of treatments for the same condition (e.g., WET, CPT, and PE for PTSD, etc.) are generally equivalent in their outcomes and "non-inferior" to one another, with specific ingredients really only mattering with respect to avoidance/exposure-based interventions, as described in a variety of meta-analyses (Bruce Wampold's work, Pim's work, etc.). If you pick a random treatment for a random psychological disorder in a random population, the treatment will probably show an effect size of about .68 to 1.3, with differences reflecting methods/measures/sample sizes/etc. more than anything else. This finding, and its unwavering stability, is as old as meta-analysis itself — almost 50 years. We can agree that other general effects are also common; for instance, in vivo exposure is better than imaginal. But the same holds under other theories.

The point I made about my defense mechanism of intellectualization stands. As a dynamic therapist, I will absolutely tell my clients to directly confront and talk through their feelings with someone. We will do role plays. We will talk about how to express our needs, drives, and fears, and how to remain in control of our egos while facing the cultural father figure we see in the role of our employer... Is that exposure? Is it in vivo? Is this CBT...? Why not? Is it because I said dynamic? Or drive? Or is what I am actually doing (encouraging engagement/exposure via in vivo and imaginal activities, likely with psychoeducation about emotional states and identification of distorted thinking [cognitive fragmentation, for instance]) going to work anyway? What if I told you I'm not dynamic but actually CBT, and I just like those words and terms because they resonate with clients? What if a dynamic therapist reads what I wrote and says that's how they do it? Is it still CBT? Is ACT CBT, given that it assumes different things about symptom control? If all of that is CBT, what is not?
As to the phobia-specific point, some portions of explained variance change by treatment, condition, and setting; that's not surprising, since effect sizes vary by setting, population, etc. as well. That said, the relative portions remain heavily weighted toward shared method variance across conditions. I haven't read the phobia-specific work you're citing, but to my eye it's the same reason that behavioral interventions for anxiety/phobia were those most heavily weighted toward larger effects in older meta-analyses (e.g., Smith & Glass): behavioral interventions (exposure) were those most likely to show relative incremental gains. That said, a majority of the literature does not support this case. It also means that it's hard to define what CBT is (to my point above). Is it CBT if I only do BT? What about only CT? What about ACT? What, again, about my intellectualization example? Or my boss issue?
Marv has a ton of great work. He was kind enough to speak to the students in my program a few years ago about change processes and his career's work. I can't say enough kind things about him. His 2019 paper is probably my favorite, but he's been doing this work for decades. He's a big part of what made SUNY such a powerhouse for treatment-outcome research for so many years.
Goldfried, M. R. (2019). Obtaining consensus in psychotherapy: What holds us back? American Psychologist, 74(4), 484.
I'm busy at the moment, so I'll finish the last point later.
I will add, since it is relevant to this thread, that I expect all first-year counseling psychology doctoral students I train to be able to have this exact debate, at this same level, as they learn about theories. It's a heavy reading load, and it helps form an extremely strong understanding of theory.