Future of clinical / counselling psychologists


Yiannis021174

Dear all,

I have been reading quite a lot about the latest trends towards evidence-based practice in psychotherapy and the implications for the professions of clinical and counselling psychologists.

The trends seem to suggest:
i) Only clearly and empirically evidenced psychotherapy practice will be covered by insurers in the US
ii) Manualised treatments seem (at present) more 'compatible' with randomized controlled trials, which are the 'favourite' methodology.
iii) Increasing future competition, stemming from a preference for manualised treatments delivered by MSWs / LCWs, etc.
iv) Nicholas Cummings has even suggested that clinical psychologists might stop practicing psychotherapy.

The following article from the American Journal of Psychotherapy is very pertinent and worth reading: Thomason, Timothy C. (2010), 'Trend toward evidence-based practice and the future of psychotherapy', American Journal of Psychotherapy, January issue.

Link for article:

http://findarticles.com/p/articles/mi_7450/is_201001/ai_n55070812/?tag=content;col1


What do you think?

 
I think this is the way it needs to be. We're all ethically obligated to deliver treatments that work. This means that clients should be given the first-line treatment for their set of problems before trying treatments with less or no research evidence.

I wish there were a board, similar to the FDA, that approved the treatments people can offer. There are all sorts of fringe attachment therapies, tapping nonsense, and other approaches that either are not effective or, in some cases, cause harm. We're one of the only medical professions where providers can justify a treatment approach because they "feel" it works rather than because they actually have data that it works.
 
The author, Timothy C. Thomason, Ed.D., brings up an important topic that has affected, and will likely continue to affect, clinical practice: namely, EBT and EST in the managed care context. I take issue with several important conclusions of the article.

1) The author misrepresents the relationship between EBP and EST. He posits that these are separate practices. I argue that his logic is incorrect, for the following reason: one can in no way practice an EST without simultaneously and necessarily practicing EBP. Thus, the two cannot logically be discrete. The author himself uses the terms "empirically supported treatment" and "evidence-based treatment" interchangeably.

2) The author reports that supporting the development of ESTs is tantamount to "medicalizing" psychotherapy and thus making it obsolete (which eerily suggests that true psychotherapy is necessarily devoid of science). He states, "Some feel that the emphasis on using ESTs is misguided because it moves psychotherapy further into the medical mode." He contends that "others" suggest psychotherapy of any sort is helpful "most of the time." One cannot purport to be objective while hiding behind indefinite pronouns. In fact, he does not provide any argument for ESTs. In writing the article, the author seems to assert that psychologists have "their" own treatments and are attempting to medicalize the field, while counselors, social workers, and others are simply trying to find creative new ways to help suffering people, even if those methods aren't supported by empirical evidence of their effectiveness or safety.

3) These assertions omit two critically important aspects of clinical/counseling psychology and psychotherapy. The first is dependence on scientific rigor, which is the only way to avoid the pitfalls of "common sense" when engaged in the process of therapy where another's well-being is at stake (think of an individual suffering from Borderline Personality Disorder or Schizoaffective Disorder). The second omission is the importance of the therapeutic relationship. A solid and mutually valued therapeutic relationship accounts for a majority of the success in psychotherapy (according to the scientific literature on psychotherapy outcome). Thus, "psychotherapy" is not, in fact, "helpful most of the time."

As a final comment, I found this to be interesting: "Similarly, the Code of Ethics of the American Counseling Association (2005) mandates that 'Counselors use techniques/procedures/modalities that are grounded in theory and/or have an empirical or scientific foundation.'" What exactly is a treatment that has an empirical or scientific foundation but is not grounded in theory?

.02
 
As a final comment, I found this to be interesting: "Similarly, the Code of Ethics of the American Counseling Association (2005) mandates that 'Counselors use techniques/procedures/modalities that are grounded in theory and/or have an empirical or scientific foundation.'" What exactly is a treatment that has an empirical or scientific foundation but is not grounded in theory?

.02


There are plenty of studies showing that CBT, for instance, works. However, we do not know how it works. Exposure for anxiety disorders works, but the exact mechanisms by which it works are not fully researched. One may create a treatment that is based on theory -- for instance, training parents to habituate to anxiety-provoking stimuli and stay in the situation until anxiety lessens. The actual mechanisms remain under-researched.
 
As a final comment, I found this to be interesting: "Similarly, the Code of Ethics of the American Counseling Association (2005) mandates that 'Counselors use techniques/procedures/modalities that are grounded in theory and/or have an empirical or scientific foundation.'" What exactly is a treatment that has an empirical or scientific foundation but is not grounded in theory?

.02

This is how I understand it.... There are theories, and then there are interventions/treatments. Ideally, both should be evidence-based. Theories include CBT, psychoanalytic, humanistic, etc., and interventions/treatments include behavior modification, desensitization for phobias, homework assignments, etc. But they don't need to match. For instance, you can use a humanistic theory with a CBT-style treatment or interventions. The point is, both should hopefully be evidence-based and proven to work and help.

I've heard rumors that APA is coming out with an evidence-based treatment manual. That'd be nice. :idea:
 
While there is no rule saying they need to match, I think someone would be hard-pressed to deliver good and effective CBT without conceptualizing and working from a cognitive-behavioral theory standpoint. It's not as simple as following a treatment manual. One should have an in-depth understanding of the theory behind evidence-based treatments in order to manage the treatment effectively and conceptualize the case correctly. And in any case, why conceptualize from a theory that has no research support? For example, psychodynamic therapy has been shown to be contraindicated for OCD - so why attempt exposure and response prevention (the front-line treatment) while conceptualizing from a psychodynamic perspective? That doesn't make sense.

I think this kind of loose approach adds to the phenomenon I often see with people out in practice who claim they deliver evidence-based treatments, except that they were never actually trained to do so - they merely conceptualize from whatever theory they were trained in and then try to sprinkle in "techniques" they may have picked up here and there from CBT. This is not doing real CBT.
 
Dear all,

I have been reading quite a lot about the latest trends towards evidence-based practice in psychotherapy and the implications for the professions of clinical and counselling psychologists.

The trends seem to suggest:
i) Only clearly and empirically evidenced psychotherapy practice will be covered by insurers in the US
ii) Manualised treatments seem (at present) more 'compatible' with randomized controlled trials, which are the 'favourite' methodology.
iii) Increasing future competition, stemming from a preference for manualised treatments delivered by MSWs / LCWs, etc.
iv) Nicholas Cummings has even suggested that clinical psychologists might stop practicing psychotherapy.

The following article from the American Journal of Psychotherapy is very pertinent and worth reading: Thomason, Timothy C. (2010), 'Trend toward evidence-based practice and the future of psychotherapy', American Journal of Psychotherapy, January issue.

Link for article:

http://findarticles.com/p/articles/mi_7450/is_201001/ai_n55070812/?tag=content;col1


What do you think?

I didn't read the article, but from what you wrote it seems to imply that empirical support for a therapist's activities will continue to be presumed on the basis of the efficacy-study model (e.g., double-blind, placebo-controlled, categorical diagnosis).

This seems to be a massive misconceptualization of what psychotherapists do (e.g., that we treat diseases and our treatments are like pills). We don't treat diseases, we treat people, and our therapies are not pills. Treatment manuals don't work, and traditional efficacy studies are not the proper way of quantifying how psychotherapy works. It would be nice if the insurance companies could understand this.

It's kind of our fault anyway - we've given up our practice to the medical model, to our own detriment.
 
Theory starved of science is merely attitude. Psychologists are in a unique position to understand the incredible potential for harmful bias in attitude formation (from both the client and clinician perspective). To base interventions on attitude is unethical for a scientist-practitioner.

Theory cannot truly exist without science. Thus, practicing interventions based on theory OR with a scientific foundation, as suggested by the American Counseling Association, is absurd. Furthermore, it is similarly absurd to conceive of offering therapeutic intervention without some basis in theory specifying how a particular intervention may affect a certain outcome (a hypothesis). To be sure, some forms of therapy are more "fun" than others.

Consider: Larry is passionate about art. Larry enjoys drawing, painting, and sculpting. Larry is a psychotherapist. Larry channels his passion for art into his therapy work with clients. He finds that many clients enjoy and respond well to artistic expression, and are able to process emotions more honestly and freely through this medium. Larry has heard from many clients that his therapy has changed their life.

Answer: Is Larry acting ethically in using this therapy with his clients? What information would you want to have to answer this question?

.02
 
Theory starved of science is merely attitude. Psychologists are in a unique position to understand the incredible potential for harmful bias in attitude formation (from both the client and clinician perspective). To base interventions on attitude is unethical for a scientist-practitioner.

Theory cannot truly exist without science. Thus, practicing interventions based on theory OR with a scientific foundation, as suggested by the American Counseling Association, is absurd. Furthermore, it is similarly absurd to conceive of offering therapeutic intervention without some basis in theory specifying how a particular intervention may affect a certain outcome (a hypothesis). To be sure, some forms of therapy are more "fun" than others.

Consider: Larry is passionate about art. Larry enjoys drawing, painting, and sculpting. Larry is a psychotherapist. Larry channels his passion for art into his therapy work with clients. He finds that many clients enjoy and respond well to artistic expression, and are able to process emotions more honestly and freely through this medium. Larry has heard from many clients that his therapy has changed their life.

Answer: Is Larry acting ethically in using this therapy with his clients? What information would you want to have to answer this question?

.02

I'm confused by what you are saying when you say, "Thus, practicing interventions based on theory OR with a scientific foundation, as suggested by the American Counseling Association, is absurd." Could you elaborate on what you mean here? And as for Larry, I would love more than anecdotal evidence...
 
What exactly is a treatment that has an empirical or scientific foundation but is not grounded in theory?

Arguably, Systematic Treatment Selection (which is not really any one treatment) is very much scientifically and empirically grounded, and happens to be pretty much atheoretical.... although you *could* call it "trans-theoretical." http://www.larrybeutler.com/systematic-treatment-selection/

I most certainly prefer this to the endless, interminable obsession with categorical DSM diagnoses and traditional efficacy studies that our field has been mired in.
 
DrGero, I think you make a great point. Science, including our current method of "validating" treatments, is not the Gospel, to say the very least.

I also really like that you pointed out that, in the end, these are real people who are experiencing very real distress/dysfunction. This whole discussion is moot if we forget that.

I would disagree, however, with two points. First, I believe one would be hard pressed to defend the statement, "Treatment manuals don't work." Consider DBT for Borderline Personality Disorder, which has been shown to be the most effective treatment to date for this devastating disorder. You may very well know what it is like to engage a client with BPD, and having the guidance and support of a treatment team following a protocol can literally save the client's life (and certainly won't do any harm to the therapist, either). Second, while we are treating people, not "diseases," these people are, in fact, suffering. They want, and expect, that we possess some knowledge and expertise that will help them when they feel helpless. So, in the end, we are treating a problem in the unique context of a client's personhood. Nor do I agree with the insurance-company conception that treatment can be boiled down to something akin to a software program: sit the client in front of the therapy computer and hit "Enter."

P.S. I think what STS is suggesting is a more or less software-driven approximation of what a seasoned scientist-practitioner would do in the initial stages of therapy. I like the concept and specification that factors are analyzed and utilized based on empirical findings. I find it ironic, perhaps unfortunate, that what the software spits out has been termed "Prescriptive Therapy."

DrHoops, in the post that you quoted, I was arguing that the "or" suggests one could choose between a scientific foundation and theory, but I fail to see how one can call an approach theory-based if it is not based on the scientific method, which is what organizes theories in the first place.
 
I would disagree, however, with two points. First, I believe one would be hard pressed to defend the statement, "Treatment manuals don't work." Consider DBT for Borderline Personality Disorder, which has been shown to be the most effective treatment to date for this devastating disorder. You may very well know what it is like to engage a client with BPD, and having the guidance and support of a treatment team following a protocol can literally save the client's life (and certainly won't do any harm to the therapist, either).

I'm certainly aware that there are a huge number of studies which (typically) show nonspecific effectiveness of x, y and z manualized treatments. But I don't think manuals are how therapists actually work in the real world, and moreover, there is actually some indication that treatment manuals may 'harm' therapists - at least in terms of encouraging antitherapeutic attitudes in their users (Beutler & Martin, 2000). Not a big deal, really, since I'm pretty confident most therapists don't use traditional treatment manuals in the way they're intended to be used.

Second, while we are treating people, not "diseases," these people are, in fact, suffering. They want, and expect, that we possess some knowledge and expertise that will help them when they feel helpless. So, in the end, we are treating a problem in the unique context of a client's personhood.

Well, *we* are ("we" as in thoughtful therapists), but traditional psychotherapy efficacy studies are what the insurance companies and the EBP movement slavishly follow, and those studies typically look at how supposedly homogeneous groups of patients respond to supposedly unitary treatment approaches, usually in the form of a manualized treatment.
 
A couple of seeming misconceptions here (some of these may be driven by the original article, which I think wasn't written very well):

1) If you read through the literature, EBP and ESTs are no longer treated as synonymous. EBP is generally seen as the broader term. While engaging in EBP one will generally be using ESTs, but the opposite is not true...use of an EST does not necessarily mean one is engaging in EBP. EBP entails a great deal more, including use of reliable outcome measures and continued assessment of therapeutic progress in as objective a manner as is realistic, collaborative approaches to treatment, and openness regarding a client's options (something that I think is VERY rare in typical practice). EBP entails familiarity with and review of the current literature...as opposed to blindly implementing an EST because "it is how I was trained." It involves selecting the most appropriate treatment based on the available evidence, and it can actually entail use of non-ESTs in circumstances where alternative options are not available.

2) My take on the ACA statement is that it is too liberal. We are arguing about whether scientific study necessitates a theoretical base...I doubt many would disagree. However, the statement as written also leaves open the possibility of a theory-based therapy with absolutely no scientific evidence behind it. Most of the quacks have concocted some theoretical rationale for what they do, though the absurdity of the theory varies widely. As written, as long as you have a theory as to why moon beams and lasers improve depression, you are in the clear.

3) I'm somewhere in the middle on the manual issue. On the one hand, I believe failure to make use of scientific evidence when it is available constitutes malpractice, and I would be all too happy to see many of these practitioners lose a license that, in my opinion, they do not deserve. On the other hand, I agree that the way manuals are written is rather dysfunctional, and that approach to studying treatments leaves much to be desired. I think the focus needs to shift from "Session 1: do this. Session 2: do this." to a more flexible manual. The increased focus on processes and mechanisms rather than pure outcomes should aid greatly in this, though it is unfortunately still in its infancy. Efficacy studies (i.e., RCTs) are important to do, and they tell us something. The problem is when we "stop there." Hence the recent push for "effectiveness" studies, which examine the effects of treatments under less-controlled conditions: therapists have a wider range of competence and are not as tightly supervised, a wider population of clients is seen under much looser criteria for entrance into the trial, etc. In short...striving for increased external validity as well. Perhaps surprisingly to some, many of the "EBP crowd" are the ones pushing for this right now (though it's unfortunate it took so long to happen). We're seeing increased collaboration, CBPR used in treatment development and implementation, etc. We're on the cusp of a pretty significant change to how therapy is studied...and it is largely driven by the EBP crowd that some here seem to dislike.
 
DrGero, I see your point. It sounds like the efficacy versus effectiveness conundrum. I suppose it would be difficult to counter the argument, following strict scientific principles, that in order to achieve the reported outcomes in those RCT's, one would have to be a therapist with the training of the RCT therapist, apply treatment x precisely as administered in the RCT, and then "administer" the treatment to a client from the RCT population (to ultimately achieve a .5 effect size lol).

Good arguments.

Well put, Ollie. And I appreciate that elucidation of EBP/EST.

.02
 
DrGero, I see your point. It sounds like the efficacy versus effectiveness conundrum. I suppose it would be difficult to counter the argument, following strict scientific principles, that in order to achieve the reported outcomes in those RCT's, one would have to be a therapist with the training of the RCT therapist, apply treatment x precisely as administered in the RCT, and then "administer" the treatment to a client from the RCT population (to ultimately achieve a .5 effect size lol).

Good arguments.

Well put, Ollie. And I appreciate that elucidation of EBP/EST.

.02


I'm wondering how feasible it is, and how much we should expect, that an EBT validated through an efficacy trial will show the same results in the real world, given that conditions in the clinical research lab and in the real world are quite different. Comorbidities appear to be higher in the real world, so applying treatment X, which was designed specifically for problem Y, without taking those comorbidities into account may pose a problem.

On the other hand, it is too early to demonstrate the above, as effectiveness trials are still relatively few. Also, it seems that treatment as usual works as well as CBT in effectiveness trials conducted in CMHCs -- though I'm only familiar with some of that literature, and for specific conditions.
 
I really appreciate the great responses and discussion. While I agree that the cited article is perhaps not of the best quality in terms of how its arguments are reasoned, it does raise very important issues and questions pertaining to clinical / counselling psychology as a profession / vocation.

I personally agree that psychotherapists help/treat people, not diseases, and that this makes a big difference in the way that 'helping' is approached. I remember reading in 'The Gift of Therapy' by Irvin D. Yalom that "... psychotherapy consists of a gradual unfolding process wherein the therapist attempts to know the patient as fully as possible. A diagnosis limits vision; it diminishes ability to relate to the other as a person. Once we make a diagnosis, we tend to selectively inattend to aspects of the patient that do not fit into that particular diagnosis, and correspondingly overattend to subtle features that appear to confirm an initial diagnosis. What's more, a diagnosis may act as a self-fulfilling prophecy." (Ch. 2, p. 5)

Ok, I realise there is a difference between the topics of 'using diagnosis', 'using formulations', 'using manualized psychotherapy', 'seeking to empirically prove effectiveness', etc. There is, however, significant overlap, which becomes obvious if we ask ourselves the question:

how do we know that what psychotherapists do, does indeed help?

I would at this point like to clarify that I personally believe psychotherapy is helpful most of the time. I also agree with the position that psychotherapists 'treat' people, not diseases. However, I feel that a lot more than subjectivity is needed if we are to adequately answer the above question.

The difficulty is that subjecting psychotherapy to randomized controlled trials has not so far helped us to answer the question adequately.
A lot of the studies I have read seem to operationalize 'adherence to manualized therapy'. However, we know that psychotherapists do not necessarily stick to this in real life, which is a significant challenge to the validity of such studies. Should psychotherapy be more adherent to manualised treatments? Probably not, I agree, but my opinion is not really enough, and again I cannot see how the initial question can then be addressed.

Studies also show that the therapeutic relationship is a significant contributor to the success of psychotherapy. By the way, I would be very interested to know what percentage of variance it accounts for in the studies conducted (just in case anyone knows). Bottom line: if the therapeutic relationship is a significant antecedent to psychotherapy success, how is this 'ensured'?
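As a rough aside on how the quantities mentioned in this thread relate to one another (the ".5 effect size" joked about earlier and the "percentage of variance" asked about here): a standardized mean difference from a two-group trial can be converted into an approximate proportion of outcome variance explained. This is only the textbook rule-of-thumb conversion, assuming equal group sizes; the numbers below are illustrative, not figures from any particular psychotherapy outcome study.

$$r = \frac{d}{\sqrt{d^{2} + 4}}, \qquad \text{variance explained} \approx r^{2}$$

$$d = 0.5 \;\Rightarrow\; r = \frac{0.5}{\sqrt{0.25 + 4}} \approx 0.24, \qquad r^{2} \approx 0.06 \ \text{(about 6\% of the variance)}$$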

The harsh reality is that psychotherapy tends to become more and more integrated with managed care. This integration leads to a requirement by the managed care system for an answer (via empirical studies) to the question:

how do we know (prove) that what psychotherapists do, does indeed help?

No matter what I believe, managed care views therapy through a cost-benefit analysis. In other words, IF financial resources are to be allocated to psychotherapy insurance cover, there should be some level of guarantee that the outcome (success) will be forthcoming. At present, the favoured means of providing this 'guarantee' is through scientific empirical studies.
So ... back to square one, I guess.


Thank you again very much for the constructive comments. I really appreciate it.

Yiannis
 
I would just like to make one obvious point -- theories existed long before science did; hence they have a strong and grounded historical standing, both as a foundation for science and in their own right. Some like to call this "philosophy," and some of the world's oldest and best thinkers were mere "theorists" who, nonetheless, were quite good at making people think differently about their problems or the world around them with little or no data.

Many therapists practice far more strongly from an underlying theoretical conceptualization of their clients than a specific data-based or data-driven scientific approach.

While the pure manualized and "scientific" approach is most appealing to students who often have little experience to draw upon otherwise (I know it was appealing to me for that reason), I think one shouldn't underestimate the power of theory.

John
 
I would just like to make one obvious point -- theories existed long before science did; hence they have a strong and grounded historical standing, both as a foundation for science and in their own right. Some like to call this "philosophy," and some of the world's oldest and best thinkers were mere "theorists" who, nonetheless, were quite good at making people think differently about their problems or the world around them with little or no data.

Many therapists practice far more strongly from an underlying theoretical conceptualization of their clients than a specific data-based or data-driven scientific approach.

While the pure manualized and "scientific" approach is most appealing to students who often have little experience to draw upon otherwise (I know it was appealing to me for that reason), I think one shouldn't underestimate the power of theory.

John

I definitely agree with you that overarching theory and conceptualization skills are highly important. This is often what separates psychologists from masters level practitioners. I also believe they make us more flexible providers who are better able to treat clients as individuals.

However, I also think you are presenting a false dichotomy between "scientific" approaches and "theory." Yes, philosophers came up with many theories that they did not then test scientifically. However, psychology split off from philosophy and pursued a more empirical path. As such, a psychological theory that is not based in science shouldn't make any more sense than a biological theory that is not based on science.

I have a hard time thinking of any data-driven approaches to psychotherapy that have not either evolved into full theories or been incorporated into existing theories. Motivational interviewing begat a theory of change that closely mirrors humanistic theory; exposure and DBT are housed under behavioral theory. Where are these rogue techniques that do not fit within a theoretical framework?
 
I definitely agree with you that overarching theory and conceptualization skills are highly important. This is often what separates psychologists from masters level practitioners. I also believe they make us more flexible providers who are better able to treat clients as individuals.

However, I also think you are presenting a false dichotomy between "scientific" approaches and "theory." Yes, philosophers came up with many theories that they did not then test scientifically. However, psychology split off from philosophy and pursued a more empirical path. As such, a psychological theory that is not based in science shouldn't make any more sense than a biological theory that is not based on science.

I have a hard time thinking of any data-driven approaches to psychotherapy that have not either evolved into full theories or been incorporated into existing theories. Motivational interviewing begat a theory of change that closely mirrors humanistic theory; exposure and DBT are housed under behavioral theory. Where are these rogue techniques that do not fit within a theoretical framework?

Although I wouldn't by any means use the term "rogue techniques" to describe it, I think Systematic Treatment Selection (a la Beutler & Clarkin, 2000, etc) certainly is unable to be fit into any one particular theoretical framework, if by "theoretical framework" we mean traditional "schools of thought" like CBT, humanistic, psychodynamic, etc. Perhaps not atheoretical, it's trans-theoretical and research in this area has certainly done a good job avoiding a lot of the problems caused by trying to fit psychotherapy research into the RCT, efficacy-study model popularized by drug company research and the FDA's approval guidelines.
 
Although I wouldn't by any means use the term "rogue techniques" to describe it, I think Systematic Treatment Selection (a la Beutler & Clarkin, 2000, etc) certainly is unable to be fit into any one particular theoretical framework, if by "theoretical framework" we mean traditional "schools of thought" like CBT, humanistic, psychodynamic, etc. Perhaps not atheoretical, it's trans-theoretical and research in this area has certainly done a good job avoiding a lot of the problems caused by trying to fit psychotherapy research into the RCT, efficacy-study model popularized by drug company research and the FDA's approval guidelines.

Perhaps you're right. I will have to admit that the first I ever heard of Systematic Treatment Selection was when you mentioned it above in this thread. That makes me think (hopefully not misguidedly so) that it is not in wide use even among the EST/EBP crowd. A theoretical framework is important, and sidestepping that would be a huge mistake for the field.
 
Perhaps you're right. I will have to admit that the first I ever heard of Systematic Treatment Selection was when you mentioned it above in this thread. That makes me think (hopefully not misguidedly so) that it is not in wide use even among the EST/EBP crowd. A theoretical framework is important, and sidestepping that would be a huge mistake for the field.

This actually raises an interesting question. Does the adoption of something like STS necessarily preclude a theoretical framework? I'm admittedly not as familiar with Beutler's work as I should be either, but I don't think it does. I see STS as working hand-in-hand with the case conceptualization process. It may require people to acquire a broader knowledge base rather than the usual rigid adherence to one theory that one "likes best" (often for no clear reason) and filtering everything through that lens. That is certainly ambitious (perhaps unrealistically so), but I don't know there is any reason it need be devoid of theory...I'd argue the opposite is true - that a honed understanding of how all the pieces fit together is intensely theoretical. I sort of see it as being a cleaner and micro-level integration of a differential diagnosis process, only on the treatment level. With a complex differential diagnosis (and yes, I know the irony of bringing the relatively arbitrary categorization from the DSM into the discussion)...you are developing theories about the etiology of a problem, and through continued testing, interviewing, etc. you use data and other information at hand to whittle down the possibilities...re-theorizing as you go, in an iterative process.

That said, I completely agree with the above assertion...that perhaps one of the most critical advances in mental health was the movement to look beyond esoteric discussion of what drives behavior to find out whether or not those theories actually held up. Some did. Some didn't. Some we can never know because they are virtually untestable. Don't get me wrong, I firmly believe theory has an important place in the field. However, why rely on logic (with all its well-known flaws) alone when there is other information one can make use of? Ideally, I think it should be a careful balance between coming up with theories of what the problem is and the best way to resolve it, appropriate techniques applied to "resolve" those problems, all coupled with careful and continuous measurement across a variety of domains to help detect changes, better delineate the problems, and determine the next treatment goal. We are likely decades away from such a system being plausible, as it requires a great deal more knowledge of mechanisms, a full-on paradigm shift in both research and practice, etc. but I think it is where the field will need to go if we are ever going to see improved success rates that occur in a time-frame that both clients and insurance providers can live with.
 
I read an article some time ago about the schism between academia and "the real world." I can't recall the author, but it highlighted how much less strictly a scientist-practitioner approach is followed in the clinical work of non-academics in full-time practice than in that of academics with part-time practices.

I recall, however, that many in the "real world" often cite lack of resources, time, reimbursement, pragmatics, etc. as reasons for drifting from their Boulder idealism of graduate school. I wonder how much of this posturing and illusory comparison to others in the "real world" serves to reduce cognitive dissonance?

Not that clinical psychologists could really be affected by something as elementary as cognitive dissonance.

.02
 
However, why rely on logic (with all its well-known flaws) alone when there is other information one can make use of? Ideally, I think it should be a careful balance between coming up with theories of what the problem is and the best way to resolve it, appropriate techniques applied to "resolve" those problems, all coupled with careful and continuous measurement across a variety of domains to help detect changes, better delineate the problems, and determine the next treatment goal. We are likely decades away from such a system being plausible, as it requires a great deal more knowledge of mechanisms, a full-on paradigm shift in both research and practice, etc. but I think it is where the field will need to go if we are ever going to see improved success rates that occur in a time-frame that both clients and insurance providers can live with.


I think this is a good point, Ollie123. It reminds me a bit of the debate over 'formulation' vs. diagnosis, the idea being that a working hypothesis is built at the beginning of therapy based on (but not solely on) diagnosis. This working hypothesis guides the therapeutic approach, but as new data, insight, etc. are gained through the process, the formulation is updated accordingly. It is inherently integrative and flexible, which promises a lot. However, I think this approach would sometimes take more time (not necessarily a bad thing). It is also by nature less structured / prescriptive and, as such, more difficult to research empirically. Managed care would probably not like that. I am not saying all we should do is find ways to satisfy managed care, but the difficulty of testing effectiveness and success empirically is a real challenge we need to address.
 
Yuck!

Empirically Supported Treatments, Evidence Based Practice, blah blah blah.

Makes me feel like we're manufacturing Chevrolets.

DSM diagnosis in, "functional" out.

First tighten the right screw, then this other piece of metal goes on, then tighten the left screw, etc.

There was a time I used to look at science as an ideal, as freedom from dogma, from bureaucracy, but not any longer.

But imagine getting sued because you don't practice exactly as a small group of people have decided is the only and the best way to practice. Yes, that anything else is unethical, is hurtful to the patient!

People's problems should not and cannot be reduced to a mode of therapy. What we define as a "psychological disorder" or whatever, as if to claim ownership of suffering, is much more complex and multidimensional. It has philosophical, historical, political, social and sociological, cultural, biological, humanistic, spiritual, and economic dimensions, in addition to psychological ones.

Common factors are big. The truth of the matter is that we know SO LITTLE about what works, and yet, because money-hungry managed care demands a human-rebuilding manual for insurance purposes, some have decided that not only do they know the best way to conduct therapy but also that any therapist who disagrees is being unethical?! Bull****!

Secular therapy is slowly replacing religious dogma as the solution to life's problems. How do we replace God? Well, there better be good science behind what we're offering people if we are to borrow the authority and status of science or medicine. Great science! Unquestionable convergence of research findings, that sort of thing. But that's not what we have, not even close. What is deemed effective is related to the particular therapy, sure, but it also has to do with who does the research, how often he conducts studies and gets published, finances, general trends like the recent interest in mindfulness, etc. The fact that common factors remain a serious alternative explanation for the effectiveness of many types of therapy is a serious threat to the legitimacy of the business.

What is unethical is not, in my opinion, a therapist who is humble and aware of the limitations of the science and art of therapy, one who dances to the rhythm emanating from the client, improvises on the spot, and lets her humanism, her compassion, and her willingness to help guide which form of therapy is most appropriate at the moment. This person would be up-to-date on the latest scientific findings on psychological theories and effective therapies, but would be given enough freedom to let the level of rapport, scientific and clinical knowledge, and her humanity guide the therapy. So first and foremost, it would be two people meeting: yes, a professional and a client, but more importantly, one human being genuinely trying to help another.

What is unethical then? Pretending there is convergence in science just because managed care wants us to. Confining therapists within illusory frames of what is deemed effective, and hurting therapists whose creativity, sensitivity, and ingenuity have been of great help to their patients, and finally, hurting clients--under the guise of protecting them--by enforcing dogmatic ways in which the therapists diagnose and treat them.

Let me use the example of a therapy that I actually have respect for, one that is effective:

Does DBT work for BPD? Yes! Is it the only form of therapy, one that must be delivered in a particular and rigid manner, that helps BPD patients or, more generally, those with emotion dysregulation? No! Have we done long-term studies on every potential method that could help those with BPD (whatever you think of the validity of BPD)? No! Should DBT monopolize psychotherapy for BPD, with any deviation deemed "unethical" and punishable by law?

End of Rant//

Available to run for political office or rant about heartless vegetable-hater vegetarians and hot girls who refuse to wear short skirts.
 
Yuck!

Empirically Supported Treatments, Evidence Based Practice, blah blah blah.

Makes me feel like we're manufacturing Chevrolets.

Ah, but talking about the art of therapy, just feeling your way through it, and dancing to the rhythm of the client makes me feel like I'm at one of those levitating yogi conventions ;).

Seriously, though, it is important to remember the limitations of science. It's true that we don't know very much about why treatments work and whom they will work with. However, let's not throw the ESTs out with the bathwater. If someone comes in with a specific phobia, willing to try any tx, are we not ethically obligated to lead them through exposures instead of dragging them through months or years of insight-oriented therapy? Of course, many things are not as clear as phobias. However, the only way they will become clear is by encouraging these research programs to develop more and more soundly supported treatments. Only then will we have the answers to your questions about the long-term effectiveness of DBT, and only then will other treatments for BPD emerge as solid alternatives.
 
Does DBT work for BPD? Yes! Is it the only form of therapy, one that must be delivered in a particular and rigid manner, that helps BPD patients or, more generally, those with emotion dysregulation? No! Have we done long-term studies on every potential method that could help those with BPD (whatever you think of the validity of BPD)? No! Should DBT monopolize psychotherapy for BPD, with any deviation deemed "unethical" and punishable by law?

Until other therapies demonstrate efficacy for BPD similar to or greater than DBT's (or for any other disorder, for that matter), DBT should be the front-line treatment and the one offered first, before trying therapies with less or no support. It's like an MD deciding to try ketamine for depression before trying approved medications that have more empirical support. It doesn't make sense to do so when the science does not back you.
 
What is unethical then? Pretending there is convergence in science just because managed care wants us to. Confining therapists within illusory frames of what is deemed effective, and hurting therapists whose creativity, sensitivity, and ingenuity have been of great help to their patients, and finally, hurting clients--under the guise of protecting them--by enforcing dogmatic ways in which the therapists diagnose and treat them.


Well spoken. Thank you.
 
I agree with Jon wholeheartedly, with the caveat that I actually think we all too often don't understand the mechanisms well enough to do this effectively...but I think we're moving in the right direction now (scientifically, at least), though it will certainly take a while to make up for lost time.

Any psychologist worth his salt should know what the evidence says. Yes, that evidence is not completely convergent, and science absolutely has its flaws too. If it were perfect, there wouldn't be a need for ongoing scientific debate, actual scientific training, or keeping up on the literature. We'd just learn all the "right" choices in grad school and never have to learn again. I do think it's important to recognize what the literature says and what it doesn't. However, it's also important not to throw the baby out with the bathwater and ignore the scientific evidence in favor of gut instinct. There are plenty of places where there does seem to be some convergence in the literature...or, frequently, no one has even TRIED to produce evidence for common treatment modalities. Again, take OCD, since it is one area where we do seem to have some convergence. Why would one not try ERP? What alternative evidence is that decision being based on? Is the client refusing it? That can happen, in which case something else should absolutely be considered. Has the client tried it repeatedly without success? Yup, definitely time to look into other options. Some flexibility is certainly in order. Sometimes (often) the supposedly "effective" treatments do not work, though the reason why is often unclear. Sometimes (almost always?) treatment manuals contain sections that are irrelevant to individual clients, or do not contain material that may be critical. Therapists need to be able to adapt. That includes the "art": being willing and able to step outside a manual, step outside the DSM diagnostic scheme, etc.

However, those decisions should be centered on the client. All too often those decisions are therapist-based, and clients are not even aware of the options and do not have a say in what approach is used. A good psychologist should use scientific data to back their decisions. These sorts of decisions are rarely black and white...that is the nature of the field. Adaptation should take place with a clear rationale, and should be based on evidence...both in the scientific literature and in data gathered from the individual client throughout therapy. Most of the time, though, it seems to come down to the therapist saying, "Well, I practice x, y, z modality because I think it's better for clients," with no further explanation and no actual support for why they feel that way. Well, why is it better for that particular client? Why would you not try DBT first in that case? If there is an argument beyond "I don't like it" or "I think my way is better, but have no data to back it up," then have at it. That is a competent clinician making an informed decision to improve patient care. I can't imagine that more than a tiny percentage of the situations where people are not being guided by the literature reflect that.
 
Very well said, Ollie. Very early in this thread, I posted a scenario hoping (but ultimately finding no takers) to illustrate the point you just made.
 