Why do psychologists reject science???

I'm not convinced all existing ones would be in the "have-nots" category (though many would, and in my opinion, rightly so), though I agree it's an issue with implementing any sort of system like this. That said, change needs to happen. We can call creating a two-tiered system elitist, but the reason for doing so (we need people with adequate research training who implement evidence-based practice) is legitimate. If that makes me elitist in the eyes of some, I can live with that.

As for fixing APA, I probably would not be a member if they didn't offer discount insurance I was required to have. Many of the APS folks tried to change it for years, and a large reason for the formation of APS, SSCP, and the Academy was that they were repeatedly shot down; APA has explicitly gone out of its way to silence that part of its membership. APA politics are very complicated since there ARE so many psychologists who don't seem to know or care about EBP, so oftentimes even those in leadership positions will face resistance from a large majority of members. Forming a new organization seems like a reasonable and possibly optimal outcome of that. Like I said, I'm not sure this is the best way for that to happen, but I think something like this needs to be done. Given that I think APA is completely backwards, I'm an APS member and plan on getting involved more heavily with them rather than APA.
 
Ollie,

I agree that change is more likely to come from other organizations, but APA is still seen as the "voice" of psychology....which at times is problematic, and at other times embarrassing.

I think having other organizations attempt to have a voice in things is a good first step. One of the biggest knocks against psychology is that we are passive and ineffective in enacting (political and legal) change, and if we wait for the APA to have a position, we'll never get anything accomplished. I had hopes the NAPPP would be that, but they have strayed even farther on certain issues.

As for the haves and have nots....I was just talking with one of my supervisors yesterday about the role of boarding (specifically ABPP), and we both agreed it was good for our field, and yet the vast majority of people don't do it. I like that there are gatekeepers, standards, and a legitimate process for people to gain an ABPP credential.

I'm hoping as I move forward in my career that more positions start to differentiate between boarded and non-boarded people, but we have a long way to go. Right now many job descriptions lump together LPC/MSW/EdD/Ph.D/Psy.D, which is yet another reminder of the marginalization that is happening in our field. We should be able to require that the other professions prove their competency, but instead we are left to defend our domain and prove that they are not competent (in certain areas).
 
MOD NOTE: Ivan, welcome to SDN. We do not allow people to post teaser articles and link over to a personal/business website. You can post your thoughts/article, but you cannot include your website link in the post, as it is viewed as advertising/spamming. -t4c
 
Since psychoanalysis and psychodynamic therapies are among the oldest (135 years or so), shouldn't there be support for these therapies beyond case studies (that is, if they really work)? If those therapies were effective, wouldn't randomized clinical trials have demonstrated so by now? I bet these studies have been done, but have had negative (and unpublished) results.

There are in fact some studies supporting psychodynamic therapy beyond case studies. Hit up PsycINFO.
 
There are also studies (sorry, I don't have the reference - it was a few years ago from a Personality Theory textbook) showing client preference for psychodynamic therapy. The one that stood out to me found that among therapists who entered therapy as clients, the majority chose psychodynamic. The factor of therapist-clients' own orientation was not shown to be significant in which therapy they chose for themselves. A significant percentage of therapist-clients practiced CBT and still preferred psychodynamic therapy for themselves.

Might client preference also be a factor?

The relative lack of supporting research behind psychodynamic treatments may be explained by the history of the orientation. The case study was the established method that early psychoanalysts used. Behavioral therapy and CBT grew out of a more empirical setting, and the research base came more naturally. It has taken more time for psychodynamic clinicians to buy into the need to conduct research to establish efficacy, since theirs was already the established and preferred method.

Honestly, when approaching it as a client, I find CBT to lack something. I want more understanding and validation of how I got to where I am. Marsha Linehan picked up on that too, and incorporated validation into CBT when she created DBT. That is why I practice DBT. To me, it is the best of all worlds. Or at least one of the best of what is currently available. The future will probably bring something better.
 
There are also studies (sorry, I don't have the reference - it was a few years ago from a Personality Theory textbook) showing client preference for psychodynamic therapy. The one that stood out to me found that among therapists who entered therapy as clients, the majority chose psychodynamic. The factor of therapist-clients' own orientation was not shown to be significant in which therapy they chose for themselves. A significant percentage of therapist-clients practiced CBT and still preferred psychodynamic therapy for themselves.

Interesting. Some anecdotal support for your citation 🙂 I'm primarily a CBT therapist, which usually fits best with working in health psych. But for my own individual therapy I see a psychodynamic therapist. I'm not doing full-blown psychoanalysis, but I specifically sought this modality out when choosing who I'd see.
 
Might client preference also be a factor?

Counseling Psych has been researching this for a long time. Check out Wampold's pubs and the papers that cite/are cited by him.
 
There are also studies (sorry, I don't have the reference - it was a few years ago from a Personality Theory textbook) showing client preference for psychodynamic therapy. The one that stood out to me found that among therapists who entered therapy as clients, the majority chose psychodynamic. The factor of therapist-clients' own orientation was not shown to be significant in which therapy they chose for themselves. A significant percentage of therapist-clients practiced CBT and still preferred psychodynamic therapy for themselves.

That's interesting.

Do you remember...
1) Approximately what years the studies were published?
2) Reasons given as to why they chose dynamic therapy?
3) Type of problems that brought them to therapy in the first place? Because something like family/relationship issues would lend themselves more to dynamic therapy than would an anxiety disorder.
 
3) Type of problems that brought them to therapy in the first place? Because something like family/relationship issues would lend themselves more to dynamic therapy than would an anxiety disorder.
While the most supported Txs for anxiety-related Dxs are not psychodynamically based, psychodynamic therapy can still be an effective Tx in some cases.
 
The relative lack of supporting research behind psychodynamic treatments may be explained by the history of the orientation.

Honestly, when approaching it as a client, I find CBT to lack something. I want more understanding and validation of how I got to where I am.

Shouldn't we follow the science and provide the treatment most likely to help our clients? Your preference doesn't really matter as much as what will actually work.

Until there is evidence that psychodynamic treatments work, why are these treatments being taught to clinical psych students??? Why not teach phrenology? Patients like it, it is fun and entertaining. Oh yeah, phrenology has been repeatedly debunked - as has psychodynamic theory.
 
I think that's because patient preference often dictates what they will comply with during treatment. When I continuously hit a wall with my first method (CBT), I often find myself exploring what this resistance is about and where it comes from. This inevitably leads to more exploratory, interpersonal-type sessions that focus on past issues and existential and philosophical questions/issues. Viktor Frankl's Man's Search for Meaning is always high on my suggested reading list in these cases...🙂

I think it's important to remember that there is a great deal of self-fulfilling prophecy to many psychopathologies. Just because CBT has great efficacy does not mean it necessarily uncovers the universal truth about what is causing their problems. Ignoring the client's view of their psychopathology, and thus ignoring their theoretical orientation, amounts to "strong-arming" IMHO. It's all grist for the therapeutic mill......
 
I think that's because patient preference often dictates what they will comply with during treatment. When I continuously hit a wall with my first method (CBT), I often find myself exploring what this resistance is about and where it comes from. This inevitably leads to more exploratory, interpersonal-type sessions that focus on past issues and existential and philosophical questions/issues. Viktor Frankl's Man's Search for Meaning is always high on my suggested reading list in these cases...🙂

I think it's important to remember that there is a great deal of self-fulfilling prophecy to many psychopathologies. Just because CBT has great efficacy does not mean it necessarily uncovers the universal truth about what is causing their problems. Ignoring the client's view of their psychopathology, and thus ignoring their theoretical orientation, amounts to "strong-arming" IMHO. It's all grist for the therapeutic mill......

👍 I completely agree with this. CBT is fantastic on paper and often in practice. But there are clients that it will not jive with and you will have to be able to change gears.
 
This inevitably leads to more exploratory, interpersonal-type sessions that focus on past issues and existential and philosophical questions/issues.

Ignoring the client's view of their psychopathology, and thus ignoring their theoretical orientation, amounts to "strong-arming" IMHO. It's all grist for the therapeutic mill......

With all due respect, you are suggesting giving a patient the treatment they want, even when you know it won't work. That is like offering a patient with panic disorder aspirin because they heard that aspirin is a panacea. Psychologists should be trained in using proven treatments, not snake oil!

Any therapist trained in CBT or BT is trained (or should be) in attempting to see things from their patient's point of view/worldview. The vast majority of CBT therapists I have known are as good as anyone at expressing accurate empathy and developing rapport with patients.

Are you suggesting that CBT and other empirical treatments do not do this? CBT therapists are trained to speak with a patient to determine their symptoms and their beliefs about their symptoms, and then develop an EMPIRICAL treatment. How could anyone defend administering an untried/non-empirical treatment? How would you explain that to your patient? "Hello Mr. X, you appear to be in a major depressive episode. I could use CBT to help you reduce your symptoms, but instead I want to talk about your childhood and your dreams."

If motivation for treatment is low why not use Motivational Interviewing? It is EMPIRICALLY supported.
 
👍 I completely agree with this. CBT is fantastic on paper and often in practice. But there are clients that it will not jive with and you will have to be able to change gears.

I agree with part of what you say here. Some patients will not respond to therapy at first, and therapists must be flexible in their approach. This does not mean that the therapist should change orientations from empirical to non-empirical. Where is the logic in that? Why would you change to a non-empirical/unproven treatment? Do you tell your patients that you will be using an unproven treatment to attempt to attenuate their symptoms? If not, you are violating APA ethics rules.

I would appreciate any research available that shows unproven treatments work when CBT fails.
 
With all due respect, you are suggesting giving a patient the treatment they want, even when you know it won't work. That is like offering a patient with panic disorder aspirin because they heard that aspirin is a panacea. Psychologists should be trained in using proven treatments, not snake oil!

Any therapist trained in CBT or BT is trained (or should be) in attempting to see things from their patient's point of view/worldview. The vast majority of CBT therapists I have known are as good as anyone at expressing accurate empathy and developing rapport with patients.

Are you suggesting that CBT and other empirical treatments do not do this? CBT therapists are trained to speak with a patient to determine their symptoms and their beliefs about their symptoms, and then develop an EMPIRICAL treatment. How could anyone defend administering an untried/non-empirical treatment? How would you explain that to your patient? "Hello Mr. X, you appear to be in a major depressive episode. I could use CBT to help you reduce your symptoms, but instead I want to talk about your childhood and your dreams."

If motivation for treatment is low why not use Motivational Interviewing? It is EMPIRICALLY supported.

Hmmmm....from reading this post, I think the real underlying issue is one of drastically different orientations toward the development of psychopathology. I have never been trained to treat a disorder (i.e., your MDE example), but rather a person who is in some kind of distress. The sources and causes of this distress are as varied as you can imagine.

However, just to clarify, I did make it clear that I start with CBT, and if the client is significantly resistant, I often follow their lead unless there are significant overt symptoms that need to be addressed (e.g., HI/SI, or severe impairments in functioning). In other words, I'm not the one who says "I want to talk about your childhood and your dreams"........ they are. There is no sense hitting the wall over and over, and there is no sense in forcing clients into exercises, frameworks, etc. that they are not compliant with and not ready to commit to yet. Again, I think that my willingness to follow their lead is primarily due to our differing approaches to the development of psychopathology and the nature and role of the therapist. Since I view clients as my boss, I would find it poor form to say, "no, we can't talk about that." Because frankly, maybe we do, who knows. MAYBE THAT'S ALL THAT PERSON REALLY NEEDS - SOMEONE TO ACTUALLY BE THERE WITH THEM IN THE MOMENT. MAYBE THAT PERSON DOESN'T WANT ALL THE OTHER STUFF? Maybe no one ever gave the time, patience, and attention to work through their traumas, frustrations, disappointments, etc. I'm not in the business of strong-arming clients into my way/view (of therapy) or the highway.
 
With all due respect, you are suggesting giving a patient the treatment they want, even when you know it won't work.

Uhm, how do I know this? I certainly can't say that for certain.

It strikes me that you have the mentality that anything that is not heavily empirically researched is automatically assumed to be a waste of everyone's time. Where did this come from? This strikes me as very concrete thinking, considering the nature of the constructs and problems we are dealing with during the therapeutic hour.

I have had many clients for whom talking about past experiences and rehashing old issues was indeed a waste of time (i.e., unproductive), and I quickly redirected the focus. Conversely, I have had clients who found working through these issues to produce enormous insights, and the catharsis they derived from it was enormously helpful to them. Again, sometimes people just want to be listened to. Just because it didn't seem to work well at the group level in a research trial doesn't mean it won't work for certain individuals from time to time. To me, all this is much like psychiatrists and their use of psychopharm knowledge. The literature heavily guides practice, but the demands of seeing individuals (not whole groups) demand doing what works for that individual.
 
I'll address each of your statements in turn:

I think the real underlying issue is one of drastically different orientations toward the development of psychopathology.

I think you may be partially onto something here. However, what you fail to consider is that the cause of one's problems may not be responsible for perpetuating the problem. In other words, abuse during childhood may result in depression, but "catharsis" about childhood issues probably won't attenuate the depression long-term.

However, just to clarify, I did make it clear that I start with CBT, and if the client is significantly resistant, I often follow their lead ...

Again, refer to my previous post about providing unproven treatments to patients instead of treatments with proven efficacy. If motivation or "resistance" is a problem, why not use empirically supported treatments like motivational interviewing? My point is that clinical psych students should be trained in empirical treatments like these, rather than unproven treatments like long-term psychoanalysis.

Since I view clients as my boss, I would find it poor form to say, "no, we can't talk about that." Because frankly, maybe we do, who knows. MAYBE THAT'S ALL THEY REALLY NEED - SOMEONE TO ACTUALLY BE THERE WITH THEM IN THE MOMENT.

I don't actually disagree with you here. Of course therapists should not dictate topics for discussion with patients, or suggest that certain topics are off limits. However, switching to unproven treatments when motivation wanes or when the patient is resistant to suggestions seems counterproductive. Motivation tends to wax and wane throughout treatment. All I am saying is that there are proven techniques to increase motivation, and these techniques should be tried before trying unproven treatments.

I'm not in the business of strong-arming clients into my way/view (of therapy) or the highway.

Good for you. I don't think most therapists - no matter their orientation - "strong-arm" their patients. Who would ever recommend such a thing?
 
Again, refer to my previous post about providing unproven treatments to patients instead of treatments with proven efficacy. If motivation or "resistance" is a problem, why not use empirically supported treatments like motivational interviewing?

Well, who said I'm not? I'm not following the MI manual or anything, but a lot of what I do when I attempt to work through resistance utilizes its principal components. All four of its principal components have been used for years. Psychology seems to have this fascination with rediscovering ideas/processes that already exist and have been used for years, and then giving them fancy names after running some clinical trials on them.

"All my greatest ideas were stolen by the ancients"..... right?

Just because I don't view it as "doing MI" doesn't mean I'm not utilizing it. And just because I might explore other issues of the client's past and childhood at the same time doesn't mean I'm not "doing MI" either.
 
I agree with part of what you say here. Some patients will not respond to therapy at first, and therapists must be flexible in their approach. This does not mean that the therapist should change orientations from empirical to non-empirical. Where is the logic in that? Why would you change to a non-empirical/unproven treatment? Do you tell your patients that you will be using an unproven treatment to attempt to attenuate their symptoms? If not, you are violating APA ethics rules.

I would appreciate any research available that shows unproven treatments work when CBT fails.

I think you're extrapolating this a bit too far. When traditional CBT does not seem to jive with my client's perspective, it doesn't mean I completely abandon it as my basis for conceptualization or treatment. What it does mean is I may become more relational or channel my inner Ellis, depending on the client. I don't think I could conceptualize or treat from a non-CBT perspective. I also don't think it's unethical to tailor your approach to incorporate some of the "non-proven" treatments to help build the therapeutic relationship as long as you're utilizing what the research says is effective in the long term (e.g. exposure + response prevention for anxiety, etc.). I do think it's a problem to completely ignore the client's needs in terms of your style. I personally find Judy Beck's book robotic and would never use that rigid of an approach. I don't set agendas in session, I don't give clients paper sheets to fill out their daily thought records on. I do have them carry a small journal with them to keep track of their thoughts. I know that there have been clients I've seen who would have bailed completely on treatment if I had not changed my approach to meet them in the middle somewhere. Hope that clarifies where I'm coming from.
 
Well, who said I'm not? I'm not following the MI manual or anything, but a lot of what I do when I attempt to work through resistance utilizes its principal components. All four of its principal components have been used for years. Psychology seems to have this fascination with rediscovering ideas/processes that already exist and have been used for years, and then giving them fancy names after running some clinical trials on them.

"All my greatest ideas were stolen by the ancients"..... right?

Just because I don't view it as "doing MI" doesn't mean I'm not utilizing it. And just because I might explore other issues of the client's past and childhood at the same time doesn't mean I'm not "doing MI" either.
This, to me, begs the question: are you using empirically supported treatments if you aren't following the "manual" line by line? If that's the case, I have never used them. Human beings rarely fall neatly into a category, in my experience. Having the research guide my practice is one thing; following a manual despite the fact that it's not working is not something I plan on doing...
 
I would argue that evidence-based practice need not involve following a manual line-for-line. Whether it is an EST is a bit more strict, but I would still argue that there is a "window" of what is acceptable. For instance, manuals typically are built around a specific number of sessions. Many of my clients have had very limited educational backgrounds and are not familiar with analyzing their thoughts. Getting them to understand the idea of thought challenges can be quite difficult, so if it takes 3 sessions to get them to understand this instead of 1, I am technically "off manual," but I don't think even the strictest EST adherents would argue that it isn't legitimate.

We work with humans. Not cells, not chemicals. By nature, that is going to require some flexibility; the question is where to draw the line. If a client comes in with, say, OCD (to pick one we clearly have great data on) and the therapist begins 2 years' worth of twice-weekly psychoanalysis without ever mentioning ERP, that is pretty clearly unethical in my eyes. If the client is informed of ERP and refuses, or if there is a legitimate reason to believe ERP is not appropriate for this case, then fine. My problem is that this doesn't happen (at least not often). Instead, people pick "I'm a psychodynamic therapist" or "I'm a CBT therapist" and screw the evidence, because they think their way is better no matter what, because their gut says CBT is missing something, or because they think psychodynamic therapy takes too long. Client preferences absolutely need to be considered. However, the client should be an informed consumer.

We can argue the merits of non-specific factors. I'm not familiar enough with the literature to comment on exactly how methodologically sound many of the individual numbers that get quoted are. However, I will make the point that if you were diagnosed with cancer, and 70% of the variance in outcome was due to use of Drug A, but you could tack on another 10% by adding Drug B as an adjunct - how would you feel if you found out your doctor ignored Drug B because "Drug A is really all most people need, and probably good enough"? To push it a step further, I doubt that many of the individuals quoting those numbers are any more familiar with the nonspecifics literature than I am, so they are probably not adequately using that literature either (I can see it now - "I get along with my patients, I must be doing a good job!").
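For anyone who wants to see the arithmetic behind that analogy: it is exactly what the R-squared change in a hierarchical regression reports. A minimal sketch in Python, with simulated numbers chosen only to mirror the hypothetical 70%/10% split - "drug_a" and "drug_b" are made-up stand-ins, and nothing here comes from actual outcome data:

```python
# Toy version of the Drug A / Drug B arithmetic above. All data are
# simulated and purely illustrative; the effects are chosen so that A
# accounts for roughly 70% of outcome variance and B roughly 10%.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
drug_a = rng.normal(size=n)   # hypothetical primary treatment factor
drug_b = rng.normal(size=n)   # hypothetical adjunct factor
outcome = 0.84 * drug_a + 0.32 * drug_b + rng.normal(scale=0.45, size=n)

m1 = sm.OLS(outcome, sm.add_constant(drug_a)).fit()  # Step 1: A alone
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([drug_a, drug_b]))).fit()  # Step 2: A + B

print(f"R^2, Drug A only:     {m1.rsquared:.2f}")
print(f"R^2, Drug A + Drug B: {m2.rsquared:.2f}")
print(f"Variance added by B:  {m2.rsquared - m1.rsquared:.2f}")
```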
 
I would argue that evidence-based practice need not involve following a manual line-for-line...

There is a phrase for this - theoretically consistent eclecticism. For example, I do therapy primarily based in REBT, augmented with feminist, multicultural, and counseling psych conceptualizations and interventions. Most everything I do in the therapy room is informed by this. For example, I'd focus on present manifestations and effects of mom's negativity toward a client, but really almost never the cathartic experience of talking *about* the old transgressions (because I think they have almost nothing to do with present self-talk and neuroses). BUT, I say almost never, because there are DEFINITELY circumstances in which I would have gone beyond the interventions laid out by those theoretical models in order to further a goal of one of those models. So, for example, I've done dream interpretation with a client. I think dream interpretation itself is, at best, a fun little useless creative activity. But, in this circumstance, it accomplished exactly the goal intended - building rapport with a reluctant client. If I had not done it, he would not have come back (and worked hard for 20 sessions, and thrived in therapy, and overcame what we later recognized as potentially rather severe clinical depression totally without meds, and enacted lasting life changes). So, this totally unscientific intervention was applied in an exactly scientific manner to the treatment of a client. Imagining that we're working with perfect clients on whom manualized treatments will automatically work is not a perspective I consider well thought out or informed by actual clinical experience or research evidence.
 
I also don't think it's unethical to tailor your approach to incorporate some of the "non-proven" treatments to help build the therapeutic relationship as long as you're utilizing what the research says is effective in the long term (e.g. exposure + response prevention for anxiety, etc.). I do think it's a problem to completely ignore the client's needs in terms of your style.

Partially agree. Of course I agree that it is ethical to tailor treatments to patients. I just disagree that including "non-proven" treatments is ever necessary. Being supportive and listening to a patient is part of any psychotherapy, proven or unproven. I doubt that any decent therapist would ignore a client's needs. Where do you get the idea that empirically based therapists would "ignore a client's needs?" It seems that you are equating CBT and other ESTs with low warmth and low empathy. Where is the research supporting that idea?

I know that there have been clients I've seen who would have bailed completely on treatment if I had not changed my approach to meet them in the middle somewhere. Hope that clarifies where I'm coming from.

I think most of us can relate to your experience here. We just disagree that it is necessary to change your "therapeutic approach" when patients become resistant. Being flexible is part of being a good therapist. Reevaluating what a patient wants/needs, getting a better idea of what is driving a patient's symptoms, and setting smaller goals are all useful strategies in these situations.
 
This, to me, begs the question: are you using empirically supported treatments if you aren't following the "manual" line by line? If that's the case, I have never used them. Human beings rarely fall neatly into a category, in my experience. Having the research guide my practice is one thing; following a manual despite the fact that it's not working is not something I plan on doing...

Completely agree. My argument is that clinical psych programs should focus on teaching ESTs, rather than unproven treatments.
 
If a client comes in with, say, OCD (to pick one we clearly have great data on) and the therapist begins 2 years' worth of twice-weekly psychoanalysis without ever mentioning ERP, that is pretty clearly unethical in my eyes. If the client is informed of ERP and refuses, or if there is a legitimate reason to believe ERP is not appropriate for this case, then fine. My problem is that this doesn't happen (at least not often). Instead, people pick "I'm a psychodynamic therapist" or "I'm a CBT therapist" and screw the evidence, because they think their way is better no matter what, because their gut says CBT is missing something, or because they think psychodynamic therapy takes too long. Client preferences absolutely need to be considered. However, the client should be an informed consumer.

Brilliant!
 
I am glad that there is so much interest in this thread. I think we all agree that manualized ESTs can and should be tweaked to meet a client's needs. The major points that I want to make are:

1) We should follow the science and provide the treatment most likely to help our clients

2) Clinical psych programs should focus on training students to deliver treatments that have empirical support

3) Treatments with no or very little empirical support should not be the focus of clinical training programs unless other, more effective treatments are not available
 
I personally find Judy Beck's book robotic and would never use that rigid of an approach. I don't set agendas in session, I don't give clients paper sheets to fill out their daily thought records on. I do have them carry a small journal with them to keep track of their thoughts. I know that there have been clients I've seen who would have bailed completely on treatment if I had not changed my approach to meet them in the middle somewhere. Hope that clarifies where I'm coming from.

I got pissed off just filling out thought records for class :laugh:
 
There is a phrase for this - theoretically consistent eclecticism.

One of the challenges is utilizing multiple orientations without violating major tenets of any one orientation. I have come across a number of people who self-identify as "eclectic," but the way in which they practice can be contradictory at times. It isn't that they are being unethical, but they are being incongruent with one or more "pillars" of a particular orientation. It may not seem like a big deal, but it can often devolve into a style that doesn't really have any basis, and thus doesn't have empirical support. One thing I have learned during supervision is the importance of keeping theoretical framework in the forefront of your mind as you are working with your patient.

(more to come in a bit)
 
One of the challenges is utilizing multiple orientations without violating major tenets of any one orientation.

Yup, absolutely. I think that phrase was invented to differentiate what you and I mean from "I do whatever I feel like" eclecticism.
 
The major points that I want to make are:

1) We should follow the science and provide the treatment most likely to help our clients

2) Clinical psych programs should focus on training students to deliver treatments that have empirical support

3) Treatments with no or very little empirical support should not be the focus of clinical training programs unless other, more effective treatments are not available

Do others agree/disagree/both with these points?
 
Yup, absolutely. I think that phrase was invented to differentiate what you and I mean from "I do whatever I feel like" eclecticism.

"Technical Eclecticism" is often the term I see used to describe people who combine techniques from multiple orientations, with the understanding that the techniques are not in conflict with the chose orientations.

I identify most with a Technical Eclecticism model, though it can be quite complicated to have everything mesh. A person really needs to be grounded in multiple orientations before they can effectively utilize Technical Eclecticism. Only knowing a bit about a few different orientations really can set a person up to violate one or more "pillars" of an orientation.

My approach to ED treatment falls under a Technical Eclecticism Model, and I'd actually like to collect some data on it....you know, with all of that free time I have. 😀 The vast majority of my therapy work now (because of the hospital/brief intervention setting) is straight up DBT, CBT, CPT, and PET....none are really my preference, but they are the EBTs for each area.

For anyone looking to work in the VA...they are really pushing CPT (Cognitive Processing Therapy) and PET (Prolonged Exposure Therapy) for PTSD, so it'd behoove you to know.
 
Interestingly enough, that article is now making the rounds in our department.
 
For anyone looking to work in the VA...they are really pushing CPT (Cognitive Processing Therapy) and PET (Prolonged Exposure Therapy) for PTSD, so it'd behoove you to know.

The VA where I did my internship used these 2 treatments very effectively. The VA system seems to recognize the value of implementing ESTs.
 
Seriously, can you even pretend to be in a research-based doctoral program if you are using SPSS? I don't know anyone within my doctoral program that uses SPSS. SPSS is what is used for undergrad psych students.

Pretty much invalidates all the 'evidence-based' junk you've been spouting.

Finally had a chance to go through it cover to cover.

The first half (Data-driven decision making and Merits of psychosocial interventions) I thought was great. Obviously a lot more could have been said on the topic, but I think they covered the important points. At least to me, this all fell under "duh" and shouldn't really be controversial, though others may have a different take. I do think they glamorize the medical model, and I'm not entirely sure why. My primary reason for choosing psychology over medicine is I feel like psychologists at least have the potential to be leaders in evidence-based practice due to our dual training in science and clinical work, even if it hasn't worked out that way. I agree that, in general, physicians are more likely to AGREE that the research is important than psychologists are. However, they cite a study showing that 85% of patients were receiving good evidence-based care. This is definitely a case of selective citing - I'm moderately familiar with the literature, and that is by FAR the highest number I have seen. I don't think that invalidates the need for evidence-based practice. I think it just means physicians collectively suck a little less than we do when it comes to this, rather than a lot less, as they portray.

I think they do a fair job of owning up to the failures of scientists in terms of translating research. They acknowledged nonspecifics, which was good, but I thought the analogy to medicine was kind of inane since a therapeutic relationship is SO qualitatively different. I think they would agree scientists haven't done their job in making evidence-based practice realistic and support changes in that direction.

It should come as no surprise to folks here, but I still agree with where they end up, even if the middle of the paper (especially the forced analogies to a medical model) is junk. I absolutely agree that the status quo cannot continue. I think blame is assigned appropriately (note that they make the ever-important distinction between professional schools in general versus the PsyD degree itself). They don't pull punches here, and again I think they overstate their case by focusing on the extreme situations, since as others have already pointed out, some professional schools have more rigid research requirements than others, not to mention the variance by student, by advisor, etc. That said, I'm obviously on board with the fact that these schools need to either get their acts together or be publicly branded. To steal JN's example, if someone is finishing grad school without even knowing what SPSS is, there's simply no way they are going to be even remotely capable of evidence-based practice in even the loosest definition of the phrase. In my opinion, it is COMPLETELY unethical on the part of everyone involved to let them have a doctorate of any kind. I felt that, if anything, they were too nice toward APA, which in my eyes has COMPLETELY dropped the ball out of fear of losing the members who think underwater primal scream therapy wearing a purple hat is the greatest thing since sliced bread and won't consider anything beyond their own distorted, irrational thoughts.

I agree with the accreditation system in principle. Frankly, I think APA's behavior is pretty disgusting across the board (found out there is currently some underhanded political maneuvering designed to prevent this accreditation system from ever becoming recognized by licensing bodies so APA can maintain the crap-opoly). Whether this system will succeed remains to be seen. The materials here don't provide enough info about the accreditation process itself for me to feel comfortable saying. My gut reaction is that they may push it too far at first to differentiate themselves and will need to pull back (esp. with regards to not having "input" requirements). I think there is merit to a stronger focus on outcomes, but there needs to be balance. I think it walks a fine line between being "Accreditation for researchers only" versus true clinical science, and I would hate to see many of these concepts recognized as ideals only by researchers since that defeats the purpose of such a movement. I do recognize that many of the ideals put forth are widely varying in how realistic it is to attain them.

Overall, I pretty much stick by my earlier statement. I absolutely, 100% agree that this is the direction psychology needs to move in. I think it's fantastic that APA finally has some competition from people who aren't afraid of numbers and are willing to step up to the plate and prove they are effective, rather than simply saying "Trust us, we're helpful. Now give us money." As to whether this accreditation system will succeed, or even deserves to succeed, I remain a skeptical cheerleader.
 
I find this whole discussion interesting and have been talking about it quite a bit with other students and faculty members, basically since I got to grad school.

If I had to pick which orientation I prefer, it would be CBT. And I agree that therapy should all be evidence-based. Something I've realized through all these conversations, and as I become more exposed to clinical work, is that we should be aware of the limitations of the current research methods being used to evaluate psychotherapy, particularly outcome measures. Most of the outcome measures used are designed to find CBT effective. For example, if you conduct a trial comparing CBT to psychodynamic therapy and use the BDI as an outcome measure, of course it will find CBT more effective. CBT aims to reduce cognitive symptoms and change behavior; dynamic therapy aims to help clients resolve interpersonal conflicts and address ego defenses. The outcome measures seem to be stacked against dynamic.

That said, this should not be a reason to completely reject evidence-based practice. Dynamic therapists should have the obligation to develop quantifiable outcome measures that are still true to psychodynamic theory, and then demonstrate its effectiveness.

Just some thoughts.
 
Seriously, can you even pretend to be in a research-based doctoral program if you are using SPSS? I don't know anyone within my doctoral program that uses SPSS. SPSS is what is used for undergrad psych students.

Pretty much invalidates all the 'evidence-based' junk you've been spouting.

🙄

Welcome to ignore! Only a very select few board members have proven themselves foolish enough that I no longer consider anything they say worth reading, so you can be proud to be a member of a rather elite group. We all have to excel at something; it's just a shame you've chosen failure.
 
Yes, because all those R01 research studies I worked on at WPIC and UCSF are soooo undergrad...give me a break, pal......

He would need to explain this comment, as it makes no sense to me. Besides the obvious software glitches in the new version, what exactly are the functional problems with it? It's more than adequate for the questions and the designs used by the majority of doctoral students.

What research are you doing in which SPSS fails miserably?
 
I guess I'm not in a doctoral program either, then, Ollie. I was using SPSS just today. 😉
 
Have to respectfully disagree about the outcome measures being biased toward cognitive-behavioral therapy: the BDI taps affective, behavioral, and somatic/vegetative sxs of depression because symptoms of depression hang together in these three factors. Hence, any treatment, including pharmacotherapy or dynamic treatment, will have to demonstrate a reduction in one or more of these clusters of symptoms to be shown effective for depression.

The MAIN difference b/t dynamic theory and cognitive theory is that dynamic theory posits that what cognitive therapy calls "core beliefs/schemas" cannot be made conscious, while CBT holds the opposite. However, as Padesky and Beck have shown, schemas are in the realm of consciousness..........

If I had to pick which orientation I prefer, it would be CBT. And I agree that therapy should all be evidence-based. Something I've realized through all these conversations, and as I become more exposed to clinical work, is that we should be aware of the limitations of the current research methods being used to evaluate psychotherapy, particularly outcome measures. Most of the outcome measures used are designed to find CBT effective. For example, if you conduct a trial comparing CBT to psychodynamic therapy and use the BDI as an outcome measure, of course it will find CBT more effective. CBT aims to reduce cognitive symptoms and change behavior; dynamic therapy aims to help clients resolve interpersonal conflicts and address ego defenses. The outcome measures seem to be stacked against dynamic.
 
Seriously, can you even pretend to be in a research-based doctoral program if you are using SPSS? I don't know anyone within my doctoral program that uses SPSS. SPSS is what is used for undergrad psych students.

Pretty much invalidates all the 'evidence-based' junk you've been spouting.

Way to illustrate why we try not to depend on anecdotal evidence. Plenty of research-based PhD programs use SPSS.
 
The part that amuses me the most is that I just said that one should know what SPSS is; I didn't say a word about using it. Yay for basic reading skills. I actually do use it for most things, but I'm also comfortable with SAS or a number of other options, and just attended the Matlab conference a couple of days ago to help facilitate working with EEG data. As others have mentioned, SPSS is more than adequate for the majority of stats that people do. It was a bit behind SAS on implementing HLM and GEE, but it seems to be fine for both now. SPSS definitely has its limitations (e.g., requires Amos for some things, can't really use MI, tough for EEG/fMRI data), but it's not like we're only allowed to learn one statistical program.
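As a side note on the software question: HLM- and GEE-style models are not tied to any one package. Here is a minimal sketch of both in Python's statsmodels, run on made-up repeated-measures data (40 hypothetical subjects, 5 sessions each); the point is the model, not the tool:

```python
# Illustrative only: simulated "therapy outcome" scores that decline
# across sessions, with subject-level variability baked in.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_sessions = 40, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_sessions),
    "session": np.tile(np.arange(n_sessions), n_subjects),
})
subject_effect = rng.normal(scale=2.0, size=n_subjects)
df["score"] = (20 - 1.5 * df["session"]
               + subject_effect[df["subject"]]
               + rng.normal(scale=2.0, size=len(df)))

# HLM-style: linear mixed model with a random intercept per subject
hlm = smf.mixedlm("score ~ session", df, groups=df["subject"]).fit()
print(hlm.summary())

# GEE-style: population-averaged model, exchangeable within-subject correlation
gee = smf.gee("score ~ session", groups="subject", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```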
 
Seriously, can you even pretend to be in a research-based doctoral program if you are using SPSS? I don't know anyone within my doctoral program that uses SPSS. SPSS is what is used for undergrad psych students.

Pretty much invalidates all the 'evidence-based' junk you've been spouting.

Is this some kind of bad joke? You could do hard science in Excel if your research questions were appropriate. This reveals a fundamental lack of understanding about the process of science.
 
Is this some kind of bad joke? You could do hard science in Excel if your research questions were appropriate. This reveals a fundamental lack of understanding about the process of science.


Exactly right. The process of science is about rigor and logic. It is about how you think about things: identifying biases, using psychometrically sound measures, identifying appropriate IVs/DVs, and so on. Simply knowing the latest computer program, whether it be SPSS or something else, does not turn bad research or even mediocre research into good research.

Of course people use SPSS to analyze good data from a good research design.
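To make that concrete: the Welch two-sample t statistic underneath a basic group comparison is a few lines of arithmetic that any tool, from SPSS to a spreadsheet, will reproduce identically. A toy example with made-up numbers:

```python
# Welch's t from first principles: just means, variances, and a square
# root, so the software is incidental. The data below are fabricated
# purely for illustration.
import math

group_a = [23.0, 19.5, 25.1, 22.4, 20.8, 24.3]
group_b = [18.2, 17.9, 21.0, 16.5, 19.4, 18.8]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    # unbiased sample variance (n - 1 in the denominator)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

standard_error = math.sqrt(sample_var(group_a) / len(group_a)
                           + sample_var(group_b) / len(group_b))
t = (mean(group_a) - mean(group_b)) / standard_error
print(f"Welch's t = {t:.2f}")
```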
 
Interesting editorial in Nature that speaks to this very topic:

http://www.nature.com/nature/journal/v461/n7266/full/461847a.html

Here is the complete post:

Editorial

Nature 461, 847 (15 October 2009) | doi:10.1038/461847a; Published online 14 October 2009

Psychology: a reality check
Abstract

If clinical psychology in the United States wants to remain viable and relevant in today's health systems, it needs to publicly embrace science.

Anyone reading Sigmund Freud's original works might well be seduced by the beauty of his prose, the elegance of his arguments and the acuity of his intuition. But those with a grounding in science will also be shocked by the abandon with which he elaborated his theories on the basis of essentially no empirical evidence. This is one of the main reasons why Freudian-style psychoanalysis has long since fallen out of fashion: its huge expense — treatment can stretch over years — is not balanced by evidence of efficacy.

Clinical psychology at least has its roots in experimentation, but it is drifting away from science. Concerns about cost–benefit issues are growing, especially in the United States. According to a damning report published last week (T. B. Baker et al. Psychol. Sci. Public Interest 9, 67–103; 2008), an alarmingly high proportion of practitioners consider scientific evidence to be less important than their personal — that is, subjective — clinical experience.

The irony is that, during the past 20 years, science has made great strides in directions that could support clinical psychology — in neuroimaging, for example, as well as molecular and behavioural genetics, and cognitive neuroscience. Numerous psychological interventions have been proved to be both effective and relatively cheap. Yet many psychologists continue to use unproven therapies that have no clear outcome measures — including, in extreme cases, such highly suspect regimens as 'dolphin-assisted therapy'.

There is a moral imperative to turn psychology into a robust and valued science.

The situation has created tensions within the American Psychological Association (APA), the body that accredits the courses leading to qualification for a clinical psychologist to practise in the United States and Canada. The APA requires that such courses have a scientific component, but it does not require that science be as central as some members would like. In frustration, representatives of some two-dozen top research-focused graduate-training programmes grouped together in 1994 to form the Academy of Psychological Clinical Science (APCS), with a mission to promote scientific psychology.

The APCS effort has not been enough to change attitudes among all practitioners. But, in the United States, political pressure for change is building rapidly. The debates swirling around health-care reform have made it clear that key decision-makers expect medical caregivers to justify their therapies in terms of proven cost-effectiveness. If clinical psychologists cannot do this plausibly, they will be marginalized.

A quick and effective way to break this impasse would be to create a US version of the system that transformed clinical psychology (and medical practice generally) in England and Wales. There, the National Institute for Health and Clinical Excellence (NICE) evaluates therapies for evidence of efficacy, and approves the ones to be covered by the state health system (see Nature 461, 336–339; 2009). Private health insurers are influenced by NICE's decisions, and any clinical psychologist wishing to offer dolphin-assisted therapy in Britain will be hard-pushed to find patients.

For many opponents of health-care reform in the United States, however, NICE represents the epitome of big-government intrusion into individual freedom of choice; it remains to be seen whether such a body can ever be created in America. Still, as Baker et al. point out, interested US psychologists could take matters into their own hands by establishing a new accreditation system for scientifically trained psychologists in parallel with the APA system.

The APCS is well-positioned to take such a step. But whoever takes it should do so soon. Unmet mental-health needs are massive and growing: the number of Americans receiving mental-health care has almost doubled in the past 20 years. There is a moral imperative to turn the craft of psychology — in danger of falling, Freud-like, out of fashion — into a robust and valued science informed by the best available research and economic evidence.
 
Temper, temper, people. Those familiar with SAS know that it can be more efficient for handling complex data analysis than SPSS. To be honest, I was just being sarcastic (I have used both SPSS and SAS in research myself).

My main point, which I did not take the time to write (though I had intended to do so), was in regards to the wider point being discussed on EBIs. You all really need to get over yourselves - there is a reason that master's-level LPCs or social workers have client outcomes that are JUST as effective as PhD-level psychs - namely, it is the client/clinician match, not a specific therapy, that is most effective. Certain treatments can help certain individuals, but the rapport that gets established is the most effective predictor of treatment success. So get over the evidence-based b#$% already. Certainly, I try to learn different therapies and EBIs that work, and recommend that others do the same. But there is no certainty with 'science'. The only certainty I know of regarding science is that if one pushes too strenuously for "data based outcomes" and "EBIs," they risk disconnecting from the client and ruining any positive outcomes.

To those who disagree - show me evidence to the contrary.

Yes, because all those R01 research studies I worked on at WPIC and UCSF are soooo undergrad...give me a break, pal......

He would need to explain this comment, as it makes no sense to me. Besides the obvious software glitches in the new version, what exactly are the functional problems with it? It's more than adequate for the questions and the designs used by the majority of doctoral students.

What research are you doing in which SPSS fails miserably?
 
Certain treatments can help certain individuals, but the rapport that gets established is the most effective predictor of treatment success.

You're misunderstanding the Common Factors perspective, pretty significantly.

Rapport between a physician and patient is also the best predictor of medication adherence. This doesn't mean that the rapport and not the medication keeps the patient's blood pressure down.
 
And I agree that therapy should all be evidence-based. Something I've realized through all these conversations, and as I become more exposed to clinical work, is that we should be aware of the limitations of the current research methods being used to evaluate psychotherapy, particularly outcome measures. Most of the outcome measures used are designed to find CBT effective.

That said, this should not be a reason to completely reject evidence-based practice. Dynamic therapists should have the obligation to develop quantifiable outcome measures that are still true to psychodynamic theory, and then demonstrate its effectiveness.

So you agree that until dynamic therapies are proven effective, clinical psychology PhD programs should focus on teaching ESTs? Why waste students' time with unproven magical therapies? Do your professors agree with this, or do they formulate excuses for teaching the unproven therapies that they were taught 20-30 years ago?
 