Why do psychologists reject science???


busi26
I think it is amazing that so many psychologists reject what scientific studies have to say about treatments that work. Why is it that many psychologists rely more heavily on their own "intuition" rather than controlled scientific studies? The recent article below lays out this issue and suggests a new accreditation system to recognize training in the use of empirically based treatments. http://www.newsweek.com/id/216506/output/print

 
Here is the article in case people want to quote it.

------------------------------------------------------------------
Why do psychologists reject science?
By Sharon Begley | NEWSWEEK
Published Oct 2, 2009
From the magazine issue dated Oct 12, 2009

It's a good thing couches are too heavy to throw, because the fight brewing among therapists is getting ugly. For years, psychologists who conduct research have lamented what they see as an antiscience bias among clinicians, who treat patients. But now the gloves have come off. In a two-years-in-the-making analysis to be published in November in Perspectives on Psychological Science, psychologists led by Timothy B. Baker of the University of Wisconsin charge that many clinicians fail to "use the interventions for which there is the strongest evidence of efficacy" and "give more weight to their personal experiences than to science." As a result, patients have no assurance that their "treatment will be informed by science." Walter Mischel of Columbia University, who wrote an accompanying editorial, is even more scathing. "The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment," he told me, and there is a "widening gulf between clinical practice and science."

The "widening" reflects the substantial progress that psycho-logical research has made in identifying the most effective treatments. Thanks to clinical trials as rigorous as those for, say, cardiology, we now know that cognitive and cognitive-behavior therapy (teaching patients to think about their thoughts in new, healthier ways and to act on those new ways of thinking) are effective against depression, panic disorder, bulimia nervosa, obsessive-compulsive disorder, and -posttraumatic-stress disorder, with multiple trials showing that these treatments—the tools of psychology—bring more durable benefits with lower relapse rates than drugs, which non-M.D. psychologists cannot prescribe. Studies have also shown that behavioral couples therapy helps alcoholics stay on the wagon, and that family therapy can help schizophrenics function. Neuroscience has identified the brain mechanisms by which these interventions work, giving them added credibility.

You wouldn't know this if you sought help from a typical psychologist. Millions of patients are instead receiving chaotic meditation therapy, facilitated communication, dolphin-assisted therapy, eye-movement desensitization, and well, "someone once stopped counting at 1,000 forms of psychotherapy in use," says Baker. Although many treatments are effective, they "are used infrequently," he and his coauthors point out. "Relatively few psychologists learn or practice" them.

Why in the world not? Earlier this year I wrote a column asking, facetiously, why doctors "hate science," meaning why do many resist evidence-based medicine. The problem is even worse in psychology. For one thing, says Baker, clinical psychologists are "deeply ambivalent about the role of science" and "lack solid science training"—a result of science-lite curricula, especially in Psy.D. programs. Also, one third of patients get better no matter what therapy (if any) they have, "and psychologists remember these successes, attributing them, wrongly, to the treatment. It's very threatening to think our profession is a charade."

When confronted with evidence that treatments they offer are not supported by science, clinicians argue that they know better than some study what works. In surveys, they admit they value personal experience over research evidence, and a 2006 Presidential Task Force of the American Psychological Association—the 150,000-strong group dominated by clinicians—gave equal weight to the personal experiences of the clinician and to scientific evidence, a stance they defend as a way to avoid "cookbook medicine." A 2008 survey of 591 psychologists in private practice found that they rely more on their own and colleagues' experience than on science when deciding how to treat a patient. (This is less true of psychiatrists, since these M.D.s receive extensive scientific training.) If they keep on this path as insurers demand evidence-based medicine, warns Mischel, psychology will "discredit and marginalize itself."

If public shaming doesn't help, Baker's team suggests a new accreditation system to "stigmatize ascientific training programs and practitioners." (The APA says its current system does require scientific training and competence.) Two years ago the Association for Psychological Science launched such a system to compete with the APA's. That may produce a new generation of therapists who apply science, but it won't do a thing about those now in practice.

Sharon Begley is NEWSWEEK's science editor...etc.
------------------------------------------------------------------

which non-M.D. psychologists cannot prescribe.

This doesn't help her fact-checking credibility.

You wouldn't know this if you sought help from a typical psychologist. Millions of patients are instead receiving chaotic meditation therapy, facilitated communication, dolphin-assisted therapy, eye-movement desensitization

Citation?

I think any ethical psychologist is against the quacks that practice these things. I'd venture to guess that most of these people are non-doctoral or have not gone through a doctoral program in the last 20 years.

American Psychological Association—the 150,000-strong group dominated by clinicians—

....but run by academics and not clinicians who manage private practices.

A 2008 survey of 591 psychologists in private practice found that they rely more on their own and colleagues' experience than on science when deciding how to treat a patient. (This is less true of psychiatrists, since these M.D.s receive extensive scientific training.)

I think Ms. Begley is making science and research synonymous, which clearly they are not. She also is ignoring large portions of psychology that utilize "science" AND "research" on a daily basis. I am with her on not wanting people running around doing EMDR and Primal Scream Therapy.

As a point of disclosure, that Ms. Begley did not make clear, Dr. Baker's team is part of the group that would do the accreditation, thus he'd have a financial interest in this issue.
 

I love science. I just finished a neurobio class (upper division for majors) at UC Berkeley. For fun. ;)

Full disclosure: I did not read the article. Because it's Newsweek. The "People" of news. It's not scientific enough for my tastes. It never cites anything, and the authors clearly know little about the field. As T4C mentions, they don't always reveal disclosures. I prefer my news a bit less fluffy.

Edit to add: That sounded rude above, so I wanted to say I don't mean to be. Sorry, busi26.
I think this is an important topic that deserves discussion. And I did skim the article.
I just get annoyed at the way some of these articles are written. That is all.
 
I've been to conferences that emphasized evidence-based treatment, and have also seen other participants get offended because the speaker condemned treatments and practices that, in the words of a participant, "everybody is using." Most of the people I know at this training are not psychologists, but master's level therapists.

Another concern I have is the number of therapists and programs that disregard the need for specific training in the modalities used. I was hired where I work because of my interest in DBT, and the program's attempt to include more evidence-based treatments such as DBT. However, I haven't been able to interest administration in providing training in the modality or in changing the program to actually follow the model or include a consultation group ("we already have treatment team meetings"), or encourage other therapists and staff to back up the DBT skills training. The result is clients not valuing DBT either. It's just one more group to put down on paper that looks good to have as part of the program. My strategy is to tie in DBT with other aspects of the program and show the clients where DBT skills can help them learn to do the other things that the program emphasizes.

Many CBT therapies can be somewhat vague, and leave room for incorporating other strategies too, such as relaxation, imagery, etc.

There are also modalities that may be effective but just don't have as much research-based support yet. Or there might not be a better way to involve or facilitate communication with certain clients (play therapy, sand tray therapy, ...). I have seen dramatic results from EMDR, and although it is not as well documented as CBT and I wouldn't use it as a stand-alone therapy, it does seem to have its place. Correct training is important.

Part of the issue is availability of certified training in specific modalities, and agency funding limitations. Supervisors have told me that it isn't considered important anymore to get specific training before using a modality, yet I can see that people without the training aren't doing it according to the model.

I'm pulling master's level therapists into this discussion too because that's what I am (for the time being, anyway), and because master's level therapists are probably at least as much a part of this as psychologists, if not more. Psychologists are very influential though, because they have developed many of the treatment modalities in question (such as EMDR), and because the rest of us follow their example to some extent. Also because psychology training programs tend to promote the idea that people who want to do therapy don't need the scientific training offered by PhD programs.
 
I'm actually taking a class in efficacy vs. effectiveness and treatment dissemination right now so we have spent a lot of time discussing this very topic. The answer is very complicated and I think a number of parties are at fault (including scientists themselves).

As for the Newsweek article: poorly written, with many holes, no surprises there. I do want to add that I'm not sure exactly what the obligations would be for financial disclosure in this situation, since this isn't quite the same as a for-profit company, and the same could be said about anyone who writes about training models, or really anything about psychology, and is also a member of APA. My knowledge of the ethics/laws regarding this is somewhat limited. Interesting question, but perhaps a thread for another day...
 
I support clinical research; however, I know firsthand that it's hard to know whether you can generalize findings in many cases, because most research studies have been designed to minimize other factors that may influence the results. Presenting with "pure" forms of any disorder isn't very common in the clinic. Because of this, any clinician (myself included) can state that most patients I see in the clinic don't resemble the people I see in my research studies.
 
I have seen dramatic results from EMDR, and although it is not as well documented as CBT and I wouldn't use it as a stand-alone therapy, it does seem to have its place. Correct training is important.

I understand that many people in the trenches doing therapy are master's level, and they typically do an excellent job (no better or worse than Ph.D.-level therapists in many cases). However, the quote above does much to prove my point. Anecdotal evidence of the efficacy of EMDR is just not the same as scientific evidence. For instance, several deconstruction studies of EMDR show that the active ingredient in this therapy has nothing to do with the patient moving their eyeballs back and forth. The active ingredient is exposure (e.g., talking about trauma). Why do people still attend trainings for this therapy? Why not just get training in the active ingredient (exposure therapy)? The answer probably involves money...

Knowing the science is important! I started this discussion not because I thought the Newsweek article was great, but because this is an important topic that I believe too many training programs and lay psychologists ignore. When are training programs going to recognize that training students to do therapies that are no better than placebo harms the profession? Why don't we follow The Guide to Treatments that Work? What is lost by STARTING with treatments that have proven efficacy?
 
I'm going to need a source to back this up, but I heard today at a talk at a major University hospital that only 15% of medical practice is evidence-based.
 
For instance, several deconstruction studies of EMDR show that the active ingredient in this therapy has nothing to do with the patient moving their eyeballs back and forth. The active ingredient is exposure (e.g., talking about trauma).
That is my major gripe with EMDR. All of the "pro" EMDR research has been unable to separate out the "eyeball movement" from the exposure and therapeutic relationship pieces. My other gripe is that it is sold to treatment providers.
 
It's hocus-pocus + proven (a long, long, long, long time ago) behavioral therapy technique for anxiety related conditions.

Seems like I am preaching to the choir here. Or, those in favor of unproven treatments are uncharacteristically quiet.
 
I'm going to need a source to back this up, but I heard today at a talk at a major University hospital that only 15% of medical practice is evidence-based.

This doesn't surprise me. In psychology training programs alone, at least 50% of the treatments taught today are no more effective than placebo (e.g., using psychoanalysis for anxiety disorders, eating disorders, etc.). We have a treatment modality that works for most disorders, but for some unknown (to me) reason the majority of patients continue to get inferior treatment. It is also appalling that so many meds are thrown at anxiety disorders, when CBT and BT greatly outperform meds in the long run.
 
It is also appalling that so many meds are thrown at anxiety disorders, when CBT and BT greatly outperform meds in the long run.

It is important to note that often the pt. has no interest in talk therapy and wants the pill, even if it won't help them alleviate the issue. Of course, a benzo is usually very very good at what it does, and for some that is good enough.
 
That is my major gripe with EMDR. All of the "pro" EMDR research has been unable to separate out the "eyeball movement" from the exposure and therapeutic relationship pieces. My other gripe is that it is sold to treatment providers.

Personally, I roll my own eyes whenever I hear someone talking about the efficacy of EMDR. :D
 
Seems like I am preaching to the choir here. Or, those in favor of unproven treatments are uncharacteristically quiet.

Yeah, it's a possibility that you are preaching to the choir on this forum. I become infuriated when I encounter providers who eschew empirically supported treatments in favor of unfalsifiable theories of pathology. (Somehow, these therapies the providers "intuit" never seem very intuitive to me :rolleyes:). I think the in-fighting between practicing psychologists on this issue may be just as heated as the fighting that occurs between researchers and practitioners. Thus, programs that emphasize a scientist-practitioner model are crucial to the future of the field.

That being said, the article made some incorrect inferences about what is and what is not evidence based. I mean, how many studies have we all read where therapeutic improvement is found to be due not to the treatment being tested, but to "common factors"? I thus consider taking the time to build rapport and accurately reflect statements, etc. to be "evidence based practice". Additionally, the article derides meditation, but mindfulness therapy is an EST.
 
The Newsweek article does a pretty sorry job of trying to summarize the researchers' findings (which is no fault of the author, though, as the research article is lengthy and pretty detailed). The researchers address the issue of non-specific factors in the research, albeit perhaps not to everyone's satisfaction.

Having said that, the authors of the research have a very specific point of view they're arguing for, and it reflects their own biases. In fact, two of the three authors are members of the new accreditation organization they note as the "solution" to the problem. And the editorial written praising the study? Written by no less than a member of the same organization's advisory board. All of this in a peer-reviewed journal, no less.

The researchers clearly believe there are two paths to the goal of the ideal clinical psychologist -- the status quo (broken) and training psychologists only to be perfect research-practitioners. That kind of black-and-white thinking is something we'd never allow in our clients.

They also paint a very rosy picture of the field of medicine, as though its training led it to embrace empirically based medicine decades ago. Nothing could be further from the truth, given that the EBM movement is not that old (and is still not fully embraced in American medicine).

The journal article is long, 145 pages double-spaced, but if someone wants a copy of it, PM me.

John
 
The Newsweek article does a pretty sorry job of trying to summarize the researchers' findings (which is no fault of the author, though, as the research article is lengthy and pretty detailed).

If she is the Science Editor for Newsweek, and she is going to write an article about it, I think there is an expectation that she should understand what she is writing about.
 
I will be interested to see the APA's response to this Newsweek article. Melba Vasquez informed the Division 17 listserv that the APA is currently working on a response to the article.

The article generated one of the liveliest discussions on the Counseling Psych listserv. It seems clinicians and researchers alike found the article to be extremely flawed.

Jon
 
In the interest of saving John from an unholy number of PMs, you can also download from the APS website here:

http://www.psychologicalscience.org/media/releases/2009/bakerhomepage.cfm

Haven't read it yet, but it's on tomorrow's to-do list, unfortunately behind a number of other things that are far less interesting and important, but with immediate deadlines.

Whether the article itself is good or bad, I'm hoping it will push this issue to the forefront and force some public discussion to see if APA can defend its current path, something that has been a long time coming. It's no secret to most here which side of the debate I'm likely to fall on, but I'm going to do my best to keep an open mind when going through this. Whether the PCSAS succeeds or not, I think the simple fact that we are being forced to have this discussion is going to result in some positive change across the board.
 
It's on the ABCT listserv now as well. Should be fun to read the responses.
 
I understand that many people in the trenches doing therapy are master's level, and they typically do an excellent job (no better or worse than Ph.D.-level therapists in many cases). However, the quote above does much to prove my point. Anecdotal evidence of the efficacy of EMDR is just not the same as scientific evidence. For instance, several deconstruction studies of EMDR show that the active ingredient in this therapy has nothing to do with the patient moving their eyeballs back and forth. The active ingredient is exposure (e.g., talking about trauma). Why do people still attend trainings for this therapy? Why not just get training in the active ingredient (exposure therapy)? The answer probably involves money...

Yes, we need to start with treatments that work. The main modality that I use is DBT-informed (and as I implied above, I want to join a consultation team and become certified by Behavioral Tech, only I haven't yet found a team that I can join or people who want to start one - getting the proper training isn't always as simple as we would like it to be). I believe that this is an important topic.

And I also understand that it isn't moving the eyes back and forth that does the work in EMDR. Essentially, it is a cognitive-behavioral process. The client has to work hard and face memories and feelings that they have been avoiding. And they have to reframe their ideas about themselves. EMDR also gets clients to relax, which is also known to be effective for anxiety. That there is a bit of hocus-pocus thrown in can serve to motivate some clients. And the placebo effect is also effective. If something can get a client to work hard at re-shaping their thoughts, and get them to believe it's going to work, where is the harm?

Also, it is reinforcing to the therapist to be able to provide a client with rapid relief from anxiety. That is why we do something again if we have seen it achieve good results. My experience may be limited, but I've seen EMDR get results every time I've seen it used. And I've seen CBT fail. I recognize that that is probably because I've seen a lot more CBT than EMDR, but the few times that EMDR fits the client's needs, I'll still try it again. Along with other modalities, of course.
 
I had an instructor last year (an LCSW) actively advocate EMDR due in large part to her own personal experiences. It was a major "head desk" moment for me, but then again, my SW cohort tends to needle me a bit for being the "research one." ;) However, I've also known a few PhD psychologists/grad students whose theoretical orientations sprung largely from their own personal experiences/values/beliefs, and some master's levels clinicians who are strong on EBT, so I don't want to generalize too much....

In general, I think one's own personal experiences are a fine place to *start* with regards to theory/research/tx ideas, but not a good place to *end*. I do think this matter is somewhat complicated by the considerable role extra-therapeutic client factors have on tx outcomes, though (I wish I could remember the citation for that!).
 
I read through the article quickly and need to go back and do a more thorough evaluation. About 3 pages are dedicated to criticizing the PsyD training model and stating that most PsyD programs are not grounded in science.

My program required a dissertation; however, you did not have to do a quantitative study. You could do qualitative or program development, which many people chose and later found out were more complicated and time-consuming than a quantitative study. So much for skipping out on those stats.

Where the "science" split off, in my opinion, wasn't at the level of did my school produce scientific inquiry on par with a PhD program (it doesn't) but in what I think is more important in terms of context of the article. They offer therapy concentration tracks, which include CBT, Psychodynamic, Existential/Humanistic and Systems. I chose CBT because of it being evidence-based and supported by research. People who chose other tracks would either dismiss CBT as too manualized or cite literature stating that the mode of treatment doesn't matter, it's the therapeutic relationship at the end of the day. Then there were those who would cite fMRI studies being done in the psychodynamic world :)

Where I can agree with the article isn't necessarily that PsyD programs are not grounded in science as a whole. Much of our training is. But when we select how we're going to approach therapy, you have the problems that are reflected in the article's (and many on this board) criticisms.

I'm curious if PhD programs also have this type of therapy concentration structure? Or if therapy training is entirely based on EBTs?

So I agree somewhat with the criticism of the PsyD model. I feel it's a bit overgeneralized though, at least in my experience in my program. Then again, my program has grown exponentially since I went through it. So I can only speak of what it was, not what it is today. And that's depressing.
 
I'm curious if PhD programs also have this type of therapy concentration structure? Or if therapy training is entirely based on EBTs?

To answer your question, at my program it's sort of a yes, with a caveat. We definitely do not have therapy concentrations (or any concentrations beyond "This is your lab"). Even applying to grad school, I felt like many/most psychologists have a religious-like relationship with their therapeutic orientations rather than a scientific one. The caveat is that as students, we're still somewhat limited by the orientation of our supervisors. So yes, they would encourage us to use DBT, mindfulness, or whatever else the situation called for if we could produce evidence that it would be helpful. However, if they aren't familiar with it, they can't really supervise it, so at least for me (still pretty early on), I'm reluctant to use it.

At the same time, choosing a therapy because "I like it and the others aren't as fun/are too manualized/I like this one better" is not going to endear you to faculty or students alike here.

Rapunzel - to answer your question, the damage is on several levels. For one, I think of it as essentially "lying" to the client. If they ever find out, it could be harmful to the therapeutic relationship if it still exists, or at the very least discredit the profession. For two, it demeans the profession as a whole if we become known as magicians doing parlor tricks rather than health care workers. If I advertise myself as healing people with crystals because I make them wear a bracelet during therapy, would that not seem a little dishonest?
 
This references the re-emphasis of science in graduate training for clinical psychology. Regardless of the details, emphasizing that we are scientists first is, I believe, an important step in increasing the standards and relevance of clinical psychology.

Exactly. I think too many places have strayed not only from sound, empirically based treatment, but away from the critical thinking skills that really differentiate doctoral training from everything else. It definitely starts at the undergrad level, and I like your idea of "real" stats classes, neuro classes, etc. I took both a "humanities"-geared stats course and then a real one, and they were night and day. Unfortunately, psychology as an undergraduate major now doesn't prepare the student for anything after graduation, unless they take the extra steps to be competitive for a graduate program.

That folks can even say the ridiculous "PhD/PsyD = MSW, LPC, etc." and believe it is awful. To advance the field, to make psychology a powerful applied science, change has to start at the undergraduate level, in my opinion.

That is one of my biggest pet peeves because people don't really understand the difference, but it is definitely our own fault as a profession because we have not held ourselves to a higher standard.
 
Rapunzel - to answer your question, the damage is on several levels. For one, I think of it as essentially "lying" to the client. If they ever find out, it could be harmful to the therapeutic relationship if it still exists, or at the very least discredit the profession. For two, it demeans the profession as a whole if we become known as magicians doing parlor tricks rather than health care workers. If I advertise myself as healing people with crystals because I make them wear a bracelet during therapy, would that not seem a little dishonest?

My sentiment exactly!
 
I was able to be part of a site review this year, and it gave me a glimpse into what APA looks for in their accreditation of internship sites. I think what most stood out was the focus on the model we use (Scientist-Practitioner), how it is supported in our training at the internship level, and how our training up until this point influences that. Obviously each site review will be different, but it really got me thinking about the focus and level of importance that was put on the role of research informing practice and the value placed on EBT. It isn't going away, and frankly I think there should be more critical analysis of how we can better prepare students in the future to not only utilize research, but also understand the role of EBTs and use them effectively.

As for the direction of psychology.....I am still most worried about non-doctoral clinicians eroding our scope and marginalizing our training, though we are just as culpable for not differentiating our profession better.
 
It isn't going away, and frankly I think there should be more critical analysis of how we can better prepare students in the future to not only utilize research, but also understand the role of EBTs and use them effectively.

Until the APA stops accrediting programs that teach students to use placebo (e.g., long term psychoanalysis) as a primary treatment strategy, our profession will not realize its potential.
 
I read through the article quickly and need to go back and do a more thorough evaluation. About 3 pages are dedicated to criticizing the PsyD training model and stating that most PsyD programs are not grounded in science.

I wholeheartedly agree that the PsyD = bad claim is just plain wrong. But, realize who you're sharing a degree with. At APA I met a guy who's at a cohort n = 60 place, paying over 100k, who didn't know what SPSS *WAS*, let alone ever did or read any research, or knew any other kind of analysis software or even qualitative methodology. He also couldn't articulate a theoretical orientation beyond "I'm psychodynamic," so, so much for quality clinical training. This was on a (pro)long(ed) bus ride, not some passing conversation, so there was time to talk about this stuff. It was ridiculous.


EDIT: Oh, yeah--also, he went to a PsyD because he "wanted to be a therapist, not a researcher." He apparently didn't notice the look that caused on my face, so again so much for clinical acumen.
 
I wholeheartedly agree that the PsyD = bad claim is just plain wrong. But, realize who you're sharing a degree with. At APA I met a guy who's at a cohort n = 60 place, paying over 100k, who didn't know what SPSS *WAS*, let alone ever did or read any research, or knew any other kind of analysis software or even qualitative methodology. He also couldn't articulate a theoretical orientation beyond "I'm psychodynamic," so, so much for quality clinical training. This was on a (pro)long(ed) bus ride, not some passing conversation, so there was time to talk about this stuff. It was ridiculous.

That's just disgraceful. I know I will have to defend my 4 letter degree because of people like this, and other issues that are being raised. It's a shame and I'd be more than happy to see standards in place to tighten things up. Unfortunately the current professional school model has done more to perpetuate the PsyD is a lesser degree stigma of the old Boulder vs. Vail debate than mitigate it. And I know from my experience that if a strong student goes to one of these programs, they can come out with solid training. There were many in my cohort that this was the case for. Unfortunately those seem to be greatly overshadowed.
 
As a Psy.D. it is definitely frustrating to not only deal with misperceptions, but also to know that others are dragging down the degree and not living up to the Vail model. I am not a research hound, but it is rather frustrating when people assume I'm a Ph.D. and then when they find out I'm a Psy.D. go, "Well why didn't you go for a Ph.D. if you actively do research and want to work in XYZ?". They don't mean any harm by the statement, but it just shows that there is still the rigid view of what a Psy.D. looks like and what a Ph.D. looks like. I've gotten that a few times at my site now, and they don't mean anything by it and everyone goes out of their way to treat the interns as Jr. colleagues, but it is a very Ph.D. focused place (28 Ph.Ds & 4 Psy.Ds between the hospitals).
 
This references the re-emphasis of science in graduate training for clinical psychology. Regardless of the details, emphasizing that we are scientists first is, I believe, an important step in increasing the standards and relevance of clinical psychology. But I think it has to start at the undergraduate level. Currently, undergrad is too easy. Psychology is a football player and sorority chick major, on par with elementary education, sociology, and diversity studies. If we had legitimate pre-reqs in psychology that reflect where the field should be, it would weed out a lot of the "science. . . ick" crowd before they even begin to think, "I can be a psychologist" or see that first Argosy advertisement.

I didn't know that anyone besides me noticed this. Actually, I thought that undergrad psychology has been getting more research-oriented than it used to be. I finished my BA in 1991. My major was Communicative Disorders. It's a practice-oriented major, aimed at a terminal master's level degree for clinicians in audiology and speech pathology. It is a difficult and scientific undergraduate major, which is one of the reasons I chose it, yet still with a focus on providing a direct human-oriented service. But Com D is very focused on a limited set of services, and Psychology was what I loved, so I took psych courses too and ended up with a double major, almost by accident. I found the psych courses much easier and the requirements light enough to complete as an afterthought. I haven't wanted to say so, because I didn't want to offend anyone, but that was the way it was.

Now it seems that psych undergrad students are doing more senior theses, research, and participation in labs alongside researchers and grad students, which I'm not sure we had back then, or maybe I didn't see it because I did my senior thesis with Com D instead (I didn't have to do one for Psych, though it was actually optional for both majors).

And my master's program was even lighter than undergrad, and didn't allow the opportunity to do research or write a thesis. It was strictly practice-oriented.

What can be done to make training in EBT and real science more accessible to those who want it, at all levels of education and practice? I can't speak for all, but I know there are more like me, who work as master's level clinicians, sometimes in rural areas, and do the best that we can with what we have. Still, we want to do better. I have been rejected by every PhD program in Psychology in my state (there aren't that many), probably because of my lack of opportunity to gain research experience, plus the large numbers of younger applicants who have had those opportunities and are in a better position to relocate. I haven't applied to Argosy, although Argosy has made recruitment attempts. I want the real thing, and to do this right. I don't know if I'll ever get the chance.

This is slightly off the topic, but I think it is still relevant, because all of these factors are part of why therapists are out there just doing what they have always done, or what looks appealing. How can we increase opportunities for scientific training for those of us who are out there and want to do better?

I'd like to have respecialization programs geared to people with experience and advanced degrees, alongside regular PhD programs but not in direct competition with students taking the direct route, where we can get the same training and credit for some of our clinical experience. Also specific training in EBT modalities that could be counted towards credit in such a respecialization program. I've already dedicated 20 years of my life to education. It hasn't been exactly what I wanted, and I probably would put in another 5-8 years if necessary, if I were given the chance. There need to be more options. How can we work together on it to make more effective services available to more people, without dragging down the profession?
 
Jon, you don't seem to understand my suggestion. What I would like is to be a psychologist, and to have the opportunity for the same training that other PhD students would receive. I don't think that having earned a master's degree in counseling or having work experience should count against us, but it does. I never wanted to be an LPC, but I will be because that was what life offered me. I am a therapist. I don't think that I should be expected to apologize for existing. I'm willing to do what it takes to be a better therapist. I could get a PhD in Counselor Education, or many other related fields that various people have suggested, but it isn't the "ok, we're scientists now" mentality that I'm after. I will go to trainings as I can. It's not easy without employer support and half of my income going to student loans, and most of the people around me not buying into it. I have the same concerns that you do about range of expertise, and especially nurse practitioners, etc. functioning in the role of psychiatrists (I know several of those, and they are even casually referred to by clients and paraprofessionals as psychiatrists). What I want is to be able to get the proper training to be what I want to be. Barring that, I'd like better continuing education to be available for those of us who are functioning as psychotherapists, regardless of which license, degree, or experiences we have.
 
I'm not sure how to address that blurring while at the same time encourage more emphasis on empirically supported treatments (or rather, science driven treatments. . . I'm not a huge fan of the manualized treatments) from masters level practitioners.

Jon, I find your statement that you are "not a huge fan of manualized treatments" to be rather curious. In order to be a supporter of empirically supported treatments, isn't it mandatory that you have training in and an understanding of manualized treatments? Don't get me wrong, I'm not saying that manualized treatments should not be tweaked/modified, but how do you know your treatment is empirically supported unless you are providing the specific treatment that was empirically evaluated? Maybe I misunderstood. Please clarify.
 
What can be done to make training in EBT and real science more accessible to those who want it, at all levels of education and practice? ... How can we increase opportunities for scientific training for those of us who are out there and want to do better?

I feel your pain as well. After I finished my Bachelor's degree in Psyc, I worked as a substance abuse counselor. I got a license to practice in substance use disorders and decided that I wanted to do more, so I got my Ph.D. in Clinical Psyc. So, I can relate to you.

Of course I am unaware of your level of training in ESTs, and I would not suggest that you practice beyond your training or license, but the best suggestion I have (in addition to what you already stated) is to read manualized treatments to learn what works. You can get many of these manuals, along with the therapist guides, from Amazon.com. Here are some examples:

Feeling Good: The New Mood Therapy http://www.amazon.com/Feeling-Good-...=sr_1_1?ie=UTF8&s=books&qid=1254858990&sr=1-1

Mastery of Your Anxiety and Panic http://www.amazon.com/Mastery-Your-...=sr_1_5?ie=UTF8&s=books&qid=1254859059&sr=1-5

Mastery of Your Anxiety and Worry, Therapist Guide http://www.amazon.com/Mastery-Your-...=sr_1_2?ie=UTF8&s=books&qid=1254859135&sr=1-2
 
This may be split into its own thread by t4c, but I'll write it here for the time being. I am curious as to what these hardcore clinical science model types, such as McFall, think about the Boston Process Approach to the practice of clinical neuropsychology?

Although the process approach is built upon science from behavioral neurology and neuroscience, anyone familiar with it knows how much subjective impression and speculation can go into interpreting data when using this approach. In other words, there is a lot of clinical judgment of a person's behavioral output, and a lot of emphasis is placed on clinical experience within this model. These are all things that McFall and colleagues seem to hate, no?

I am asking because I have noticed that a flexible battery approach with a heavy emphasis on observing process seems to be the predominant model taught in most clinical neuropsych programs (and in practice), including in many research-heavy programs such as Florida, etc. This model is known for advocating that npsychs not be "data/test bound" in their practice, is loaded with psychometric controversies (comparing performance based on multiple different normative samples), and emphasizes the integration of process issues obtained via clinical judgment. Although I choose to use this model (as it's better than some of the other alternatives), it seems almost antithetical to the philosophy that these big research programs (and McFall) preach. It just seems like they would fall a lot more on the side of the HRB approach, since there is less question about the psychometrics involved and, generally speaking, less emphasis on clinical judgment and "process" interpretations.
 
This may be split into its own thread by t4c, but I'll write it here for the time being. I am curious as to what these hardcore clinical science model types, such as McFall, think about the Boston Process Approach to the practice of clinical neuropsychology?

I copied your post and Jon's response to a new thread: http://forums.studentdoctor.net/showthread.php?t=670645

Anyone interested in this portion of the discussion can post their replies in the above thread.
 
Right, this, actually, is where a lot of folks (even in the clinical science world) start to diverge. My emphasis is on science based treatments. We are to understand mechanism, symptoms, what and why things work, not proceed necessarily from a cookbook. By all means, if it is applicable, use the book. But, often patients are more complex. Just as in medicine, there is no empirical study of the interaction effects of 18 pills. Health history, cultural background, education, personality, co-morbidities, beliefs. . . all will play a role in the selection of research-based treatment.

Take a simple disorder like specific phobia. It's not so important that a certain number of sessions occur, or that a to-the-millisecond exposure period occurs, to alleviate the anxiety. We know the mechanism of action for reduction of the fear response in specific phobia. We are able to treat that on a patient-by-patient basis, gauging the patient's ability to handle exposure, through what modality, intensity, and so on.

So, in a complicated case, we might use social learning theory, cognitive behavioral theory/techniques, or DBT. . . whatever. We can go to the literature and see in what contexts these things work and the hypotheses/explanations as to why, and tailor our treatments on an individual basis. That's competent practice, in my opinion.

Thanks for the clarification.:)
 
I think it is amazing that so many psychologists reject what scientific studies have to say about treatments that work. Why is it that many psychologists rely more heavily on their own "intuition" rather than controlled scientific studies?

Maybe sample enrichment, design and authorship of studies, withholding of negative data, limits of statistical methods, politics, opinions, hookers, financial relationships, industry-financed studies, etc., etc.

Here’s some good reading from The Last Psychiatrist:

http://thelastpsychiatrist.com/2009/10/the_problem_with_science_is_sc.html

http://thelastpsychiatrist.com/2009/05/the_difference_between_an_amat.html

http://thelastpsychiatrist.com/2009/03/what_happens_to_fake_studies.html

http://thelastpsychiatrist.com/2009/02/why_no_progress_will_ever_be_m.html

http://thelastpsychiatrist.com/2009/09/unpublished_lamictal_studies_l.html


And sometimes who knows what’s going on. The experimental psychologist Dr. Oakley Gordon was involved in creating a ‘psychological model’ to capture the shaman's healing relationship with a patient. He finally discovered it was impossible, because the way the shaman understood reality was different from the way modern psychology understood reality.
 
Maybe sample enrichment, design and authorship of studies, withholding of negative data, limits of statistical methods, politics, opinions, hookers, financial relationships, industry-financed studies, etc, etc..

Wouldn't your argument also apply to treatments that have not been proven effective? In other words, wouldn't these unproven treatments (e.g., psychoanalysis) also face those pressures to be deemed effective? Using your logic, shouldn't all treatments demonstrate equal effectiveness for all disorders?

Your argument is like saying let's use Drug X for depression because I like Drug X, even though there is NO evidence that Drug X is useful for this disorder. AND let's not use Drug Y for depression because I don't have any experience with using Drug Y, even though Drug Y has been proven effective time after time.

It is normal to question the results of scientific studies; that is what thoughtful scientists are supposed to do. However, it is unethical (and possibly malpractice) to use unproven treatments when proven treatments are available.
 
Wouldn't your argument also apply to treatments that have not been proven effective? In other words, wouldn't these unproven treatments (e.g., psychoanalysis) also face those pressures to be deemed effective?

I'd argue that psychoanalysis, and more generally psychodynamic therapies, in fact have research support for their effectiveness, but it tends to be in the form of case studies. It isn't perfect, but there is research out there, depending on the area of focus.
 
Indeed, I did not rip it apart due to the time limitations of having the study and having to write for a deadline. Had I had another few days to dig into it, research contrary data, and write it all up, I'm sure I could've done a better job "ripping." As it is, I leave it to my more esteemed colleagues to do so.

The upshot, in my reading, is that there are many roads to salvation. This article laid out a very specific, research-oriented one that is premised upon the authors' own biases, a selective reading of the literature, and the presentation of an "ideal" medical training model that doesn't exist in the real world.

John
 
Finally had a chance to go through it cover to cover.

The first half (Data-driven decision making and Merits of psychosocial interventions) I thought was great. Obviously a lot more could have been said on the topic, but I think they covered the important points. At least to me, this all fell under "duh" and shouldn't really be controversial, though others may have a different take. I do think they glamorize the medical model, and I'm not entirely sure why. My primary reason for choosing psychology over medicine is that I feel psychologists at least have the potential to be leaders in evidence-based practice due to our dual training in science and clinical work, even if it hasn't worked out that way. I agree that in general, physicians are more likely to AGREE that the research is important than psychologists are. However, they cite a study showing that 85% of patients were receiving good evidence-based care. This is definitely a case of selective citing - I'm moderately familiar with the literature, and that is by FAR the highest number I have seen. I don't think that invalidates the need for evidence-based practice. I think it just means physicians collectively suck a little less than we do when it comes to this, rather than a lot less, as they portray.

I think they do a fair job of owning up to the failures of scientists in terms of translating research. They acknowledged nonspecifics, which was good, but I thought the analogy to medicine was kind of inane since a therapeutic relationship is SO qualitatively different. I think they would agree scientists haven't done their job in making evidence-based practice realistic and support changes in that direction.

It should come as no surprise to folks here, but I still agree with where they end up, even if the middle of the paper (especially the forced analogies to a medical model) is junk. I absolutely agree that the status quo cannot continue. I think blame is assigned appropriately (note that they make the ever-important distinction between professional schools in general versus the PsyD degree itself). They don't pull punches here, and again I think they overstate their case by focusing on the extreme situations, since as others have already pointed out, some professional schools have more rigid research requirements than others, not to mention the variance by student, by advisor, etc. That said, I'm obviously on board with the fact that these schools need to either get their acts together or be publicly branded. To steal JN's example, if someone is finishing grad school without even knowing what SPSS is, there's simply no way they are going to be even remotely capable of evidence-based practice in even the loosest definition of the phrase. In my opinion, it is COMPLETELY unethical on the part of everyone involved to let them have a doctorate of any kind. I felt, if anything, they were too nice toward the APA, which in my eyes has COMPLETELY dropped the ball out of fear of losing members who think underwater primal scream therapy wearing a purple hat is the greatest thing since sliced bread and won't consider anything beyond their own distorted, irrational thoughts.

I agree with the accreditation system in principle. Frankly, I think APA's behavior is pretty disgusting across the board (found out there is currently some underhanded political maneuvering designed to prevent this accreditation system from ever becoming recognized by licensing bodies so APA can maintain the crap-opoly). Whether this system will succeed remains to be seen. The materials here don't provide enough info about the accreditation process itself for me to feel comfortable saying. My gut reaction is that they may push it too far at first to differentiate themselves and will need to pull back (esp. with regards to not having "input" requirements). I think there is merit to a stronger focus on outcomes, but there needs to be balance. I think it walks a fine line between being "Accreditation for researchers only" versus true clinical science, and I would hate to see many of these concepts recognized as ideals only by researchers since that defeats the purpose of such a movement. I do recognize that many of the ideals put forth are widely varying in how realistic it is to attain them.

Overall, I pretty much stick by my earlier statement. I absolutely, 100% agree that this is the direction psychology needs to move in. I think APA finally having some competition by people who aren't afraid of numbers, and are willing to step up to the plate and prove they are effective rather than simply saying "Trust us, we're helpful. Now give us money" is fantastic. Whether this accreditation system will succeed, or even deserves to succeed, I remain a skeptical cheerleader.
 
Wouldn't your argument also apply to treatments that have not been proven effective? In other words, wouldn't these unproven treatments (e.g., psychoanalysis) also face those pressures to be deemed effective? Using your logic, shouldn't all treatments demonstrate equal effectiveness for all disorders?

Probably. Basically I think we should always keep in mind the limitations of studies as well as all the forces impacting them...and that they are performed by humans.

Your argument is like saying let's use Drug X for depression because I like Drug X, even though there is NO evidence that Drug X is useful for this disorder. AND let's not use Drug Y for depression because I don't have any experience with using Drug Y, even though Drug Y has been proven effective time after time.

It is normal to question the results of scientific studies; that is what thoughtful scientists are supposed to do. However, it is unethical (and possibly malpractice) to use unproven treatments when proven treatments are available.

So recently I was plowing through Stahl's psychopharm, where I read something to the effect that "so-and-so condition is hypothetically thought to occur in so-and-so part of the brain. Treatment for this is Drug XYZ, which is hypothetically thought to affect said part of the brain." All the while my teachers keep reminding me that this course is "evidence-based."
 
So recently I was plowing through Stahl's psychopharm, where I read something to the effect that "so-and-so condition is hypothetically thought to occur in so-and-so part of the brain. Treatment for this is Drug XYZ, which is hypothetically thought to affect said part of the brain." All the while my teachers keep reminding me that this course is "evidence-based."
Stahl isn't always backed by research, but people swear by him as if he is.....so YMMV. I think Stahl is brilliant, though a prescriber still needs to think for himself or herself.
 
sorority chick major
Ouch. I'm a sorority "chick" AND a pretty "hardcore" researcher (a published one, too, FW(relatively little)IW)! I've also taught and done clinical work (to the extent an undergrad can). But I suppose I'm dragging down the major, right? :rolleyes:

Watch those stereotypes, please.
 
I will never understand the disconnect people have with membership organizations. If you are a member of APA, you are the APA. If you don't like the direction it's taken, imagine all the good we could do by running for office, getting involved in governance, and changing that direction.

I absolutely see the argument forwarded by the researchers as elitist and likely to add to the public's confusion regarding psychologists. I don't understand how it's going to help anything in the long term, since there will then be a two-tiered system of education -- the haves and the have-nots (and all existing clinical psychologists would be in the latter group). This would be considered "progress" for the field?

John
 
I'd argue that psychoanalysis, and more generally psychodynamic therapies, in fact have research support for their effectiveness, but it tends to be in the form of case studies. It isn't perfect, but there is research out there, depending on the area of focus.

Since psychoanalysis and psychodynamic therapies are among the oldest (135 years or so), shouldn't there be support for these therapies beyond case studies (that is, if they really work)? If those therapies were effective, wouldn't randomized clinical trials have demonstrated so by now? I bet these studies have been done but have had negative (and unpublished) results.
 
I will never understand the disconnect people have with membership organizations. If you are a member of APA, you are the APA. If you don't like the direction it's taken, imagine all the good we could do by running for office, getting involved in governance, and changing that direction.

This issue has come up before on some listservs I frequent, and the challenge is getting a foot in the door for the positions that matter. I have heard of some success in certain areas, but the positions that effect the most change seem to be a game of musical chairs. Once I am licensed I plan on becoming more involved in leadership at some level, though it can be hard to break into certain areas.
 