Another article on practice


JockNerd

Full Member
10+ Year Member
5+ Year Member
15+ Year Member
Joined
Mar 28, 2007
Messages
1,810
Reaction score
9
I think it would have been more effective had half the article not been a plug for Baker's accreditation system.
 
For those who haven't seen, there have been a bunch of others in Nature, Science, the Chronicle of Higher Ed, etc. Discussion of it has hit the mainstream and is now all over the place.

T4C - What data were you looking for? Almost everything in there seemed pretty well supported to me. I only saw a small handful of statements that didn't have a fair bit of evidence behind them.
 
T4C - What data were you looking for? Almost everything in there seemed pretty well supported to me. I only saw a small handful of statements that didn't have a fair bit of evidence behind them.

1. They compared the medical model to psychology, which really doesn't match up well, so it is already an apples-to-oranges comparison.

2. Westen et al. (2004) spoke a bit to some limitations of the research design behind most EBTs, some of which were applicable. I'm not against EBTs, quite the opposite, but much of this research is on the newer side, so it may take a while to be incorporated into graduate training.

3. EPPP scores are a poor predictor of training quality, as some programs (like many failing high schools) "teach to the test," so the result is a metric that does not properly capture "good" training.

I guess my biggest issue was the article's tone: the implication that Ph.D. programs were all teaching EBTs and Psy.D. programs were not. I agree with the authors that there needs to be a greater emphasis on EBTs in all programs, but the tone was a bit off-putting.
 
Wow. I wish I could detail everything in that article that was incorrect/misleading but I would be here all day. It's as though the authors did not get the message on the purpose of the PsyD. I wish their article had a list of references, because they make a lot of unsupported claims.
 
Look, I love research and EBT, but it does have its limits as well. For example, positive reinforcement for bx change is very well-supported, but that does not mean a psychologist should recommend a sticker chart for a 19-year-old (yes, I've heard of this happening). EBT also includes things like the importance of rapport and appropriate tx planning, and these also need to be taught in clinical psych programs.

I'm curious, T4C: if you wouldn't say that psych matches up with the medical model, what model would you subscribe to?
 
I'm curious, T4C: if you wouldn't say that psych matches up with the medical model, what model would you subscribe to?
The medical model posits that the problem is in the patient, and through the primary identification of physical pathology (using an H&P, labs, etc.), a diagnosis is made, followed by a treatment. One limitation of this model is that it assumes there is a biological/physical way to measure/capture the information, at least as a primary means of Dx, and it downplays other factors.

The biopsychosocial model takes into account biological, psychological, and social factors. Compared to the medical model, it doesn't assume that the problem is primarily a deviation from a biological norm. There is some research that supports a biological basis for MH issues, but such findings are in the minority. *added* I believe that as we research more, there will be more support for a biological/genetic connection, but still not for everything.

So to tie it back in with this article's comparison: the authors compare a lack of connection to biological factors (pre-Flexner) to a much more science-based approach (post-Flexner), which is good. However, directly comparing a post-Flexner model to psychology highlights the lack of a bio-centric approach instead of fully acknowledging that psychology can't be reduced to biology alone and instead needs a modified approach that best utilizes the research we have.
 
I know one of the authors (who shall remain nameless) who is faculty at a "clinical-scientist" program. He/she is unable to get licensed as a psychologist as his/her educational background does not even qualify for licensure. Nonetheless, this person is core clinical faculty in the program and supposedly a "distinguished" clinician. Hello pot, this is kettle...
 
Look, I love research and EBT, but it does have its limits as well. For example, positive reinforcement for bx change is very well-supported, but that does not mean a psychologist should recommend a sticker chart for a 19-year-old (yes, I've heard of this happening). EBT also includes things like the importance of rapport and appropriate tx planning, and these also need to be taught in clinical psych programs.

I'm curious, T4C: if you wouldn't say that psych matches up with the medical model, what model would you subscribe to?

I'm a bit confused. The authors are arguing for EBT, not blind implementation of ESTs. It's a bit of a straw man that gets thrown around here a lot... I've never seen a clinical scientist argue that rapport doesn't matter, or that students shouldn't be taught about it.

Lots more to reply to, but not much time so it will have to wait for another day:)
 
It is excellent that a forum like this exists so that articles such as this one can be criticized for their biases and outright ignorance. The authors of this article are so ready to scrutinize the practice of clinical psychology simply because it does not adhere to all of the rigorous characteristics of medicine. Medical diseases such as diabetes can be defined and measured, the results can be generalized, and commonalities can be shared throughout the field. The same can be said of depression, PTSD, and other psychological disorders. What the authors fail to recognize are the dynamics of the human experience.
Psychology is in full support of using scientific methods in researching pathology, but it is a basic tenet of our discipline to consider every client as an individual whose symptoms and mental state are to some degree unique. If we were to treat psychology like strict, science-based medicine, we would have no need for counseling or a therapeutic relationship. Instead, the intricacies of the human mind would be marginalized and set aside in favor of medication and pharmaceutical dominance. Psychology is a science, but it is also a discipline that goes beyond the parameters of a textbook and treats people based on a myriad of other factors that cannot always be operationally defined. That is where true growth and wellness can be achieved.
 
I'm a bit confused. The authors are arguing for EBT, not blind implementation of ESTs. It's a bit of a straw man that gets thrown around here a lot... I've never seen a clinical scientist argue that rapport doesn't matter, or that students shouldn't be taught about it.

Lots more to reply to, but not much time so it will have to wait for another day:)

Oh, to be clear, I wasn't arguing against the article content, but rather the general tone I got from it of "EBT is available--implement it now!" I was just saying that implementing EBT (in psych or medicine) can be quite nuanced, and that's an important thing to remember when working with individual clients/patients (for example, drug X may be shown to be the most effective in general, but may be contraindicated by co-morbidities Y and Z and history items K & L for a particular person). You're still relying on EBT, of course, but it's not so cut-and-dried. I don't doubt that clinical science programs teach this, but I think it could be lost on some laypeople reading the article as a result of the general tone.

YMMV, of course.
 
Although I have mixed feelings about the article, it brings up one good point.

In my program, and I imagine many others, "clinical training" is often farmed out to outside sites. The supervisors at these sites usually have no academic affiliation (unless it's the medical center or local VA). Even many of the supervisors at our university clinic are adjuncts. In other words, I don't receive a lot of "clinical training" per se from our program's faculty. And since it's a research-focused program, classwork often takes a backseat. Our intervention classes are broad overviews and don't really lend themselves to learning any one EBT in depth. Even if they did, practicing them on actual patients can often be tough, as there is no guarantee that your supervisor is trained in the technique and can properly supervise it.

Just food for thought......
 
Look, I love research and EBT, but it does have its limits as well. For example, positive reinforcement for bx change is very well-supported, but that does not mean a psychologist should recommend a sticker chart for a 19-year-old (yes, I've heard of this happening). EBT also includes things like the importance of rapport and appropriate tx planning, and these also need to be taught in clinical psych programs.

I'm curious, T4C: if you wouldn't say that psych matches up with the medical model, what model would you subscribe to?

It may seem like a crazy idea, but stickers have worked with young children, teenagers, middle-aged and elderly clients just as well. I am saying this not to prove or discount anything. The example catches my attention, that's all.
 
Phew!! None of the programs I've selected belongs to the APCS. Not that there's anything wrong with that...:whistle:

Just sayin' -- the arguments in the source report were shockingly thin. If that's the party line, maybe what ails clinical science is a logic deficit. Sure was a pretty reference section, though.

I wonder what the report would have earned on the GRE analytical writing section...
 
Phew!! None of the programs I've selected belongs to the APCS. Not that there's anything wrong with that...:whistle:

Just sayin' -- the arguments in the source report were shockingly thin. If that's the party line, maybe what ails clinical science is a logic deficit. Sure was a pretty reference section, though.

I wonder what the report would have earned on the GRE analytical writing section...:laugh:

You'd go out of your way not to attend a program that emphasizes scientifically supported interventions? Why?
 
It may seem like a crazy idea, but stickers have worked with young children, teenagers, middle-aged and elderly clients just as well. I am saying this not to prove or discount anything. The example catches my attention, that's all.

Huh. Well, I guess I learned something new today! :oops: Of course, by the age of eight, I was most motivated by cash as tangible reward. ;)
 
Why are you surprised to see those two schools on the list? (I ask because I'm applying to one of them.)

I'm curious about this as well, for the same reason. Some of my other schools also appear on their list.
 
The irony in this article and the issue raised by Baker et al. is the fact that the Psy.D. as a degree was modeled on the medical degree. The Vail model holds that the knowledge base in psychology is sufficiently advanced that a professional doctorate is warranted and justified as a training approach. This article holds up medical training as a historical precedent. Yet the training of physicians is very different from that advocated by Baker et al. The former get very little research training, if any; their training is professional in nature. Baker et al. imply that the model of training in medicine is sufficient to give physicians adequate training to practice their profession. Yet, ironically, a similar model of training is held to be inappropriate for psychology.

What Baker et al. are doing is conflating the ability to conduct and understand clinical outcome research with training as a psychologist. This is erroneous, and it suggests that their conceptualization of the identity of a psychologist is rather limited. In my opinion, psychologists are first and foremost scholars engaged in disciplined inquiry. Only then are we scientists who use a particular set of methodologies to understand psychological constructs. And within those methodologies, there is a place for clinical outcomes research. However, many of us are interested in exploring aspects of psychology that do not lend themselves easily to quantitative methods, and many of us are interested in non-clinical aspects of psychology. Interestingly, one can imagine that Baker et al. would frown on non-experimental modes of inquiry in psychology, such as qualitative methods. Their critique regarding exposure therapy and PTSD is well founded, but there is always a lag time between research and application.
 
The irony in this article and the issue raised by Baker et al. is the fact that the Psy.D. as a degree was modeled on the medical degree...However, many of us are interested in exploring aspects of psychology that do not lend themselves easily to quantitative methods. Many of us are interested in non-clinical aspects of psychology. Interestingly, one can imagine that Baker et al would frown on non-experimental modes of inquiry in psychology such as qualitative methods. Their critique regarding exposure therapy and PTSD is well founded. However, there is always a lag time between research and application.

Spot on!

And, you're correct. The authors do frown on qualitative research, at least the person I know does.
 
I didn't mean "surprised" in a bad way, just that I always had the impression that APCS programs are very research-heavy and I've heard those two are more balanced. I'm very pro-research/pro-research training.

Kentucky is very research-focused; they also happen to have very strong clinical training, but that clinical training absolutely emphasizes evidence-based treatments. The program-run clinic uses a variety of manualized treatments and runs DBT and MBSR groups, among other EBTs. I would not put Kentucky in the category of "balanced" programs that are "balanced" because they de-emphasize research (which is what many people seem to mean when they say "balanced"—that they want less research). As someone who is actually quite familiar with the program, it makes perfect sense to me.
 
Kentucky is very research-focused; they also happen to have very strong clinical training, but that clinical training absolutely emphasizes evidence-based treatments. The program-run clinic uses a variety of manualized treatments and runs DBT and MBSR groups, among other EBTs. I would not put Kentucky in the category of "balanced" programs that are "balanced" because they de-emphasize research (which is what many people seem to mean when they say "balanced"—that they want less research). As someone who is actually quite familiar with the program, it makes perfect sense to me.

So, if "clinical-scientist" encompasses strong research AND clinical training and EBT, shouldn't every PhD program aspire to (or already be) under the "clinical-scientist" model?
 
Kentucky identifies as a scientist-practitioner model program (as do some of the other programs on the APCS list, such as Boston University), but it and those others still seem to fit in the APCS because the emphasis on research and clinical science in the training is similar, even if there is more clinical training available than in some of the "clinical scientist" programs. I suspect some other programs do not hold those same values and may not be as consistent in promoting them across all aspects of training, and thus do not want to be part of an organization that pushes for them. I also imagine that if the APCS actually gets more momentum, other programs might well join, but right now belonging doesn't mean a whole lot.
 
I would not put Kentucky in the category of "balanced" programs which are "balanced" because they de-emphasize research (which is what many people seem to mean when they say "balanced"—that they want less research).

You may be right there, I don't know. I do know that's not what I mean when I say "balanced". By "balanced", I mean a program that has as many classes in applied psychology as it does in research methodology. I also mean a program where students aren't expected to all actively pursue careers in academia. My impression of such programs is not that they devalue research. It's that you may only produce a thesis and a dissertation (and have more clinical hours compared with those who juggle more research projects than that).
 
Note that I didn't say devalue, I said de-emphasize, and I would indeed describe any program where a masters and a dissertation are the only research a PhD candidate is expected to do as de-emphasizing research compared to a more typical, research-intensive model. (Of course, my language choices show my bias on the topic—I imagine someone coming from a school with a much lower level of research expectations might say programs like mine emphasize research rather than theirs de-emphasizing it.) That said, I don't think there are many programs except for the clinical scientist models where all of the students are expected to pursue academic careers—even research-intensive scientist-practitioner models seem to produce some clinicians, but I don't think all of those programs fit your definition of balanced.
 
I guess I think of balanced programs as ones that won't reject you just because you want to be a full-time clinician. Not sure if U of Kentucky fits that bill or not.
 
Well, I think again, in thinking about programs in general, it depends on what you mean by that. There are lots of schools where your odds of being accepted are slim to none if you go into the application process saying you want to be a clinician and are not interested in research. That doesn't necessarily mean those programs will kick you out or not support you if you end up deciding to pursue clinical work (though they may vary in the extent to which they will support that, and I suspect that varies a lot by mentor within programs as well), but rather that if you don't have at least enough interest in research to express some of that early on, you probably won't fit those programs well. I think that's true for a whole chunk of scientist-practitioner programs. If you want to be able to go in and say that you want to be a clinician and definitely not a researcher and still be accepted, then you will have a much narrower set of PhD programs to choose from, and those are going to be ones that de-emphasize research. People seem to vary all over the place in what they mean by "balanced," which is why at this point I find it a pretty useless term unless it comes with a lot more clarifying information.
 
Note that I didn't say devalue, I said de-emphasize, and I would indeed describe any program where a masters and a dissertation are the only research a PhD candidate is expected to do as de-emphasizing research compared to a more typical, research-intensive model. (Of course, my language choices show my bias on the topic—I imagine someone coming from a school with a much lower level of research expectations might say programs like mine emphasize research rather than theirs de-emphasizing it.)

I think the language issue can make it sound like you're expressing that "balanced" programs are quantitatively less intensive than research-heavy programs, though I understand this is not your intent.

Let's face it: as graduate students, we all have limited time and energy. A program that expects as much of a time commitment in practicums, coursework, and teaching as it does in the research lab is simply shifting the priorities around. You're right, this can be seen as "de-emphasizing" research in comparison. It's just that that phrase sounds harsh when you're coming off 65-70+ hour work weeks ;). Perhaps it would sound similar if someone were to say that your program "de-emphasizes clinical work." I wish we had time to do it all in the most rigorous way possible, but it's simply not an option.
 
You'd go out of your way not to attend a program that emphasizes scientifically supported interventions? Why?

It depends on what you mean by "science." I believe Sartre considered some of his philosophy scientific.
 
It's just that that phrase sounds harsh when you're coming from 65-70+ hour work weeks ;).

There *IS* light at the end of the tunnel (not a train)... I'm on internship and I work a 40-hr week. I probably spend an extra 5-10 hrs per week on post-doc stuff, and probably will spend 2-3 hrs per week on presentation stuff... it still beats the 65+ hour weeks of grad school. From what I've seen, post-doc is back up to 50+, but that also depends on your area of concentration.
 
There *IS* light at the end of the tunnel (not a train)... I'm on internship and I work a 40-hr week. I probably spend an extra 5-10 hrs per week on post-doc stuff, and probably will spend 2-3 hrs per week on presentation stuff... it still beats the 65+ hour weeks of grad school. From what I've seen, post-doc is back up to 50+, but that also depends on your area of concentration.

Whoohoo! Hopefully that doesn't turn into "toottoot!" as the train hits me ;).
 
I assume everyone knows this is just a rehash of the APS journal article from early October about the need for a scientifically based accreditation system for psychology doctoral programs, right?

http://psychcentral.com/blog/archives/2009/10/03/is-psychology-rotten-to-the-core/

Same thing. The fact that a mainstream media outlet gave these biased colleagues a place to publish their tripe is unfortunate, but it's the exact same article from 6 weeks ago.

John
 