Mixed feelings on the article. Some points are excellent, some are distorted and some are simply incorrect.
This is a big interest of mine, but I have a grant due soon, so I didn't have as much time to go through it as I'd like. Nonetheless, some point-by-point thoughts:
1) Freudian psychoanalysis does not equal short-term psychodynamic treatments. They screw this up left and right in the article.
2) True CBT is more insight-oriented than I think many realize. Automatic thoughts are superficial; schemas and core beliefs are where much of the longer-standing work happens. The two treatments have more in common than many realize, largely at that level.
3) They mention a meta-regression showing that the efficacy of CBT has trended downward over time. The author seems to think this is evidence that CBT doesn't work as well as it used to. They fail to mention that this is true of a huge number of findings with a literature base large enough to have the power to show the effect. Some of it may be a publication-bias effect, but it has always seemed a huge leap to me to think it disproves the original effect. It's meta-analysis, people - we're collapsing studies together and losing information about the individual ones. The method certainly has merits, but you can't ignore its limitations. I think an easier explanation is that the trend is a function of study design. Initially, we set up studies to show something works at all (high effect sizes). Eventually, we push the bounds of what it can do more and more: populations it wasn't designed for, more active control groups, messier settings and designs. Covariates that could capture this are rarely used in such analyses (they are a PITA to implement in meta-regression). Of course...we can't even look for this effect in psychoanalysis, because there aren't enough well-designed studies to attempt it.
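To make the study-design point concrete: a toy inverse-variance-weighted meta-regression of effect size on publication year will show a "decline" whenever later trials use more active controls and messier samples. All numbers below are made up for illustration; this is just the standard weighted-least-squares machinery, not the actual meta-regression the article cites.

```python
import numpy as np

# Hypothetical effect sizes (Hedges' g): early efficacy trials vs. later
# effectiveness trials with active controls and broader populations.
years = np.array([1985, 1990, 1995, 2000, 2005, 2010, 2015], dtype=float)
g = np.array([0.95, 0.85, 0.80, 0.70, 0.60, 0.55, 0.50])
se = np.array([0.20, 0.18, 0.15, 0.12, 0.10, 0.10, 0.08])  # standard errors

# Inverse-variance weighted least squares: regress g on (centered) year.
w = 1.0 / se**2
X = np.column_stack([np.ones_like(years), years - years.mean()])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)

print(f"estimated g at the mean year: {beta[0]:.2f}")
print(f"slope: {beta[1] * 10:.2f} g units per decade")
```

The negative slope here says nothing by itself about whether the treatment "stopped working" - it is fully explained by the (hypothetical) shift in study design, which is exactly why uncoded covariates are the problem.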
4) Author logic: CBT's effect size is overrated, and in some cases it is only slightly better than psychodynamic treatments; therefore psychoanalysis is more effective in the long run. (Unless I'm mistaken, that actually seems to be the argument at one point.)
5) Depression is not the only mental illness that exists. What about OCD? What about panic disorder?
6) Not a fan of ad hominem, but if we are going to paint Shedler as a savior doing cutting-edge science showing the efficacy of analysis...let's look a little deeper at that. He's never held an NIH grant per NIH RePORTER. In the last 3 years, it looks like he's published a bunch of personality structure work (some of which is pretty good), but otherwise his contributions seem to be mostly narrative critiques of the CBT literature. That literature ain't perfect, but it's miles ahead of what the dynamic literature has at the moment. He has a habit of doing this, but showing that someone else's point is weaker is not the same thing as proving that your point is stronger. This is really what the EBP crowd has been saying all along: if it works, prove it. We're trying to prove our points. Not every execution is perfect, but we're trying. Why aren't you?
7) Anyone can pick out a poorly designed study (e.g., the reference to grad students with 2 days' training in the therapy). If I find a poem a patient wrote about how much analysis helped them, does that tell us anything about how strong the evidence is for analysis?
8) Does anyone know a researcher developing treatments who has only 10 hours of therapy experience? Public health folks do some evaluations of treatments, and obviously some folks get to a point where they no longer actively practice. But in my experience, the vast majority on that side are actively practicing or at least have extensive practice experience.
9) Symptom relief shouldn't be the only outcome of interest. Agreed, I'm down. Do we have strong evidence that analysis improves other outcomes (functioning across a variety of settings, interpersonal relationships, etc.)? Nope. Actually, there is more for CBT.
Phew...5 minutes reading and 15 minutes typing. I'm a little harsh above, but don't take this to mean analysis doesn't work. I don't even take the stance that it shouldn't be used. I do think it makes way more sense to first try a treatment that is known to work, before trying something that is 1) less cost-effective and 2) supported by less evidence of efficacy. "This is my therapeutic orientation" is never a good reason to do anything...whether that is CBT, psychoanalysis, or anything else. Think more and think better.
Think more and think better - agreed. Which is why I think there are some ignored problems here: what can actually be measured through quantitative analysis; how constructs are potentially poorly operationalized, and how they change as they're turned into objective "data points"; whether we even have a firm enough foundation on "the good life" to really say whether something is effective (symptom management seems pretty huge, and it does indeed often fail to consider the larger picture); and whether even things like functioning across a variety of settings are a big enough frame - is therapy purely a method for getting people to fit in? Are cultural contexts always to be supported, such that fitting in is always desirable? Is there a place for more political, societal-structures thinking in this stuff?
Just some issues to consider, imo, that often fly under the radar.