opinions on medication management

The adverse effect I am most worried about is mood switching (SSRI-induced hypomania or mania), particularly for young people. Mood switching occurs in 6-8% of individuals taking SSRIs for what is initially believed to be unipolar depression; this rate is twice as high for SSRIs as for placebo, and juveniles are at higher risk.
Why would this be the side effect you'd be most worried about? I'd be more worried about something like serotonin syndrome or something else that's potentially deadly, even if rarer.

 
Behavioral interventions and psychotherapy are fantastic and I always recommend those in concert with medication. It's not or, it's and. But I think this article underestimates how difficult it is to get someone in a deep vegetative depression to do qi gong or whatever. Pill is a low bar and can help get people to the point where they are able to engage with behavioral treatments.
The problem is you are using a severe case of depression in this example. Frankly, I fall into the group of proponents who want to distinguish between severe depression and mild-to-moderate depression as different diagnostic disorders.

I'd also make the argument that a pill is much simpler for mild-to-moderate depression in comparison to psychotherapy. Pills require much less commitment and fewer resources than psychotherapy. That is another reason to take them. However, individuals should be made aware of the potentially dangerous side effects, and clinicians should be more vigilant about the iatrogenic potential of medication.
 
Why would this be the side effect you'd be most worried about? I'd be more worried about something like serotonin syndrome or something else that's potentially deadly, even if rarer.
Exactly that reason: serotonin syndrome is much rarer. Additionally, my concern is about a treatment for a mental health problem that, in the long term, can exacerbate that very problem. The problems I highlight are more insidious.

That said, serotonin syndrome is certainly worth highlighting.
 
This gets under my skin. I never state (nor does the research) that it increases suicide attempts.
...
The correlation of a drop in suicide and the use of SSRIs is just that, a correlation.
If my post annoyed you, I think it's because you misinterpreted what I said. I never claimed that you linked SSRIs with suicides/attempts, nor did I claim that a correlation showed causality. The thought I should have included, which would have made this clearer, is that it seems odd to me that a medication could increase SI without increasing actual suicides. It suggests to me that there's something going on that we're missing. Plus, if the increase is only in SI and not in actual suicides, that's a lot less concerning than many other adverse effects.
 
How is the difference between a biological factor and a psychosocial factor defined?
I appreciate that you are highlighting the dualism that is often invoked when discussing biology and environment (at least that is what I took away from this question).
 
If my post annoyed you, I think it's because you misinterpreted what I said. I never claimed that you linked SSRIs with suicides/attempts, nor did I claim that a correlation showed causality. The thought I should have included, which would have made this clearer, is that it seems odd to me that a medication could increase SI without increasing actual suicides. It suggests to me that there's something going on that we're missing. Plus, if the increase is only in SI and not in actual suicides, that's a lot less concerning than many other adverse effects.
To be clear, my annoyance was not at you specifically. I hear this argument often in the world, and that annoys me b/c it is not rooted in evidence and makes one of the biggest mistakes we teach undergrads in psychology not to make: correlation does not equal causation.

My work (at this point mostly teaching and research with clinical work through my research) is primarily with depression and suicide (mostly DBT). I would say an increase in suicidal thoughts, while not as severe as suicide attempts, is still very serious. However, I do prioritize my concerns for mood switching, which was the crux of my post.

Finally, it is very difficult to show an effect on suicide attempts or deaths. Such low-base-rate events would require extremely large sample sizes. This also ignores the poor psychometrics for collecting data on attempts, which rely primarily on self-report via unstructured interviews. I've seen unpublished research (from the DBT world) indicating that people at high risk for suicide under-report non-suicidal self-injury and suicide attempts.
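To put a rough number on that first point, here is a back-of-the-envelope power sketch. The event rates are illustrative assumptions (not figures from any actual trial), plugged into the standard sample-size formula for a two-proportion z-test:

```python
# Sketch: participants per arm needed to detect a doubling of a
# low-base-rate event (e.g., suicide attempts over a trial window).
# Both rates below are hypothetical, chosen only to illustrate scaling.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-arm n for a two-sided two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

print(round(n_per_arm(0.001, 0.002)))  # ~23,500 per arm for 0.1% -> 0.2%
```

Even a doubling of a 0.1% event rate needs tens of thousands of participants per arm, which is why trials are essentially never powered for attempts or deaths.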
 
To be clear, my annoyance was not at you specifically.
Understood. I'm reading and replying during a long car ride, which makes it hard to do either as carefully as I normally would.

Finally, it is very difficult to show an effect on suicide attempts or deaths. Such low-base-rate events would require extremely large sample sizes. This also ignores the poor psychometrics for collecting data on attempts, which rely primarily on self-report via unstructured interviews. I've seen unpublished research (from the DBT world) indicating that people at high risk for suicide under-report non-suicidal self-injury and suicide attempts.
That's why I tend to look more to epidemiological data for such an issue. It certainly has its flaws and limitations, but it gets the whole population and doesn't rely on self-report. The perfect study to be sure just isn't feasible.
 
At some level most disease is due to a weakness in an organ system or multiple organ systems. I don't think it is incongruent with evidence to consider mental health illnesses a weakness, as long as this does not interfere with the practitioner's ability to empathize with the patient. Weakness does not imply that the patient is at fault, and even if a practitioner chooses to believe that the patient is at fault for their weakness, this does not lessen the duty to treat them. I think the medical world often goes to enormous effort to manufacture a perspective that the patient's illness is not a weakness and nobody is at fault, because it makes our work feel more pure.
I know I am piling onto this thread, but I've got more flexibility today than I have had in the past few months.

I have a problem with using the term illness to describe mental health disorders. Otherwise, I can't really disagree with what you are saying b/c it's just a way to describe what is happening. However, we would need to know what specifically the "weakness" is. In this case, is the organ the brain? Or is it the CNS along with the endocrine system (and let's throw in the immune system as well)? And how is the brain weak for depressed individuals? Or how is it weak for individuals with social anxiety disorder?

I think it is far more descriptive to say that social anxiety disorder is a combination of genetic influences on fear sensitivity and environmental experiences of the feared stimuli, which must be viewed within the context of the world you are living in and how it interferes with your daily life. To me, mental health disorders seem best explained from a biopsychosocial perspective rather than a weakness-in-organs perspective.
 
Let's get away from the biological v psychosocial distinction, which is philosophically indefensible under non-dualist theories of mind, and talk instead about endogenous v. reactive. More precisely, whether the causation of the depressive symptoms is more attributable to factors external to the individual or factors internal to the individual.
Love the inclusion of dualism in this discussion. I am even hesitant to dichotomize into endogenous vs. reactive. I would love to hear more about this topic. I think of MDD as a runaway stress response. This could be due to biological stressors (e.g., depressant substances), daily life hassles, a major stressor (e.g., bereavement, divorce), existential crisis, social isolation, sedentary lifestyle, and on and on and on. And most often a combination of these stressors.

How would the endogenous/reactive perspective add to treatment or research of MDD?
 
Private practice is different from the integrated hospital systems. I have seen therapists not want to lose patients so they don't refer for medication management. They aren't on salary like in a hospital system.
That is a serious problem. I've seen hospitals refuse adoption of DBT b/c it would reduce their profits by reducing hospitalizations.

Generally speaking, I feel for the average consumer seeking mental health treatment. The system doesn't prioritize what is most likely to help.
 
This is a very strong statement. Can you please provide citations? There are, for example, studies indicating that medication may actually reduce the effectiveness of other treatments:

Exercise alone beats exercise+meds in the maintenance stage of depression treatment, with no differences at end of Tx.

Studies indicate benzos reduce the effectiveness of exposure treatments (I can find cites later).

Here is just some data indicating that combination treatment does not provide better outcomes:

This recent meta indicates that, globally speaking, meds+psychotherapy works better than psychotherapy alone (ES = .35). HOWEVER, there is no difference at follow-up/maintenance. When comparing against CBT, the effect was even smaller (ES = .15). To be fair, I can't imagine that having any real-world significance, particularly in light of the potential harm from medications.

Social anxiety disorder doesn't show better outcomes for combination either, but I always forget the citation for this meta. I have the data. Basically, the ESs are as follows: sham treatments = .63; psychodynamic = .62; SSRI = .91; individual CBT = 1.19; meds+therapy = 1.30. While CBT was statistically better than SSRI, meds+therapy was NOT statistically different from CBT alone.

I could look for more.

Love it!!!

Those are incredible points! We should try to define which specific disorders and which phases to discuss. I'm totally willing to walk my statements back. Which disorder are we talking about? Which phases?
 
I don't want to be overly semantic about it, but is there any strong evidence to support any putative mechanism of any medication for depression? For example, SSRIs have an effect above that of placebo, but by most people's interpretation that effect has no clinical/real-world significance. More importantly, the effect is unrelated to changing serotonin activity in individuals. An abundance of data indicates SSRIs do not decrease depressive symptoms through an alteration of serotonin activity, and that depression is not caused by reduced serotonergic activity. As I highlighted before, this could be due to the heterogeneity of depressive etiologies. So maybe it works that way for some, but not for the average person.

Yes. There is excellent evidence to support a neurogenic effect for antidepressants. This is the classical pathway which takes several weeks to provide benefit, consistent with the length of time necessary for new neurons to be born, migrate appropriately, and integrate into existing cortical networks:

Santarelli, L., Saxe, M., Gross, C., Surget, A., Battaglia, F., Dulawa, S., ... & Belzung, C. (2003). Requirement of hippocampal neurogenesis for the behavioral effects of antidepressants. Science, 301(5634), 805-809.


There is also quite solid evidence for an additional pathway, the allopregnanolone pathway, which is faster-acting and also potentially more relevant for disorders like PMDD and postpartum depression. We presume this is the relevant pathway when we use luteal-phase SSRI for the treatment of PMDD, since the classical pathway couldn't be effective over the luteal-phase timespan.

Pinna, G., Dong, E., Matsumoto, K., Costa, E., & Guidotti, A. (2003). In socially isolated mice, the reversal of brain allopregnanolone down-regulation mediates the anti-aggressive action of fluoxetine. Proceedings of the National Academy of Sciences, 100(4), 2035-2040.


Serotonergic activity, as you note, is irrelevant. The serotonin hypothesis is a piece of crap cooked up by the pharma marketing teams. It has nothing to do with science.
 
I've got to say, it's not clear to me that the suicidality risk is something real. The trials which found the increase over placebo didn't find any actual suicides, and the rate of suicides in the US was dropping when SSRIs hit the scene, not increasing.

Completed suicide is pretty hard to study prospectively. I don't think it's at all a foregone conclusion that just because SSRIs may increase suicidal ideation, they would also increase completed suicide. But I find the healthy volunteer studies in particular quite convincing on this point.

Also, have you not seen this clinically? It's rare, but if you give out antidepressants regularly I'm surprised you haven't seen it. I have seen it a number of times, always in young adults under 25 (I don't treat adolescents). Most of them describe the thoughts as feeling ego-dystonic, as if they 'came out of nowhere' or 'weren't me,' which is quite unlike how people with depression- or personality-related SI typically describe their thoughts. Also it goes away pretty quickly when you take the obvious step of stopping the med, which is why I'd be surprised if there were any detectable contribution to completed suicide rates.
 
I'll admit I didn't read it very closely when I saw the sample size was so small. My intention was to point out that others have considered this question worth investigating.

However, after looking at it more closely I think you're also misinterpreting the results and grossly misrepresenting the implications.

First, the severity for the sample on the HAM-D (HAM-D=18) was WAY below the typical minimum threshold (HAM-D=24), reducing the room for change.

Yes, agreed, poorly designed study, for this and other reasons

Second, the control comparison was at two-week follow-up, which is two weeks less than any follow-up length in the Kirsch trial. Kirsch only included trials with 4- to 8-week follow-up.

Indeed, which was the intended outcome point (if they expected a 4-week outcome they could have chosen one, but I don't see a reason to expect a time lag for the placebo effect the way there is for a medication effect that depends on the genesis and migration of new brain cells).

Third, as you (and I) mentioned, the wait-list control comparison was not statistically significant. There was an effect of comparable size to antidepressant medication (d=.54), but the actual effect in raw HAM-D units was below what is typical (Difference=2.30).

Right. Neither the wait list control nor the placebo had a statistically significant effect at two weeks. We agree. The effect size (difference of means/SD) is pretty meaningless with a tiny sample size, where outcome and SD are both highly dependent on chance effects, with the result that Cohen's d can blow up or shrink very quickly with stochastic changes in outcome in either direction. It's a meaningless statistic in a sample of this size. Reporting it at all is disingenuous. It should be ignored, but since it happened to look good, instead it was reported and made much of.
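To make that concrete, here's a minimal simulation sketch; the true effect, sample sizes, and replication count are assumptions chosen only to show how much d swings by chance in a tiny sample:

```python
# Sketch: spread of estimated Cohen's d across many simulated studies
# with a fixed true standardized effect of 0.3 (an assumed value).
import numpy as np

rng = np.random.default_rng(0)

def d_interval(n, true_d=0.3, reps=10_000):
    """2.5th/97.5th percentiles of one-sample Cohen's d across reps studies of size n."""
    x = rng.normal(true_d, 1.0, size=(reps, n))  # standardized change scores
    d = x.mean(axis=1) / x.std(axis=1, ddof=1)
    return np.percentile(d, [2.5, 97.5])

print(d_interval(10))   # roughly [-0.4, 1.0]: a tiny study can show a 'medium' d by luck
print(d_interval(200))  # roughly [0.16, 0.44]: much tighter around the truth
```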

Fourth, you neglected to report the pre-post results (n=20, collected 4 weeks after intervention, consistent with Kirsch), which were statistically significant for all three outcomes: HAM-D (d=.56, p=.03), QIDS (d=.76, p=.005), SDQ (d=.41, p=.02).

**BUT THEY DIDN'T REPORT THE WAITLIST CONTROL DECREMENT AT 4 WEEKS!** Why not? The obvious implication is that the waitlist control also decreased by a similar amount. Believe me if waitlist control had been steady and they'd gotten a significant difference at 4 weeks, they would have been trumpeting that finding all over the Daily Mail.

I'm pretty disappointed by how you cherry-picked here, based on my read. You quoted the author stating that, "our findings do not support the hypothesis that open-label placebo is effective for MDD," when what they were saying with that statement is they didn't have a sufficient sample size to have a statistically significant result. The full quote for your cherry-picked excerpt says:

"To our knowledge, this is the first RCT to test the efficacy of open-label placebo for MDD. Despite the fact that we observed a medium-sized effect for the main outcome measure, our findings do not support the hypothesis that open-label placebo is effective for MDD. However, the results were in the predicted direction, and this pilot study was limited by small sample size, low statistical power, and short duration."

Yes, since the authors cherry-picked their findings, presenting an outcome other than the intended primary endpoint and reporting a meaningless statistic because it happened to support their wished-for result, I felt free to lemon-drop their nonsensical report of a 'medium-sized effect'. I don't think the fact that the results were in the predicted direction is encouraging. It's not that we don't believe placebo effects exist. The question is whether they are big enough to justify choosing to treat with placebo over something with biologically based efficacy. This study suggests they are not.
 
If my post annoyed you, I think it's because you misinterpreted what I said. I never claimed that you linked SSRIs with suicides/attempts, nor did I claim that a correlation showed causality. The thought I should have included, which would have made this clearer, is that it seems odd to me that a medication could increase SI without increasing actual suicides. It suggests to me that there's something going on that we're missing. Plus, if the increase is only in SI and not in actual suicides, that's a lot less concerning than many other adverse effects.
Nice to see you around hamstergang! It's been a minute.

The SI increase was also a bit misleading for at least one of the original studies. IIRC, the study saw a nearly two-fold increase, but for a rarely occurring event: something like 1.2% to 2.4% risk of experiencing SI. I also recall that no one actually made an attempt during the study, but it's the study originally cited for the increased risk.
 
Nice to see you around hamstergang! It's been a minute.

The SI increase was also a bit misleading for at least one of the original studies. IIRC, the study saw a nearly two-fold increase, but for a rarely occurring event: something like 1.2% to 2.4% risk of experiencing SI. I also recall that no one actually made an attempt during the study, but it's the study originally cited for the increased risk.
Yes, small but statistically significant. And, I would say, clinically significant when we are talking about one of the most commonly prescribed medications.
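For a sense of scale, here's the arithmetic on those illustrative figures (1.2% vs. 2.4%, taken as given from the post above):

```python
# Relative vs. absolute risk for the quoted (illustrative) SI rates.
p_placebo, p_drug = 0.012, 0.024

rr = p_drug / p_placebo    # relative risk: 2.0 ("nearly 2-fold")
ari = p_drug - p_placebo   # absolute increase: 1.2 percentage points
nnh = 1 / ari              # ~83 patients treated per extra case of SI
print(rr, ari, round(nnh))
```

So the "2-fold increase" framing and the "one extra case per ~83 patients" framing describe the same numbers; with prescriptions this common, even the absolute version adds up.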
 
Yes. There is excellent evidence to support a neurogenic effect for antidepressants. This is the classical pathway which takes several weeks to provide benefit, consistent with the length of time necessary for new neurons to be born, migrate appropriately, and integrate into existing cortical networks:

Santarelli, L., Saxe, M., Gross, C., Surget, A., Battaglia, F., Dulawa, S., ... & Belzung, C. (2003). Requirement of hippocampal neurogenesis for the behavioral effects of antidepressants. Science, 301(5634), 805-809.

There is also quite solid evidence for an additional pathway, the allopregnanolone pathway, which is faster-acting and also potentially more relevant for disorders like PMDD and postpartum depression. We presume this is the relevant pathway when we use luteal-phase SSRI for the treatment of PMDD, since the classical pathway couldn't be effective over the luteal-phase timespan.

Pinna, G., Dong, E., Matsumoto, K., Costa, E., & Guidotti, A. (2003). In socially isolated mice, the reversal of brain allopregnanolone down-regulation mediates the anti-aggressive action of fluoxetine. Proceedings of the National Academy of Sciences, 100(4), 2035-2040.

Serotonergic activity, as you note, is irrelevant. The serotonin hypothesis is a piece of crap cooked up by the pharma marketing teams. It has nothing to do with science.
Really appreciate the citations and continued conversation. I am familiar with some of the alternative theories to how medications could work. This is getting into the weeds and would take me a bit to form a cogent response.

I am wondering if you have a personal opinion on which theory is most likely to come out on top.
 
**BUT THEY DIDN'T REPORT THE WAITLIST CONTROL DECREMENT AT 4 WEEKS!** Why not? The obvious implication is that the waitlist control also decreased by a similar amount. Believe me if waitlist control had been steady and they'd gotten a significant difference at 4 weeks, they would have been trumpeting that finding all over the Daily Mail.
They didn't report it because the waitlist control received the open-label placebo after the 2-week follow-up.

From the text:
"For the first two weeks, patients were randomized to either open-label placebo or waitlist control. After two weeks, participants originally randomized to open-label placebo continued for an additional two weeks on open-label placebo; and participants originally assigned to waitlist control were switched to open-label placebo for an additional four weeks, so long as they continued to meet eligibility requirements after two weeks on the waitlist."

Yes, since the authors cherry-picked their findings, presenting an outcome other than the intended primary endpoint and reporting a meaningless statistic because it happened to support their wished-for result, I felt free to lemon-drop their nonsensical report of a 'medium-sized effect'. I don't think the fact that the results were in the predicted direction is encouraging. It's not that we don't believe placebo effects exist. The question is whether they are big enough to justify choosing to treat with placebo over something with biologically based efficacy. This study suggests they are not.

They didn't cherry-pick, that was the study design. It was a relatively flimsy design, even for a pilot study. However, it demonstrated feasibility and acceptability and was published in Psychotherapy and Psychosomatics (IF=13.7). I think what this study demonstrated is a proof of concept with results suggesting that it would be worthwhile to run a more rigorously designed study with a larger sample.
 
They didn't report it because the waitlist control received the open-label placebo after the 2-week follow-up.

From the text:
"For the first two weeks, patients were randomized to either open-label placebo or waitlist control. After two weeks, participants originally randomized to open-label placebo continued for an additional two weeks on open-label placebo; and participants originally assigned to waitlist control were switched to open-label placebo for an additional four weeks, so long as they continued to meet eligibility requirements after two weeks on the waitlist."

Sorry, you're right. I missed that. But that's even dumber! Why did they intentionally destroy their control group, eliminating the possibility of making any meaningful observation at all after two weeks? Why not do a regular crossover design? (Did they plan a crossover and then decide to extend their treatment time after observing no effect at their chosen two week endpoint?)

Without a control group one can make no comment on efficacy, and the authors should not have done so.


They didn't cherry-pick, that was the study design. It was a relatively flimsy design, even for a pilot study. However, it demonstrated feasibility and acceptability and was published in Psychotherapy and Psychosomatics (IF=13.7).

Good, and their comments on feasibility and acceptability are justified. They are not justified in commenting on efficacy past the point where their control group ended, and they did not find efficacy in the window where they did have a control.

I think what this study demonstrated is a proof of concept with results suggesting that it would be worthwhile to run a more rigorously designed study with a larger sample.

Again, if your pilot shows no effect, nobody is going to give you money for a bigger study. There are too many other worthy ideas out there begging for funding.
 
Really appreciate the citations and continued conversation. I am familiar with some of the alternative theories to how medications could work. This is getting into the weeds and would take me a bit to form a cogent response.

I am wondering if you have a personal opinion on which theory is most likely to come out on top.

I don't think it's a competition. I think both effects are likely at play, with the neurogenic effect perhaps operating more prominently in some patients and the neurosteroid effect in others.
 
They didn't cherry-pick, that was the study design. It was a relatively flimsy design, even for a pilot study. However, it demonstrated feasibility and acceptability and was published in Psychotherapy and Psychosomatics (IF=13.7). I think what this study demonstrated is a proof of concept with results suggesting that it would be worthwhile to run a more rigorously designed study with a larger sample.

The point of a pilot study is to use a design that is similar to the larger, future trial so that you can demonstrate feasibility and troubleshoot any issues before you move on to the main trial. They didn't do that here. Instead they tried to play it both ways by treating the study as a "pilot" (failed b/c no way would the larger trial be so poorly designed) and an attempt to demonstrate efficacy (underpowered). Study sections must reward this kind of behavior because it's alarmingly common.

I'm a little surprised at that IF, but it's irrelevant to this discussion.
 
  • Like
Reactions: 3 users
Sorry, you're right. I missed that. But that's even dumber! Why did they intentionally destroy their control group, eliminating the possibility of making any meaningful observation at all after two weeks? Why not do a regular crossover design? (Did they plan a crossover and then decide to extend their treatment time after observing no effect at their chosen two week endpoint?)

I would hope not, that would be wildly unethical, and would require multiple members of the study team to be complicit. Thus, I think it's highly unlikely. If you're going to f*ck with data like that I don't think you'd be as transparent about procedures.

Without a control group one can make no comment on efficacy, and the authors should not have done so.
The only comment they made on efficacy was with the control group, and they acknowledged it was not statistically significant. To me that seems appropriate and accurate, given that it's a pilot.

Good, and their comments on feasibility and acceptability are justified. They are not justified in commenting on efficacy past the point where their control group ended, and they did not find efficacy in the window where they did have a control.
Yes, and they didn't, on my read (thus the "lemon drop" quote that you initially cited).

Again, if your pilot shows no effect, nobody is going to give you money for a bigger study. There are too many other worthy ideas out there begging for funding.

No one has ever done a comparison of combined med/psychotherapy (the "gold standard") vs combined open-label placebo/psychotherapy. I think that's a very worthy idea that would probably have minimal costs associated with it (don't need "$5MM" to run that study).

The point of a pilot study is to use a design that is similar to the larger, future trial so that you can demonstrate feasibility and troubleshoot any issues before you move on to the main trial. They didn't do that here. Instead they tried to play it both ways by treating the study as a "pilot" (failed b/c no way would the larger trial be so poorly designed) and an attempt to demonstrate efficacy (underpowered). Study sections must reward this kind of behavior because it's alarmingly common.

I'm a little surprised at that IF, but it's irrelevant to this discussion.

Yea, it's certainly a flawed study. However, I think the IF of the journal is germane to the conversation when people are critiquing it by saying that the study is flawed to the point that it's meaningless. That's a subjective judgment, and clearly there are experts in this area who disagree with that assessment.
 
Yea, it's certainly a flawed study. However, I think the IF of the journal is germane to the conversation when people are critiquing it by saying that the study is flawed to the point that it's meaningless. That's a subjective judgment, and clearly there are experts in this area who disagree with that assessment.

This is an appeal to authority. Many people commenting on this thread have expertise in research design. Besides, this was published as a letter to the editor and cited 128 times in the last 8 years so I don't think it's really driving the impact factor for this particular journal.

FWIW, I think this is a situation where gain score t-tests rather than ANCOVAs would've been appropriate due to the smaller sample size and likely measurement error in the dependent variables.
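For anyone curious what that looks like in practice, here's a minimal sketch of the two analyses on simulated pre/post data; the group sizes, score distributions, and the assumed 2-point benefit are all made up for illustration:

```python
# Sketch: gain-score t-test vs. ANCOVA on simulated two-arm pre/post data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 20  # per arm, in the ballpark of the pilot discussed here

pre = rng.normal(18, 4, size=2 * n)                      # baseline HAM-D-like scores
group = np.repeat([0, 1], n)                             # 0 = control, 1 = treatment
post = pre - 2.0 * group + rng.normal(0, 4, size=2 * n)  # assumed 2-point benefit
df = pd.DataFrame({"pre": pre, "post": post, "gain": post - pre, "group": group})

# Option 1: t-test on gain (pre-to-post change) scores
t, p = stats.ttest_ind(df.loc[df.group == 1, "gain"], df.loc[df.group == 0, "gain"])
print(f"gain-score t-test: t = {t:.2f}, p = {p:.3f}")

# Option 2: ANCOVA -- regress post score on group, adjusting for baseline
fit = smf.ols("post ~ pre + group", data=df).fit()
print(f"ANCOVA group effect: {fit.params['group']:.2f} (p = {fit.pvalues['group']:.3f})")
```

The gain-score approach only assumes the change scores are comparable across arms, which is why it can be the safer choice when baseline measures are noisy and n is small.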
 
However, I think the IF of the journal is germane to the conversation when people are critiquing it by saying that the study is flawed to the point that it's meaningless.

I didn't say that it was meaningless. That's hyperbole. But if I were reviewing a grant proposal and this was presented as the pilot work, I would be underwhelmed.

The only comment they made on efficacy was with the control group, and they acknowledged it was not statistically significant. To me that seems appropriate and accurate, given that it's a pilot.

What you seem to be missing is that a hypothesis test for the efficacy outcome (or any outcome) has no business in a pilot study in the first place.
 

I had sent those articles to an MD relative of mine who was very pro-hydroxychloroquine about a week before the retraction. Silence from his end until the retractions came out. Super fun conversation to have with a rabid Republican. I think the finer points of the distinction I tried to highlight between the scientific integrity of the academic community versus the journalistic integrity of, say, FOX News, may have been lost on him :unsure:
 
I had sent those articles to an MD relative of mine who was very pro-hydroxychloroquine about a week before the retraction. Silence from his end until the retractions came out. Super fun conversation to have with a rabid Republican. I think the finer points of the distinction I tried to highlight between the scientific integrity of the academic community versus the journalistic integrity of, say, FOX News, may have been lost on him :unsure:

Even without the retracted article on hydroxychloroquine, the other data we have show it has no effect at best, and is dangerous on a large enough scale at worst. I'm not sure why it's political to say we shouldn't widely prescribe a drug with known, potentially fatal cardiac side effects until we at least know it does something for what we want to use it for.
 
I would hope not, that would be wildly unethical, and would require multiple members of the study team to be complicit. Thus, I think it's highly unlikely. If you're going to f*ck with data like that I don't think you'd be as transparent about procedures.

So young... So trusting

No one has ever done a comparison of combined med/psychotherapy (the "gold standard") vs combined open-label placebo/psychotherapy. I think that's a very worthy idea that would probably have minimal costs associated with it (don't need "$5MM" to run that study).

Oh yes you do. Individual psychotherapy trials are wildly expensive because you have to pay the time of the study therapists. People try to reduce the costs by using students who receive group supervision, but it's still a very pricey endeavor.

Yea, it's certainly a flawed study. However, I think the IF of the journal is germane to the conversation when people are critiquing it by saying that the study is flawed to the point that it's meaningless. That's a subjective judgment, and clearly there are experts in this area who disagree with that assessment.

Besides the logical error of appeal to authority, it's simply true that even high-IF journals often publish crap. Knowing the editor (or being on the board) can count for a lot. Also peer review is so random. It's very hard to find people willing to spend time reviewing other people's manuscripts for no reward whatsoever, either material or reputational. Sometimes the people who do the review focus on strange details or miss the point of the paper altogether. Terrible papers can sneak through by the luck of being handed to oblivious reviewers.

You can't just throw down an impact factor and end the conversation. Papers have to be examined on their merits.
 
Again, if your pilot shows no effect, nobody is going to give you money for a bigger study. There are too many other worthy ideas out there begging for funding.

This is an appeal to authority. Many people commenting on this thread have expertise in research design. Besides, this was published as a letter to the editor and cited 128 times in the last 8 years so I don't think it's really driving the impact factor for this particular journal.

I didn't say that it was meaningless. That's hyperbole. But if I were reviewing a grant proposal and this was presented as the pilot work, I would be underwhelmed.

Besides the logical error of appeal to authority, it's simply true that even high-IF journals often publish crap. Knowing the editor (or being on the board) can count for a lot. Also peer review is so random. It's very hard to find people willing to spend time reviewing other people's manuscripts for no reward whatsoever, either material or reputational. Sometimes the people who do the review focus on strange details or miss the point of the paper altogether. Terrible papers can sneak through by the luck of being handed to oblivious reviewers.

You can't just throw down an impact factor and end the conversation. Papers have to be examined on their merits.

Out of curiosity, what would a more rigorously designed RCT pilot study look like? I think one important thing would be to extend the follow-up beyond the typical 4 weeks. I'd love to see how outcomes look further down the line -- 6 months, 12 months, etc.
 
Oh yes you do. Individual psychotherapy trials are wildly expensive because you have to pay the time of the study therapists. People try to reduce the costs by using students who receive group supervision, but it's still a very pricey endeavor.

I'll echo tr here as I've been part of RCTs for therapy as an independent evaluator and as a therapist. Even small studies that only look at therapy are pretty expensive. Throw meds on top of that, to do a well-powered study, with adequate treatment and control arms, with adequate follow-up, definitely well into the millions.
 
Out of curiosity, what would a more rigorously designed RCT pilot study look like? I think one important thing would be to extend the follow-up beyond the typical 4 weeks. I'd love to see how outcomes look further down the line -- 6 months, 12 months, etc.

Actually I think the 2+2 week crossover design would have been very appropriate, which is why I wondered if they just decided to extend the treatment time on the open label placebo arm. It's not ethical to keep people on wait-list control conditions for extended periods of time though, and a usual-care control obviously isn't appropriate for a comparison with placebo. So I don't think you could get long-term follow-up with a meaningful control condition. You could choose to follow the open-label placebo arm over time with informed consent, but I'm not sure how useful that would be. You couldn't ethically prevent those participants from accessing other treatments, and then any improvement would be suspected to relate to the other care they had received.

Edit: You could do usual-care control and usual-care plus placebo for as long as you wanted. That would actually be cheap because all you'd have to do is provide placebo and throw everyone a HAMD once in a while. I seriously doubt you'd see much signal though, due to the heterogeneity of other treatments those participants might be exposed to.
 
This thread made me curious, so I've done a little more digging. A meta-analysis was actually done comparing combined psychotherapy + placebo (d = 1.51, p < .001) to combined psychotherapy + medication (d = 1.73, p < .001). In other words, you get 87% of the effect of combined treatment without actually introducing the medication (The Contribution of Active Medication to Combined Treatments of Psychotherapy and Pharmacotherapy for Adult Depression: A Meta-Analysis - PubMed).

Also, consider that antidepressant meds have side effects, and going off the medications is associated with significant withdrawal effects. For example, an estimated 56% of patients experience withdrawal, 46% of those report that withdrawal effects are "severe," and for some, withdrawal lasts over a year (A systematic review into the incidence, severity and duration of antidepressant withdrawal effects: Are guidelines evidence-based?).

Personally, I think it's concerning that, in order to improve treatment effects by 13%, an estimated 12.7% of people over the age of 12 took an antidepressant in the last month (By the numbers: Antidepressant use on the rise).
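For clarity, the 87% and 13% figures are just the ratio of the two reported effect sizes:

```python
# Arithmetic behind the percentages cited above.
d_combo_placebo, d_combo_med = 1.51, 1.73  # effect sizes from the meta-analysis

print(f"{d_combo_placebo / d_combo_med:.0%} of the combined effect")   # 87%
print(f"{1 - d_combo_placebo / d_combo_med:.0%} incremental benefit")  # 13%
```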
 
This thread made me curious, so I've done a little more digging. A meta-analysis was actually done comparing combined psychotherapy + placebo (d = 1.51, p < .001) to combined psychotherapy + medication (d = 1.73, p < .001). In other words, you get 87% of the effect of combined treatment without actually introducing the medication (The Contribution of Active Medication to Combined Treatments of Psychotherapy and Pharmacotherapy for Adult Depression: A Meta-Analysis - PubMed).

Also, consider that antidepressant meds have side effects, and going off the medications is associated with significant withdrawal effects. For example, an estimated 56% of patients experience withdrawal, 46% of those report that withdrawal effects are "severe," and for some, withdrawal lasts over a year (A systematic review into the incidence, severity and duration of antidepressant withdrawal effects: Are guidelines evidence-based?).

Personally, I think it's concerning that, in order to improve treatment effects by 13%, an estimated 12.7% of people over the age of 12 took an antidepressant in the last month (By the numbers: Antidepressant use on the rise).

I think you should look at that systematic review a little more closely. A large proportion of the data comes from three large online direct-to-consumer surveys.

"These three studies had the largest sample sizes to date (and did not restrict the period during which withdrawal reactions were reported), but the samples were neither randomised nor stratified. It is therefore possible that they may have over-represented people who were dissatisfied with antidepressants."

Y'think?

I also want to point out the oceans of difference between "people who meet criteria for MDD" and "people who are taking SSRIs, SNRIs, or TCAs for any reason."
 
I think you should look at that systematic review a little more closely. A large proportion of the data comes from three large online direct-to-consumer surveys.

"These three studies had the largest sample sizes to date (and did not restrict the period during which withdrawal reactions were reported), but the samples were neither randomised nor stratified. It is therefore possible that they may have over-represented people who were dissatisfied with antidepressants."

Y'think?

I also want to point out the oceans of difference between "people who meet criteria for MDD" and "people who are taking SSRIs, SNRIs, or TCAs for any reason."

Uhhh...Did you just stop reading there?

Here's the rest of that section:

However, as the majority of the participants reported that the antidepressants had reduced their depression, in both the New Zealand (83%) and international (65%) studies the ‘dissatisfaction bias’ concern seems minimal. (Satisfaction data was not provided in the RCPsych study).

Table 1 also summarises eleven other, smaller studies, with diverse methodologies, mostly using assessment periods of just 5–14 days. A multicentre study of 86 people who had been on antidepressants for over 3 months, found that 66 (77%) exhibited withdrawal symptoms within 7 days of having the drug abruptly replaced with placebo (Hindmarch, Kimber, & Cocle, 2000). An 8-week multicentre randomised trial, comparing sertraline and venlafaxine XR patients with major depressive disorder, revealed withdrawal reactions in a combined average of 85% of 129 patients (Sir et al., 2005; Table 4). An RCT study of 95 people who abruptly stopped taking fluoxetine indicated 67% experienced withdrawal reactions, (Zajecka et al., 1998) and a case-report study of 14 people who abruptly withdrew from fluvoxamine found that 86% experienced withdrawal (Black, Wesner, & Gabel, 1993). An additional randomised clinical trial of SSRI withdrawal, covering 185 people, revealed an average withdrawal incidence of 46% (Rosenbaum, Fava, Hoog, Ascroft, & Krebs, 1998). Another study, evaluating 25 outpatients treated with escitalopram, found 14 (56%) experienced withdrawal reactions, with higher dose and lower clearance leading to higher risk of withdrawal (Yasui-Furukori et al., 2016). A further study of 28 users of venlafaxine who were randomised to a three-day or 14-day taper, indicated that 46% experienced withdrawal (Tint, Haddad, & Anderson, 2008). Finally, a small study of 20 outpatients treated with SSRIs before slowly tapering off them found that 45% exhibited withdrawal reactions (Fava, Bernardi, Tomba, & Rafanelli, 2007).

Three studies report somewhat lower rates. One, an open trial of 97 people who discontinued their SSRIs, found 27% experienced withdrawal upon discontinuation (Bogetto, Bellino, Revello, & Patria, 2002). The second, a 12-week randomised, double-blind study of paroxetine patients, showed that of 55 withdrawing from paroxetine 35% developed withdrawal reactions upon abrupt discontinuation (Oehrberg et al., 1995). The third, a randomised, double-blind, placebo-controlled study of escitalopram, found that 27% of 181 people exhibited withdrawal reactions following abrupt replacement with placebo (Montgomery, Nil, Durr-Pal, Loft, & Boulenger, 2005).

These 14 methodologically diverse studies (comprising RCTs, naturalistic studies and surveys) produced incidence rates ranging from 27% to 86%. When grouping the three types of study together, the weighted average for each was: the three surveys, 57.1% (1790/3137); the five naturalistic studies, 52.5% (127/242); and the six RCTs, 50.7% (341/673). The combined median of all studies was 55%, with a weighted average of 55.7% (2258/4052).

I also want to point out the oceans of difference between "people who meet criteria for MDD" and "people who are taking SSRIs, SNRIs, or TCAs for any reason."
True! However, do antidepressants have a larger effect for diagnoses other than depression? The "oceans of difference" comment seems like it's a further critique of current typical antidepressant prescribing practices.
 
I think you should look at that systematic review a little more closely. A large proportion of the data comes from three large online direct-to-consumer surveys.

"These three studies had the largest sample sizes to date (and did not restrict the period during which withdrawal reactions were reported), but the samples were neither randomised nor stratified. It is therefore possible that they may have over-represented people who were dissatisfied with antidepressants."

Y'think?
Uhhh...Did you just stop reading there?

Here's the rest of that section:

However, as the majority of the participants reported that the antidepressants had reduced their depression, in both the New Zealand (83%) and international (65%) studies the ‘dissatisfaction bias’ concern seems minimal. (Satisfaction data was not provided in the RCPsych study).

Table 1 also summarises eleven other, smaller studies, with diverse methodologies, mostly using assessment periods of just 5–14 days. A multicentre study of 86 people who had been on antidepressants for over 3 months, found that 66 (77%) exhibited withdrawal symptoms within 7 days of having the drug abruptly replaced with placebo (Hindmarch, Kimber, & Cocle, 2000). An 8-week multicentre randomised trial, comparing sertraline and venlafaxine XR patients with major depressive disorder, revealed withdrawal reactions in a combined average of 85% of 129 patients (Sir et al., 2005; Table 4). An RCT study of 95 people who abruptly stopped taking fluoxetine indicated 67% experienced withdrawal reactions, (Zajecka et al., 1998) and a case-report study of 14 people who abruptly withdrew from fluvoxamine found that 86% experienced withdrawal (Black, Wesner, & Gabel, 1993). An additional randomised clinical trial of SSRI withdrawal, covering 185 people, revealed an average withdrawal incidence of 46% (Rosenbaum, Fava, Hoog, Ascroft, & Krebs, 1998). Another study, evaluating 25 outpatients treated with escitalopram, found 14 (56%) experienced withdrawal reactions, with higher dose and lower clearance leading to higher risk of withdrawal (Yasui-Furukori et al., 2016). A further study of 28 users of venlafaxine who were randomised to a three-day or 14-day taper, indicated that 46% experienced withdrawal (Tint, Haddad, & Anderson, 2008). Finally, a small study of 20 outpatients treated with SSRIs before slowly tapering off them found that 45% exhibited withdrawal reactions (Fava, Bernardi, Tomba, & Rafanelli, 2007).

Three studies report somewhat lower rates. One, an open trial of 97 people who discontinued their SSRIs, found 27% experienced withdrawal upon discontinuation (Bogetto, Bellino, Revello, & Patria, 2002). The second, a 12-week randomised, double-blind study of paroxetine patients, showed that of 55 withdrawing from paroxetine 35% developed withdrawal reactions upon abrupt discontinuation (Oehrberg et al., 1995). The third, a randomised, double-blind, placebo-controlled study of escitalopram, found that 27% of 181 people exhibited withdrawal reactions following abrupt replacement with placebo (Montgomery, Nil, Durr-Pal, Loft, & Boulenger, 2005).

These 14 methodologically diverse studies (comprising RCTs, naturalistic studies and surveys) produced incidence rates ranging from 27% to 86%. When grouping the three types of study together, the weighted average for each was: the three surveys, 57.1% (1790/3137); the five naturalistic studies, 52.5% (127/242); and the six RCTs, 50.7% (341/673). The combined median of all studies was 55%, with a weighted average of 55.7% (2258/4052).


True! However, do antidepressants have a larger effect for diagnoses other than depression? The "oceans of difference" comment seems like it's a further critique of current typical antidepressant prescribing practices.

I recognize there were plenty of other studies. I also don't buy that 'it helped my depression' means 'I'm not disgruntled'.

Nothing that is treated with antidepressant medications, regardless of class, is a monoamine deficiency. They are also not antibiotic magic bullets. These are medications with non-specific effects that happen to be helpful for some people with a variety of different challenges. As a field we are trying to change the nomenclature and get people to stop calling them antidepressants, because this implies a false specificity. So I don't understand how 'people get SSRIs, SNRIs, TCAs, MAOIs for a variety of different reasons' is a critique.

A super common reason antidepressants get prescribed is for folks with excessive or problematic anxiety. They are also frequently prescribed to groups of people who tend to be somewhat impressionable and with somatic expressions of distress, such as people with BPD (that is closer to a critique of prescribing practices). These are groups of people who are very attuned to bodily sensations and tend to experience even benign experiences as disastrous.

I think we can agree that even SSRIs have some psychoactive properties for some people and thus change their experience in some way (with TCAs this is beyond doubt; if you don't believe me, go take 200 mg of Pamelor and get back to me when you finally wake up tomorrow). They are often given to people who tend to experience state changes as catastrophic or dysphoric. If you ask these people systematically about possible side effects and how bad they are, what role do you think those tendencies play in their results? I hope you also see the answer is unlikely to be "none".

SSRIs clearly have side effects and can precipitate withdrawal symptoms. For some people this ends up being quite bothersome and a very bad time. It is also not even close to being the modal result in practice in situations where people are not being repetitively asked about all the ways they might be suffering.

In medicine we sometimes refer to "retrobulbar micturalgia": that is, does it hurt behind your eyes when you pee? This is not a thing; there is no physiological possibility of it. No one has ever spontaneously reported it. Yet I can assure you that I have met people who will endorse it when asked about it specifically. There are many reasons not to take reports like this, gathered in the context of being asked over and over about them, as representing inevitable and intrinsic phenomena 100% attributable to a particular molecule or the absence thereof.
 
Uhhh...Did you just stop reading there?

Here's the rest of that section:

However, as the majority of the participants reported that the antidepressants had reduced their depression, in both the New Zealand (83%) and international (65%) studies the ‘dissatisfaction bias’ concern seems minimal. (Satisfaction data was not provided in the RCPsych study).

Table 1 also summarises eleven other, smaller studies, with diverse methodologies, mostly using assessment periods of just 5–14 days. A multicentre study of 86 people who had been on antidepressants for over 3 months, found that 66 (77%) exhibited withdrawal symptoms within 7 days of having the drug abruptly replaced with placebo (Hindmarch, Kimber, & Cocle, 2000). An 8-week multicentre randomised trial, comparing sertraline and venlafaxine XR patients with major depressive disorder, revealed withdrawal reactions in a combined average of 85% of 129 patients (Sir et al., 2005; Table 4). An RCT study of 95 people who abruptly stopped taking fluoxetine indicated 67% experienced withdrawal reactions, (Zajecka et al., 1998) and a case-report study of 14 people who abruptly withdrew from fluvoxamine found that 86% experienced withdrawal (Black, Wesner, & Gabel, 1993). An additional randomised clinical trial of SSRI withdrawal, covering 185 people, revealed an average withdrawal incidence of 46% (Rosenbaum, Fava, Hoog, Ascroft, & Krebs, 1998). Another study, evaluating 25 outpatients treated with escitalopram, found 14 (56%) experienced withdrawal reactions, with higher dose and lower clearance leading to higher risk of withdrawal (Yasui-Furukori et al., 2016). A further study of 28 users of venlafaxine who were randomised to a three-day or 14-day taper, indicated that 46% experienced withdrawal (Tint, Haddad, & Anderson, 2008). Finally, a small study of 20 outpatients treated with SSRIs before slowly tapering off them found that 45% exhibited withdrawal reactions (Fava, Bernardi, Tomba, & Rafanelli, 2007).

Three studies report somewhat lower rates. One, an open trial of 97 people who discontinued their SSRIs, found 27% experienced withdrawal upon discontinuation (Bogetto, Bellino, Revello, & Patria, 2002). The second, a 12-week randomised, double-blind study of paroxetine patients, showed that of 55 withdrawing from paroxetine 35% developed withdrawal reactions upon abrupt discontinuation (Oehrberg et al., 1995). The third, a randomised, double-blind, placebo-controlled study of escitalopram, found that 27% of 181 people exhibited withdrawal reactions following abrupt replacement with placebo (Montgomery, Nil, Durr-Pal, Loft, & Boulenger, 2005).

These 14 methodologically diverse studies (comprising RCTs, naturalistic studies and surveys) produced incidence rates ranging from 27% to 86%. When grouping the three types of study together, the weighted average for each was: the three surveys, 57.1% (1790/3137); the five naturalistic studies, 52.5% (127/242); and the six RCTs, 50.7% (341/673). The combined median of all studies was 55%, with a weighted average of 55.7% (2258/4052).

I don't have time to look at this paper today (maybe later), but I'm curious about the emphasis on withdrawal effects. I think we all know that SSRIs should not be stopped abruptly due to withdrawal effects; that's why it's standard clinical practice to taper for discontinuation. Why is this worth mentioning?
 
@beginner2011
What argument are you trying to make? Is it still for use of placebos? Or is it something about the current prescriptive habits of medications for depression?
 
I recognize there were plenty of other studies. I also don't buy that 'it helped my depression' means 'I'm not disgruntled'.
The fact that the average withdrawal prevalence was similar across studies indicates reliability of the effect. You can "lemon drop" all you want, but the limitations were accounted for in the study. It's not perfect, but it's certainly not trash.

Nothing that is treated with antidepressant medications, regardless of class, is a monoamine deficiency. They are also not antibiotic magic bullets. These are medications with non-specific effects that happen to be helpful for some people with a variety of different challenges. As a field we are trying to change the nomenclature and get people to stop calling them antidepressants, because this implies a false specificity. So I don't understand how 'people get SSRIs, SNRIs, TCAs, MAOIs for a variety of different reasons' is a critique.
Do SSRIs, SNRIs, TCAs, MAOIs have larger effect sizes for other diagnoses? Sometimes it seems like they're used by doctors as a placebo-effect inducer. "Got a complaint? I'll throw an antidepressant at 'em and see if it gets better."

A super common reason antidepressants get prescribed is for folks with excessive or problematic anxiety. They are also frequently prescribed to groups of people who tend to be somewhat impressionable and with somatic expressions of distress, such as people with BPD (that is closer to a critique of prescribing practices). These are groups of people who are very attuned to bodily sensations and tend to experience even benign experiences as disastrous.

Sure, so why not prescribe them a placebo that has zero side effects? The data I've come across suggest that in most cases psytx+medication is not superior to psytx+placebo. Am I missing something? For example:

Results: Clinical Global Impressions scale response rates in the intention-to-treat sample were 29 (50.9%) (FLU), 31 (51.7%) (CCBT), 32 (54.2%) (CCBT/FLU), 30 (50.8%) (CCBT/PBO), and 19 (31.7%) (PBO), with all treatments being significantly better than PBO. On the Brief Social Phobia Scale, all active treatments were superior to PBO. In the linear mixed-effects models analysis, FLU was more effective than CCBT/FLU, CCBT/PBO, and PBO at week 4; CCBT was also more effective than CCBT/FLU and CCBT/PBO. By the final visit, all active treatments were superior to PBO but did not differ from each other. Site effects were found for the Subjective Units of Distress Scale assessment, with FLU and CCBT/FLU superior to PBO at Duke University Medical Center, Durham, NC. Treatments were well tolerated.
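As a rough check on that reading, back-calculating the arm sizes from the reported counts and percentages (my approximation; 32/54.2% and 30/50.8% both imply about 59 per arm) and running a two-proportion z-test on CCBT/FLU vs. CCBT/PBO gives:

```python
# Sketch: is CCBT/FLU detectably better than CCBT/PBO in the quoted trial?
# Arm sizes are back-calculated approximations, not figures from the paper.
from statsmodels.stats.proportion import proportions_ztest

z, p = proportions_ztest(count=[32, 30], nobs=[59, 59])
print(f"z = {z:.2f}, p = {p:.2f}")  # ~ z = 0.37, p = 0.71 -- no detectable difference
```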


I think we can agree that even SSRIs have some psychoactive properties for some people and thus change their experience in some way (with TCAs this is beyond doubt; if you don't believe me, go take 200 mg of Pamelor and get back to me when you finally wake up tomorrow). They are often given to people who tend to experience state changes as catastrophic or dysphoric. If you ask these people systematically about possible side effects and how bad they are, what role do you think those tendencies play in their results? I hope you also see the answer is unlikely to be "none".
How else would you propose we assess for side effects/withdrawal?

SSRIs clearly have side effects and can precipitate withdrawal symptoms. For some people this ends up being quite bothersome and a very bad time. It is also nowhere near the modal result in practice when people are not being repeatedly asked about all the ways they might be suffering.

Source?

In medicine we sometimes refer to "retrobulbar micturalgia". That is, does it hurt behind your eyes when you pee? This is not a thing; there is no physiological basis for it, and no one has ever spontaneously reported it. Yet I can assure you that I have met people who will endorse this when asked about it specifically. There are many reasons not to take reports like this, elicited by asking over and over, as representing inevitable and intrinsic phenomena 100% attributable to a particular molecule or the absence thereof.
Again, how would you propose we evaluate side effects/withdrawal if not via self-report?

I don't have time to look at this paper today (maybe later), but I'm curious about the emphasis on withdrawal effects. I think we all know that SSRIs should not be stopped abruptly due to withdrawal effects; that's why it's standard clinical practice to taper for discontinuation. Why is this worth mentioning?
The withdrawal effects are not the primary argument against prescription. The primary argument is that the 13% increase in effect size is not worth the iatrogenic effects and the monetary cost to the HCS/individual. Given the well-known side effects and withdrawal risk, it would make more sense to just offer psytx+placebo.


@beginner2011
What argument are you trying to make? Is it still for use of placebos? Or is it something about the current prescriptive habits of medications for depression?

I'm making two arguments:

1. On average, it appears that placebo (psychotherapy + placebo) achieves 87% of the effect on depression symptoms that medication (psychotherapy + medication) achieves.

2. Given the harms of medication (i.e., side effects, withdrawal, costs to the health care system and the individual), I would argue that the benefit (13% average increase in effect size) is not worth the harm.

I do think it's odd that no one has acknowledged the strength of the placebo argument in light of the meta-analysis I linked. I'll state it again:

A meta-analysis was actually done comparing combined psychotherapy + placebo (d = 1.51, p < .001) to combined psychotherapy + medication (d = 1.73, p < .001). In other words, you get 87% of the effect of combined treatment without actually introducing the medication (The Contribution of Active Medication to Combined Treatments of Psychotherapy and Pharmacotherapy for Adult Depression: A Meta-Analysis - PubMed).
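To spell out where the 87% (and my "13%") comes from, using the two effect sizes above (my arithmetic, not a figure reported by the paper):

$$\frac{d_{\text{psytx+placebo}}}{d_{\text{psytx+medication}}} = \frac{1.51}{1.73} \approx 0.87$$

So psytx+placebo captures roughly 87% of the combined-treatment effect, leaving roughly 13% attributable to the active drug.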
 
Last edited:
There is way too much in this post to respond to in a timely fashion, but I just want to ask - given all the threads on this forum that go on about incompetent interventions delivered by half-trained midlevels, how likely is it that the average person presenting for care received high-quality psychotherapy? The modal therapist is not a well-trained PhD/PsyD.

I'm curious because I don't know - are there any papers estimating the increase in effect size of psychotherapy on top of medications versus medications alone? How does it compare? If we are going to talk about HCS costs, Lexapro is much cheaper than even a mid-level therapist. It's an empirical matter at the end of the day, but if we are going to start bringing the money question into it, even significant side effects might be worth the cost tradeoff if the added effect of therapy isn't big enough.
 
  • Like
Reactions: 1 users
I think the cost tradeoff and the quality of psychotherapy delivery are fantastic questions, and I'm not aware of any research comparing the costs. I'll admit that the benefit/harm ratio calculation I'm making is based mostly on opinion. The only data I have is that the system (i.e., CMS) is currently paying out roughly 3-4x the hourly rate for psychiatrists vs. psychologists.

I do believe the quality of psychotherapy matters, and it doesn't get much attention from the HCS generally. No one looks at outcomes to determine compensation, for a number of reasons (scientifically challenging, politically challenging, etc.).

As for whether the specific harm/benefit ratio is "acceptable", I think that's a decision for patients to make. It strikes me as odd that everyone is so quick to say that psytx+meds is the obvious "gold standard" when there are so many risks of harm associated with the med and the increase in effect is so small.

To your specific empirical question, the most thorough meta-analysis that I've seen is Cuijpers' 2014, which reported that the addition of psychotherapy increased the effect of medication alone (g=.37):

Table 3. Direct comparisons between psychotherapy, pharmacotherapy, combined psychotherapy and pharmacotherapy, and placebo in anxiety and depressive disorders (Hedges' g)
Comparison                     Ncomp   g      95% CI      I²    95% CI   NNT
Combined vs. placebo           11      0.74   0.48-1.01   65    33-82    2.50
Pharmacotherapy vs. combined   11      0.37   0.12-0.63   43    0-72     4.85
Pharmacotherapy vs. placebo    11      0.35   0.21-0.49   0     0-60     5.10
Psychotherapy vs. combined     11      0.38   0.16-0.59   53    8-76     4.72
Psychotherapy vs. placebo      11      0.37   0.11-0.64   68    41-83    4.85
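As an aside, the NNT column looks consistent with the standard Kraemer & Kupfer conversion from an effect size (that's my assumption about how it was computed, not something stated in the table). A minimal sketch:

Code:
from statistics import NormalDist

def nnt_from_g(g):
    """Number needed to treat from an effect size, via the
    Kraemer & Kupfer formula: NNT = 1 / (2 * Phi(g / sqrt(2)) - 1)."""
    auc = NormalDist().cdf(g / 2 ** 0.5)  # P(random treated patient beats random control)
    return 1 / (2 * auc - 1)

# Approximately reproduces the table's NNT column from its g column:
for label, g in [("Combined vs. placebo", 0.74),
                 ("Pharmacotherapy vs. combined", 0.37),
                 ("Pharmacotherapy vs. placebo", 0.35)]:
    print(f"{label}: NNT ~ {nnt_from_g(g):.2f}")  # ~2.51, ~4.84, ~5.12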

 
Last edited:
I'm not that familiar with Hedges' g, but numerically the effect sizes of medication alone and psychotherapy alone look pretty similar. Forget psychiatrists; we don't write the majority of antidepressant scripts in this country. The hourly rate of, say, an NP makes the cost difference much smaller, but meds definitely still increase the number of people who can have "access" or whatever the current buzzword is.

If you believe those NNTs, this meta-analysis suggests that antidepressants are some of the most effective drugs known to medicine. Half of everyone in this country is on a statin for cholesterol or a BP med for asymptomatic HTN. Those interventions, at best, have an NNT in the 50s, and those meds also sure as heck have side effects for some people. Some people take a medication and see their prescriber every three months, if not less often. It is hard to beat that from the system's perspective, efficiency-wise.

Look, I want all of my patients to be able to receive high quality therapy. I treat a lot of OCD and it would be borderline malpractice not to push someone to engage in ERP if they haven't before. I just don't think this data supports the idea that we should obviously be giving placebos.
 
  • Like
Reactions: 1 users
I'm making two arguments:

1. On average, it appears that placebo (psychotherapy + placebo) achieves 87% of the effect on depression symptoms that medication (psychotherapy + medication) achieves.
I don't think anyone is arguing this.
2. Given the harms of medication (i.e., side effects, withdrawal, costs to the health care system and the individual), I would argue that the benefit (13% average increase in effect size) is not worth the harm.
I think this is too broad a statement. My view is that providing meds as the de facto treatment to a population is the problem (or at least doing so without more stringent informed consent). But the evidence for SSRIs in severe depression is much stronger. So the benefit may be worth the harm if more stringent informed-consent procedures are applied, and especially if we reduce prescriptions for mild depression.
 
  • Like
Reactions: 1 user
Hedges' g is similar to Cohen's d in terms of interpretation. It's used a lot in meta-analyses to look at effect sizes between experimental and control groups, and it's a little less biased than Cohen's d in smaller samples.
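A minimal sketch of the relationship, with made-up numbers (the small-sample correction below is the usual approximation, not anything specific to the metas cited here):

Code:
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d shrunk by a small-sample bias correction."""
    d = cohens_d(mean1, mean2, sd1, sd2, n1, n2)
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))  # approximate correction factor J

# With 10 per group the correction shaves ~4% off d; with 200 per group g ~ d.
print(hedges_g(15.0, 12.0, 4.0, 4.0, 10, 10))    # ~0.718 vs d = 0.75
print(hedges_g(15.0, 12.0, 4.0, 4.0, 200, 200))  # ~0.749 vs d = 0.75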
 
  • Like
Reactions: 1 users
To your specific empirical question, the most thorough meta-analysis that I've seen is Cuijpers' 2014, which reported that the addition of psychotherapy increased the effect of medication alone (g=.37):
I am not really sure if this meta is providing very different results than the one I posted earlier.
Here is just some data indicating that combination treatment does not provide better outcomes:

This recent meta indicates that, globally speaking, meds+psychotherapy works better than psychotherapy alone (ES = .35). HOWEVER, there is no difference at follow-up/maintenance. When restricted to CBT, the effect was even smaller (ES = .15). To be fair, I can't imagine that having any real-world significance, particularly in light of potential harm from medications.
Need to take a closer look at both.

To be clear, all of this rests on a blunt assumption about treatment: that the same treatment must be effective for all etiologies of depression. This is unlikely to be true.
 
  • Like
Reactions: 1 user
To be clear, all of this rests on a blunt assumption about treatment: that the same treatment must be effective for all etiologies of depression. This is unlikely to be true.

Not according to all of the people who misunderstand the limitations of Wampold's work and treat it like gospel.
 
  • Like
  • Love
Reactions: 2 users
I am not really sure if this meta is providing very different results than the one I posted earlier.

The Cuijpers 2014 meta implies that medication and psytx each contribute about half of the combined effect. In combination with the Cuijpers 2010 psytx+placebo meta, the results suggest roughly:

Condition            Improvement from baseline to post-test
waitlist             0.0
placebo              0.3
medication           0.6
psytx                0.6
psytx + placebo      1.5
psytx + medication   1.7


In other words, adding placebo increases the effect of psytx by almost .9, and since these effects are measured against waitlist, that's a 150% improvement over psytx alone (if I'm reading that right). It's an indictment of medication overprescription, but it's also an indicator that the effect of psytx alone could be VASTLY improved by simply having pts take a placebo. The question I raised earlier is whether or not an open-label placebo (no deception) might be recommended in the future for people receiving psytx. That's why I cited that pilot study, but the sample size was admittedly very small, and they didn't explicitly investigate this question.
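Spelling out that arithmetic with the rounded values above (again, my own working):

$$\frac{d_{\text{psytx+placebo}} - d_{\text{psytx}}}{d_{\text{psytx}}} = \frac{1.5 - 0.6}{0.6} = 1.5 = 150\%$$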
 
Not according to all of the people who misunderstand the limitations of Wampold's work and treat it like gospel.

I've been reading The Great Psychotherapy Debate lately -- as an aside, what would you say are the primary limitations of his work? I'm just finishing the section on efficacy (haven't started comparative efficacy yet).
 
I've been reading The Great Psychotherapy Debate lately -- as an aside, what would you say are the primary limitations of his work? I'm just finishing the section on efficacy (haven't started comparative efficacy yet).

Perhaps the biggest limitation in some of this work is the collapsing of wide ranges of both disorders and treatments into very heterogeneous groups. We know that different disorders respond differently to treatment in general, as well as to particular types of treatment. Collapsing these just erases those effects, so of course things look similar.
 
  • Like
Reactions: 3 users
The Cuijpers 2014 meta implies that medication and psytx each contribute about half of the combined effect. [...] The question I raised earlier is whether or not an open-label placebo (no deception) might be recommended in the future for people receiving psytx. That's why I cited that pilot study, but the sample size was admittedly very small, and they didn't explicitly investigate this question.

Seriously, go ahead and do it. Run your own pilot: vitamin C plus therapy for half your patients, therapy alone for the other half. Tell us what you find. I don't think it's a bad idea at all. (Although, based on the underwhelming pilot results, it may be the case that open-label placebo is not nearly as effective as deceptive placebo.)

But take the comparison to active drug out of the picture, because it's not relevant to your question.

Yes, active drug has more benefits and also more side effects than placebo. The question of whether the benefits outweigh the drawbacks in any given case is one that should be made by the individual patient and treating physician.

A patient with mild/moderate symptomatology who would otherwise be managed with therapy alone is a great candidate for adjunctive placebo. But arguing that we should replace active drug with placebo for the large population of individuals who are either too depressed or simply unwilling to engage in therapy, or who don't have access to effective psychotherapy, is not defensible.
 