0.3 Effect Size of Antidepressants, Implications?


aim-agm
I'm seeing a discussion of a new paper by Cipriani,[1] who seems to have wanted to show that antidepressants do work, that found an effect size of 0.3. This matches the 0.32 found by Kirsch,[2] who apparently wanted to find that antidepressants don't work. To translate the number, an effect size of 0.3 is a 0.3 SD difference in means = a 12 percentile difference = ~23% non-overlap = a 1-2 point change in HAM-D17.
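For anyone who wants to sanity-check that arithmetic, here is a minimal sketch of the conversions, assuming normal distributions and Cohen's U1/U3 definitions (my own illustration, not taken from either paper):

```python
from scipy.stats import norm

d = 0.3  # standardized mean difference (Cohen's d)

# Cohen's U3: the average treated patient sits at this percentile of the
# placebo distribution, i.e. roughly 12 percentile points above the median.
u3 = norm.cdf(d)                                   # ~0.618

# Cohen's U1: proportion of the combined area that does not overlap
# (comes out near the ~23% quoted above).
u1 = (2 * norm.cdf(d / 2) - 1) / norm.cdf(d / 2)   # ~0.21

print(f"U3 = {u3:.3f} ({(u3 - 0.5) * 100:.0f} percentile points above the median)")
print(f"U1 = {u1:.3f} (~{u1 * 100:.0f}% non-overlap)")
```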

Does this change any clinical practice for you?

1. Cipriani et al., The Lancet: http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)32802-7/fulltext

2. Kirsch et al., PLoS Medicine: Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration

 
Frankly I thought that article was inane--the question was asked and answered. They managed to avoid publication bias. Sort of. Hooray. The much more interesting analyses would be 1) a meta-interaction analysis, as the effect size for SEVERE depression is likely much bigger than 0.3, and 2) a meta-analysis of placebo vs. antidepressant in PREVENTING another episode once the patient is in full remission, likely also a number much higher than 0.3.

Also, the article shows that there's great variability around the average efficacy value. Is this a real finding? I don't think so. It's that TCAs were trialed on people who were much sicker.

If you read the existing APA guideline: for mild depression, medication is not indicated (therapy only); for moderate, either medication or therapy; for severe, medication + therapy. The current guideline seems pretty reasonable to me. No change in clinical practice is indicated. If you are handing out meds indiscriminately, it only means you are not following good evidence-based practice.

In community-based practice, whether antidepressants are better than placebo is almost a meaningless question, because in the vast majority of cases the patients you see are much sicker than the typical enrollee in most clinical trials, which exclude pretty much every kind of comorbidity. In pragmatic settings (e.g. STAR*D), placebo is not deemed an ethically valid comparator for efficacy evaluation.
 
To translate the number, an effect size of 0.3 is a 0.3 SD difference in means = a 12 percentile difference = ~23% non-overlap = a 1-2 point change in HAM-D17.

Does this change any clinical practice for you?
No change in practice, as this, per your post, just finds the same thing as a prior study.

While an effect size of 0.3 is small, as mentioned above it is likely an underestimate for more severe depression, where prescribing antidepressants is indicated. Also, in real practice, patients don't just get 0.3 better (for lack of a better phrase). That is, the 0.3 was the effect of the meds beyond that of placebo. Compared to not prescribing anything, meds are more effective than that.
 
I understand network meta-analysis at a very abstract level, but I don't have the statistical facility to know how to evaluate whether their assumptions were reasonable or their methods otherwise made sense. After googling NMA, it also seems to be an area of active statistical research (it's not "well understood" within the field).
 
In community-based practice, whether antidepressants are better than placebo is almost a meaningless question, because in the vast majority of cases the patients you see are much sicker than the typical enrollee in most clinical trials, which exclude pretty much every kind of comorbidity. In pragmatic settings (e.g. STAR*D), placebo is not deemed an ethically valid comparator for efficacy evaluation.


Yeah, the inclusion/exclusion criteria issue is what I think sinks many of the underlying studies in terms of ecological validity. If the population being tested is significantly different (and generally much less ill), you simply cannot lean too heavily on the results.

This is one of my favorite papers highlighting this issue:
Recruitment of depressive patients for a controlled clinical trial in a psychiatric practice (PubMed)

TL;DR: a busy community practice in Austria attempted to recruit patients for an antidepressant trial; exactly one patient actually qualified for the trial, and they declined to participate, so an effective recruitment rate of 0%.
 
I interviewed once at a clinic which primarily does medication trials for mood disorders. They used and re-used the same population of human lab rats who would start one study as soon as they were allowed to after finishing another, along with a sprinkling of new recruits. This seemed then, and now, like a form of scientific fraud.
 
I interviewed once at a clinic which primarily does medication trials for mood disorders. They used and re-used the same population of human lab rats who would start one study as soon as they were allowed to after finishing another, along with a sprinkling of new recruits. This seemed then, and now, like a form of scientific fraud.

And they were compensated for the studies, I assume? Much like VA studies that do not adequately control for validity, there is a lot of deeply flawed research out there. Especially when using subjects who monetarily benefit from not getting better.
 
I'm seeing a discussion of a new paper by Cipriani,[1] who seems to have wanted to show that antidepressants do work, that found an effect size of 0.3. This matches the 0.32 found by Kirsch,[2] who apparently wanted to find that antidepressants don't work. To translate the number, an effect size of 0.3 is a 0.3 SD difference in means = a 12 percentile difference = ~23% non-overlap = a 1-2 point change in HAM-D17.

Do you mind breaking down the steps further?

For others, note that you can't quantify the size of an effect as small, medium, or large based on just Cohen's d. Even Cohen warned about taking the arbitrary 0.2/0.5/0.8 cutoffs out of context. You can only call 0.3 a small effect if you have other interventions for the same ailment in the same population that have much bigger effect sizes when using the same methods of calculation.
 
Do you mind breaking down the steps further?

My bad, I should have worded that more clearly. The first part is just restatements of the same number in different metrics in case one is clearer (i.e. some might find overlap or percentile easier to use than effect size), but they are equivalent and you can find tables to convert them. The change in HAM-D was based on looking at the standard deviations for these studies (which tend to be 4 +/- 2 from what I saw) and multiplying by the effect size (0.3) to translate into a clinical effect of around a 1-2 point change.
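Spelling out that multiplication, with the SD values taken as the rough 4 +/- 2 range quoted above rather than anything from the paper:

```python
d = 0.3                 # drug-placebo standardized mean difference
sd_range = (2, 4, 6)    # rough HAM-D standard deviations quoted above (4 +/- 2)

# Back-translate the standardized effect into raw HAM-D points: d * SD.
for sd in sd_range:
    print(f"SD = {sd}: drug-placebo difference ~ {d * sd:.1f} HAM-D points")
# -> roughly 0.6 to 1.8 points, i.e. the "1-2 point" figure above
```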
 
My bad, I should have worded that more clearly. The first part is just restatements of the same number in different metrics in case one is clearer (i.e. some might find overlap or percentile easier to use than effect size), but they are equivalent and you can find tables to convert them. The change in HAM-D was based on looking at the standard deviations for these studies (which tend to be 4 +/- 2 from what I saw) and multiplying by the effect size (0.3) to translate into a clinical effect of around a 1-2 point change.

To follow up on this, this calculation is probably incorrect. Typically the outcomes of these studies are not mean differences on HAM-D but either remission (HAM-D < 10) or response (50% reduction). HAM-D is not a linear scale--the difference between 10 and 5 (both of which are in the remission zone) is not the same as between 20 and 15.

If you use remission rate, then the 0.3 number is more meaningful. 30% would be an odds ratio measure. This means that with placebo you have a 30% remission rate, and with a med you'd have 40%. This roughly corresponds to the commonly referenced number needed to treat (NNT) for common antidepressants (~10).
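As a quick back-of-the-envelope check of that NNT figure (the 30% and 40% remission rates are the illustrative numbers above, not trial data):

```python
placebo_remission = 0.30   # illustrative placebo remission rate
drug_remission = 0.40      # illustrative drug remission rate

# Absolute risk reduction and number needed to treat.
arr = drug_remission - placebo_remission
nnt = 1 / arr

# The same difference expressed as an odds ratio.
odds_ratio = (drug_remission / (1 - drug_remission)) / (
    placebo_remission / (1 - placebo_remission)
)

print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}, OR = {odds_ratio:.2f}")
# -> NNT = 10, OR ~ 1.56 for these illustrative rates
```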
 
My bad, I should have worded that more clearly. The first part is just restatements of the same number in different metrics in case one is clearer (i.e. some might find overlap or percentile easier to use than effect size), but they are equivalent and you can find tables to convert them. The change in HAM-D was based on looking at the standard deviations for these studies (which tend to be 4 +/- 2 from what I saw) and multiplying by the effect size (0.3) to translate into a clinical effect of around a 1-2 point change.

To follow up on this, this calculation is probably incorrect. Typically the outcomes of these studies are not mean differences on HAM-D but either remission (HAM-D < 10) or response (50% reduction). HAM-D is not a linear scale--the difference between 10 and 5 (both of which are in the remission zone) is not the same as between 20 and 15.

If you use remission rate, then the 0.3 number is more meaningful. 30% would be an odds ratio measure. This means that with placebo you have a 30% remission rate, and with a med you'd have 40%. This roughly corresponds to the commonly referenced number needed to treat (NNT) for common antidepressants (~10).

Yes, unfortunately, you can't do either of those manipulations using the effect size. Also, the reason I requested a breakdown of the steps is that I believe the interpretation isn't accurate. Do you mind posting the tables you used, and how you arrived at the other two statements, "12 percentile difference, ~23% non-overlap," or what you mean by them?

Do you mean a z table like this one? http://www.stat.ufl.edu/~athienit/Tables/Ztable.pdf

The best restatement of a 0.3 effect size that you could arrive at based on such a table would be that 61.79% of people in the placebo group have worse scores than the average person in the treatment group, and the overall best restatement would be that the probability that a randomly chosen person from the treatment group will be better off than a randomly chosen person from the placebo group is about 58%.
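A minimal sketch verifying those two restatements under the usual normality assumption (my own illustration, using scipy rather than a printed z table):

```python
from math import sqrt
from scipy.stats import norm

d = 0.3

# Cohen's U3: fraction of the placebo group scoring worse than the
# average person in the treatment group.
u3 = norm.cdf(d)              # ~0.618 -> ~61.8%

# Probability of superiority (common-language effect size): chance that a
# randomly drawn treated patient outscores a randomly drawn placebo patient.
cles = norm.cdf(d / sqrt(2))  # ~0.584 -> ~58%

print(f"U3 ~ {u3:.1%}, probability of superiority ~ {cles:.1%}")
```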

There are also other issues with HAM-D17 that I can write essays about, which make all these arguments moot. Even Hamilton despised this American version, yet it's now the standard in clinical trials. HAM-D6 is much more sensitive to change. Overall, this meta-analysis, like several others, is well-intentioned, but ultimately just another example of garbage-in garbage-out.
 
Almost forgot: These are short-term results. So unless there are other treatments for depression with effect sizes much higher than 0.3 within ~2 months of treatment, 0.3 is actually a bloody good effect, I'd say.
 
Almost forgot: These are short-term results. So unless there are other treatments for depression with effect sizes much higher than 0.3 within ~2 months of treatment, 0.3 is actually a bloody good effect, I'd say.
That raises a question I was going to pose: Who would fund large, rigorous research into all of the treatments that are supposed to help but have too little research to show conclusively that they do? For example, there are some studies showing curcumin has a significant benefit in depression, but they're small, flawed studies. If we're still trying to suss out whether drugs literally called antidepressants are in fact antidepressants 30-40 years after they came out, with billions riding on sales of generics alone, who would seriously research alternative treatments in a way that leads to results people trust?
 
As was mentioned before, even if antidepressants are just a little bit better than placebo, they are massively better than no treatment. So in real life your patients are getting dramatically more benefit than the difference between drug and placebo.
 
We already know SSRIs aren't the end all be all in happiness pills, but the NNT for statins (also dispensed like candy) is like 30. Comparative win!
 
Who would fund large, rigorous research into all of the treatments that are supposed to help but have too little research to show conclusively that they do?
Seemingly no one, unfortunately.
 
As was mentioned before, even if antidepressants are just a little bit better than placebo, they are massively better than no treatment. So in real life your patients are getting dramatically more benefit than the difference between drug and placebo.
I see what you're saying--that taking any pill has a beneficial effect, whether placebo or SSRI, so the difference between placebo and SSRI is less significant than the difference between a pill and nothing. But you obviously can't knowingly give a patient a placebo. And while you're not saying this, I might extrapolate that part of the benefit of giving a patient an SSRI is a placebo effect.

My question then is whether you can study the effect of no treatment to see whether this theory for giving out SSRIs holds true. For example, you would have patients come to an office and be administered either an SSRI or a placebo, or be told that they will not be administered a drug as part of the study. You could also add a fourth group where you tell them they're being given a placebo. I know it no longer truly is a placebo pill, but if the point is that it's a pill that makes the difference, any pill, it might have some interesting results, and it would certainly be interesting to compare against an actual placebo to get a sense of what causes the placebo effect (the act of taking a pill or believing a pill is a drug). If you're giving SSRIs just because any pill works, it might as well be the most innocuous one you can make.
 
If you read the existing APA guideline: for mild depression, medication is not indicated (therapy only); for moderate, either medication or therapy; for severe, medication + therapy. The current guideline seems pretty reasonable to me. No change in clinical practice is indicated.

Aren't both antidepressants and therapy equal options for mild to moderate depression (or at least aren't antidepressants an option) in the guidelines?
 
antidepressants ~ psychotherapy > placebo >>>> no treatment*

It's not ethical to offer placebo, so I'm gonna stick with antidepressants and psychotherapy.

*This list is incomplete (e.g. ECT)
 
Aren't both antidepressants and therapy equal options for mild to moderate depression (or at least aren't antidepressants an option) in the guidelines?

That's the common teaching, which I think is based on the fact that placebo is more effective for mild-to-moderate depression patients compared to the severe, despite the fact that medication is equally effective between the two groups. So there's less of an effect size in mild-to-moderate depression, but that doesn't mean they would have gotten better with no medication. As other people in this thread mention, it's not like we're going to start prescribing placebo. The other issue is that we don't have great controls/placebos for psychotherapy, so it's hard to make blanket statements about that.

The reality is that there's been an increased placebo response across many specialties and treated symptoms (such as pain). This could be a reflection of increased public faith in pills, or of heightened awareness of/belief in a cure for symptoms that people would once have ignored until they went away on their own but for which they now pursue clinical treatment.
 

I thought this chart comparing efficacy/acceptability was interesting food for thought. It also provides good justification for starting with escitalopram as the best trade-off between efficacy and tolerability (and then moving "up" to a more efficacious one if ineffective or "down" to a better-tolerated one if there is a serotonergic adverse effect).

[Attached chart from the Cipriani et al. paper comparing efficacy and acceptability of individual antidepressants (gr4.jpg)]
 
This is an interesting article that asserts that when you include negative-result studies, the effects of antidepressants are worse than shown in the commonly reported meta-analyses; that the apparent greater effectiveness in the severely depressed is actually the disappearance of the placebo effect in that group; it also points out the short duration of clinical trials and questions whether it is ethical to unveil the truth that placebo pills compare favorably to antidepressants, given the effectiveness of antidepressants as placebos:

Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?
 
Shedler has an interesting paper citing meta-analyses of the efficacy of psychotherapy for depression. CBT had an effect size of something like 0.68-1.0, versus around 0.31 for antidepressants.

http://jonathanshedler.com/PDFs/Shedler (2010) Efficacy of Psychodynamic Psychotherapy.pdf

Don't trust the opinions of someone who puts his picture in his article. Those effect sizes are not comparable.

Read my post above, read a little bit more about meta-analyses and effect sizes, then read the original psychotherapy meta-analysis "book" published in 1980 (a scanned version is online).
 
Don't trust the opinions of someone who puts his picture in his article. Those effect sizes are not comparable.

Read my post above, read a little bit more about meta-analyses and effect sizes, then read the original psychotherapy meta-analysis "book" published in 1980 (a scanned version is online).

Which post (there are a couple)? It looks like he's citing a number of meta-analyses since 2000, but maybe those meta-analyses are citing older works? I assume that in the 1980 book, therapy doesn't compare as favorably to SSRIs?
 
Aren't both antidepressants and therapy equal options for mild to moderate depression (or at least aren't antidepressants an option) in the guidelines?
I think the current thinking is that for mild depression the first-line treatment is psychotherapy, for moderate it could be either, and for severe depression the first line would be medication. That doesn't mean you don't use antidepressants for mild or psychotherapy for severe, but it is definitely a good general guideline. I also explain it to patients this way, although one has to be careful about how one explains it, as the moderately depressed are always going to think that they have severe depression.
 
I think the current thinking is that for mild depression the first-line treatment is psychotherapy, for moderate it could be either, and for severe depression the first line would be medication. That doesn't mean you don't use antidepressants for mild or psychotherapy for severe, but it is definitely a good general guideline. I also explain it to patients this way, although one has to be careful about how one explains it, as the moderately depressed are always going to think that they have severe depression.

Everything you said is correct, except that for severe, the first line is COMBINED medication + therapy. In community settings, therapy, esp. evidence-based, high-quality therapy, is hard to get and people don't necessarily want it. But that's a separate issue.
 
Everything you said is correct, except that for severe, the first line is COMBINED medication + therapy. In community settings, therapy, esp. evidence-based, high-quality therapy, is hard to get and people don't necessarily want it. But that's a separate issue.
Thanks for the correction. I do tend to forget that the basic behavioral activation and social interaction that we are working on when a patient is severely depressed is psychotherapy. It just seems so automatic to me, but if it was really so easy to get someone in that state moving and interacting, then they wouldn't pay me the big bucks. :D
 