Antidepressants & Black Box Warning Controversy

Cross-posted in the Psychology forum.

Thought it might be interesting to some.
http://www.pharmalot.com/2013/02/the-op-ed-antidepressants-controversial-studies/

Thanks for the article... very little there (especially regarding efficacy) that anyone doesn't already know.

Antidepressants aren't very good for depression. Whether they are of no value whatsoever or barely above placebo is a fair debate. What they do appear to have more efficacy for, in *some* populations, is certain types of anxiety spectrum d/os... I will use them in those cases and have higher expectations.
 
It's a bit more nuanced than that. Efficacy data in mildly depressed populations is poor. In moderate depression it's marginally better. In severe depression, it clearly separates from placebo in a meaningful way.

Don't tell me you refuse to prescribe antidepressants to a person with severe melancholic depression. If you do, that's shameful and malpractice.
 
In 'very severe' depression there is some separation from placebo... but the *vast* majority (with vast being an understatement) of antidepressants prescribed are not to patients with severe melancholic depression.

In moderate depression there is a good bit of debate over whether it is even marginally better. You can cherry-pick studies to suggest it is... but looking at the whole picture (including all the studies that big pharma just never submitted), it becomes clear that there is likely little to no benefit.
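The unsubmitted-trials point is essentially the file-drawer effect, and a small simulation makes it concrete. This is a toy sketch only — all effect sizes and trial counts are invented, not taken from any real drug program:

```python
import random
import statistics

random.seed(1)

def run_trial(true_effect, n_per_arm=50):
    """One placebo-controlled trial; returns (observed effect, significant?)."""
    drug = [random.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
    placebo = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = (2 / n_per_arm) ** 0.5         # SE of a difference of means, sigma = 1
    return diff, abs(diff / se) > 1.96  # two-sided z-test at p < .05

true_effect = 0.15  # a small true benefit, in standard-deviation units
results = [run_trial(true_effect) for _ in range(1000)]

all_pooled = statistics.mean(d for d, _ in results)
file_drawer = [d for d, sig in results if sig]  # only "positive" trials get submitted
pub_pooled = statistics.mean(file_drawer)

print(f"true effect:           {true_effect:.2f}")
print(f"all 1000 trials:       {all_pooled:.2f}")  # close to the truth
print(f"submitted trials only: {pub_pooled:.2f}")  # noticeably inflated
```

Averaging every trial recovers roughly the true (small) effect; averaging only the trials that cleared significance inflates it severalfold, which is exactly why the withheld studies matter.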
 
A recent Gibbons meta-analysis in the green journal a few months ago found pretty clearly that severity really didn't modulate effect all that much. A study by Thase a little while back had similar findings.

Studies on mood relapse prevention for SSRIs are quite robust. Even when not helpful acutely, they appear very helpful for preventing recurrence.

If your criterion for "working" is that they completely cure all patients who take them (including giving somebody a job, a new girlfriend, sobriety, etc.), then, yes, they're miserable.

If your criteria include things like suicide prevention, disability, and relapse prevention, then our meds aren't so incomparable in effect size to plenty of interventions throughout medicine.

"Antidepressants don't work" is again a lazy, overly simplistic narrative, never mind the fact that we still have TCAs and MAOIs at our disposal which have robust effects with high side effect burdens. None of these meds make everything "all better" any more than statins prevent all heart attacks or beta blockers prevent all strokes or metformin cures diabetes or steroid inhalers cure asthma.

Nobody expects a cardiac med to give you a new heart or an inhaler to give you new lungs, so I don't know why people should expect an antidepressant to give you a new brain. Set appropriate, clinically meaningful outcomes, and there are plenty of studies that show antidepressants are useful. Prescribe fewer antidepressants, and you see the suicide rate go up. That seems like a big deal to me. The FDA gave us that little natural experiment with children after the black box warning sent pediatricians running from prescribing ADs.

Of course, if you take the easy way out and just look at meta-analyses of crappily performed pharma studies, then things don't look so good.
 
What about just severe depression with suicidality? Answer the question. Do you refuse to treat anything but very severe depression with antidepressants?

Surely you are conflating SSRI data with all antidepressants as well. You know, or should, that some TCAs and all MAOIs are superior to SSRIs. Even SNRIs are superior.

To dismiss all antidepressants as ineffective is faddish and evidence of a failure to think critically.

EDITED TO ADD: This is directed to Vistaril.
 
What about mirtazapine? I know very little about antidepressants, but in a lecture a psychiatrist mentioned it has some of the most convincing data to support its use in the proper circumstances. (This lecture was several months ago, so I might be remembering wrong.)
 
Of course I use a ton of antidepressants... including a ton on pts with adjustment d/o with depressed mood, dysthymic d/o, etc.

I've read all the major studies out there for antidepressants. Multiple times. You can create an argument that SNRIs (especially Cymbalta) are a little better, but you can also argue (look at the way those studies were set up) that the data don't really prove they are.

I think it's more likely that the people piecing together bits of data using specific drugs (or specific combinations of drugs) under various conditions to support their case are the ones not thinking critically... if you do enough different trials asking different questions in different ways, you're going to eventually get a few signals above noise... that doesn't mean it is good science in any way.
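The "enough trials will eventually produce a few signals above noise" point is just the multiple-comparisons problem, and a toy simulation shows it directly. Everything here is invented: a drug with exactly zero true effect, tested across 100 hypothetical subgroup/endpoint slices:

```python
import random

random.seed(0)

def subgroup_comparison(n=30):
    """One drug-vs-placebo slice where the drug truly does nothing."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(drug) / n - sum(placebo) / n
    se = (2 / n) ** 0.5
    return abs(diff / se) > 1.96  # "significant" at p < .05

# 100 different subgroup/endpoint/timepoint slices of a null drug
hits = sum(subgroup_comparison() for _ in range(100))
print(f"'significant' signals from pure noise: {hits} / 100")
# On average, about 5 of these will clear p < .05 by chance alone.
```

Run enough slices and you are guaranteed a handful of "positive findings" even when nothing is there, which is why an uncorrected positive subgroup result is weak evidence on its own.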
 
Mirtazapine is a marvelous antidepressant, as long as the patient won't be affected by a damn-near guaranteed 30 lb weight gain. Unfortunately, that applies to precious few Americans right now.
 
Like with a lot of these, the evidence is mixed... a big paper a few years back comparing it to most of the major ADs showed that it was maybe a tad better than (but mostly indistinguishable from) SSRIs/SNRIs... compared to TCAs, it was shown to be equal to doxepin and Elavil, and maybe a tad worse than (but again mostly indistinguishable from) Tofranil... of course, past studies comparing many of the above (with Remeron removed from the mix) don't completely jibe with that.

So I wouldn't say there is convincing data to pick it above anything else. If it were really better than other drugs, we would probably be using it a lot more than we are.
 
I have yet to read a recent Gibbons article that didn't have shoddy statistics or where the numbers in the paper did not support the conclusions. When I did my MPH, one of my classes used a number of his papers for us to tear apart, because they are such brilliant examples of bad science. I particularly liked the one in Archives where they claimed there was no association between antidepressants and suicidality when the data (the bits that actually made sense; some were highly suspect) actually rejected the null hypothesis. It just goes to show that any rubbish can get published in top journals when it tells people what they want to hear, because the statistical competence of many peer reviewers is wanting.

My favorite meta-analysis for antidepressants is the Cipriani et al. multiple-treatments one. They also did a brilliant meta-analysis using the same methodology for anti-manic agents and showed that antipsychotics perform much, much better than anticonvulsants in acute mania.
 
But we knew that at the time... I know that before it came out, I certainly wasn't throwing pts on Depakote monotherapy in acute mania, for example... I don't think many people were doing that.
 
Mirtazapine in conjunction with venlafaxine was shown to be equally effective as either tranylcypromine or phenelzine (can't remember which). Of course there is the weight gain concern, but weighed against severe, suicidal, ineffectively treated depression, it's probably worth consideration.

Hell, there are other "atypical" antidepressants out there, like nefazodone, which, if it doesn't bonk your liver, is a good drug. There are also significant differences among the TCAs that can be useful; desipramine, for example, is almost a pure NRI and has fairly compelling data showing that it is activating and seems to increase goal-directed and meaningful activity.

I guess my response to Vistaril's complaint about the studies being too specific is as follows. Depression is VASTLY heterogeneous. There are probably dozens of flavors of it, and if you took a large population that meets DSM criteria for a major depressive episode and ran a latent class analysis on their symptom clusters, you would quite likely find a variety of different subsets. The various antidepressant classes work in different ways, and quite frankly in ways we don't understand, for the most part, beyond the direct receptor effect. So, if you have lots of kinds of major depressive episodes and lots of different treatment modalities, doesn't it make sense to do studies of subsets of patients with specific combination treatments?

I believe it does. However, to make the research useful, you need to stop and think about what flavor of depression your patient has and what type of medicine or combination of medicines might be helpful. That takes time, effort, and mental energy. It takes a meaningful relationship with the patient that transcends symptom checklists. One cannot accomplish that with a high volume practice model which you have described as "grinding." That model does not allow for the nuanced understanding of the patient's problem and therefore CANNOT allow a nuanced treatment plan. It, as designed, will fail most patients. That is why many of us respond so negatively when you fetishize this high volume model and brag about doing it. It's just plain bad psychiatry.
 
I have yet to read a recent Gibbons article that didn't have shoddy statistics or where the numbers in the paper did not support the conclusions.
You sound so David Healy-like! (it's cute when you UK'ers stick together) :naughty:

Gibbons has his detractors, but the idea that he does shoddy work just isn't correct. Generally speaking, the measures he's using, while imperfect, are still better than the crap he's being compared to. The fact that the editors at Archives keep publishing him might mean he knows what he's doing. Might not, but hey, I've done my epid degree too. He's still using prospective data, which is more than you can say about most of the things he's refuting.
 
Leaving efficacy aside....here is some info about suicidality and SSRIs I pulled from a presentation circa 2007. YMMV on newer data, but I thought this was pretty decent.

*Meta-analysis of 40,000 individuals Rx’ed ADPs in 477 RCTs.
*Suicide, suicidal ideations, and thoughts of self harm were measured
*16 suicides, 172 self-harm, and 177 episodes of suicidal thoughts.
*No difference in risk among ADPs (citalopram, escitalopram, fluoxetine, fluvoxamine, paroxetine, sertraline).
*Overall risk of suicide across both arms (placebo and ADP) of the trials: 39/100,000 (as opposed to 10.6/100,000 in the US population at large)
*No clear risk of increased suicide assoc. with SSRIs, but cannot be ruled out.

Gunnell, Saperia, & Ashby (2005). SSRIs and suicide in adults. Brit Med J. 330, 385-390.
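Taking the bullet-point figures above at face value, the rates are easy to sanity-check. (Using the rounded 40,000 denominator gives 40 per 100,000 rather than the quoted 39; the exact trial N was presumably slightly larger.)

```python
# Sanity-check the quoted figures (all inputs taken from the post above).
suicides = 16
participants = 40_000  # rounded total across the 477 RCTs

rate_per_100k = suicides / participants * 100_000
print(f"trial suicide rate: {rate_per_100k:.0f} per 100,000")  # 40, vs. the quoted 39

us_base_rate = 10.6  # per 100,000, US population at large, as quoted
print(f"ratio vs. US base rate: {rate_per_100k / us_base_rate:.1f}x")  # 3.8x
```

The roughly fourfold elevation appears in both the drug and placebo arms, consistent with depressed trial participants simply being a higher-risk group than the general population rather than with a drug effect.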
 
Theoretically this might be true, but do you *really* believe this is what these studies do? Heck no... they're taking advantage of the gobs of different data sets out there, across all the different subsets, and finding some that stick (and usually barely at that)... that's also why you see so many discordant results when comparing what should be fairly similar subsets over time across individual studies.
 
I do not believe that they necessarily prospectively set out to do so. However, a finding is a finding, and if there is a chance that it might help my patient(s), I want to know about it, understand why it might be, and consider including it in my practice.

Since you are so enamored with other medical specialties and their practice models, think about oncologists. They routinely use treatment regimens with fledgling bodies of evidence on patients with severe disease that has little hope of recovery. We're not so dissimilar. At least the population that I see in my university clinic is very sick, very complex, and has been ill for quite a long time. The garden-variety illness has been largely managed by primary care physicians or community providers. The sickest of the sick come here. They deserve whatever we can give them, even if the evidence is fledgling. So, I try to be a good doctor to them.

Remember, in the modern instantiation of the Hippocratic Oath, one swears not to engage in therapeutic nihilism. I have trouble with that sometimes, because it seems like some of our patients are untreatable. But, as physicians we absolutely are duty bound to them to do whatever we can, even if we have doubts. If that means reading small studies and taking findings that are relatively weak and trying them out in clinical practice, then I think it's the right thing to do.
 
The problem is that good science dictates that if you throw a lot of stuff against the wall and something maybe sorta sticks a little, the appropriate thing to do is to collect what sorta stuck a little (but probably not really) and then really put that under high scrutiny and see if it holds up to real tests.

But there is no real incentive to do that... especially not for the pharma company, which is just happy to get its FDA approval and then market and milk the drug for what it can before moving on to the next one. Take Cymbalta... one of the 'supporting' studies (sponsored by Lilly, of course) involved a couple hundred patients with one dx (they had to meet MDD criteria), a very specific age (over 65!), and a very specific duration (8 weeks... wonder what happened after that?)... Much like was discovered and published later, one wonders how many other similar trials Cymbalta ran (and just didn't submit) for different age groups, different dx, etc.

So then you have to pick between believing that these sorts of slices (when the results are pretty darn unstriking to begin with) represent something real (especially when they are rarely followed up with more power), or believing that a more cumulative picture tells a better story...
 
Listen, we're on the same page with respect to the corruption and overall very poor quality of drug research. Some of the only good research comes out of the VA. They actually study old drugs, and it's not sponsored by anyone but the gov't.

Here's my problem with the large meta-analyses. We agree that the small studies are ****. It's hard for me to believe that putting a whole bunch of ****ty studies together in order to make them more powered is going to make them any more valid.

Where we MAY differ is how we use that crappy data in clinical practice. I'm not going to stop offering a patient Cymbalta just because a large meta-analysis said it's no better than drug X. I probably won't offer it first, but if they've been on 3 SSRIs +/- bupropion and couldn't tolerate venlafaxine at actual SNRI doses (i.e., 225 mg daily) due to sexual side effects, you can damn well bet I'm prescribing the Cymbalta and filling out the prior authorization form. The reason is that the population-based data is absolutely irrelevant to whether the patient in front of me responds. We have a limited toolkit and a clear goal: help the patient get better. As long as I'm not doing harm with an agent and there exists a potential for clinical benefit, I'll try it. My patients understand my skepticism about the research, but also the fact that if they are a responder, the research doesn't matter.

I have prescribed all sorts of crap that has really modest evidence for efficacy, and damnit, sometimes it works.

If we can agree that the research is corrupt and ****ty, let's try to understand what we can from it, with a skeptical eye, and then try to help our patients. You can reject the validity of the research enterprise while still entertaining the notion that the medication might actually help. It's not categorical and it's not comfortable, but it is the truth.
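The worry that pooling weak studies buys power but not validity can be made concrete with standard inverse-variance fixed-effect pooling on toy numbers: if every study carries the same systematic bias, pooling shrinks the standard error while leaving the bias untouched. All values below are invented for illustration.

```python
import random

random.seed(2)

true_effect = 0.10
bias = 0.20  # shared systematic bias (unblinding, selective reporting, ...)

# 50 small studies: each reports the biased effect plus sampling noise, SE = 0.15
studies = [(random.gauss(true_effect + bias, 0.15), 0.15) for _ in range(50)]

# Standard inverse-variance fixed-effect pooling
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled estimate: {pooled:.2f} (SE {pooled_se:.3f})")
# The pooled SE is ~7x smaller than any single study's, but the estimate
# sits near true_effect + bias = 0.30, not near the true 0.10:
# pooling buys precision, not validity.
```

A meta-analysis of fifty identically flawed trials reports a very tight confidence interval around the wrong number, which is exactly the objection to treating pooled results as automatically more trustworthy.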
 
Well, of course some of my patients get better (for a while) after I run through the same algorithms many of us use, and it just so happens that on trial #3 or 4 or whatever they get some response... but I don't necessarily attribute most of those cases to the medication itself. I suspect that if I had just reversed the order along the same timeline (i.e., started with Cymbalta and gone backwards), they probably would have ended up responding to Prozac or whatever... just because, for whatever reason, at that point in their life they were less depressed. And then, of course, the real test is whether or not that symptom improvement lasts (or whether it just represents one data point in the natural fluctuations of their moods). And even if it does last, that isn't proof of much, because maybe it was a single episode that was bound to go away and not recur.
 
Leaving efficacy aside....here is some info about suicidality and SSRIs I pulled from a presentation circa 2007. YMMV on newer data, but I thought this was pretty decent.
The black box warning about suicidality is for children, adolescents, and young adults. The study you cite is for adults only.
 
I have yet to read a recent Gibbons article that didn't have shoddy statistics or where the numbers in the paper did not support the conclusions.
I was hoping to stimulate a discussion about the journal preventing valid criticism of a controversial article.

As has been mentioned, there has been a lot of critique of Gibbons' work, and one would think that one of the highest-tier psychiatry journals would make room for critique of their data analysis.
 
Off topic: I did not realize cross-posting is not allowed. This definitely is not the first time I have cross-posted in the two psych forums. Also, there have been numerous times when someone re-asked a question after not receiving enough feedback on one of the forums.

I think the same link would lead to two very separate discussions depending on the forum. I understand the need for the rule, but I do not think it needs to be applied in all situations; some flexibility may be beneficial.
 