Antidepressants No Better than Placebo Except in Severely Depressed

This new Meta-Analysis is all over the major news outlets.
http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0050045
http://www.time.com/time/health/article/0,8599,1717306,00.html?imw=Y
http://thelastpsychiatrist.com/2008/02/yet_another_study_on_antidepre.html#more

Here's a quick summary:

"Researchers got hold of published and unpublished data from drug companies regarding the effectiveness of the most common antidepressant drugs. Previously, when meta-analyses have been conducted on only the published data, the drugs were shown to have a clinically significant effect. However, when the unpublished data is taken into account the difference between the effects of drug and placebo becomes clinically meaningless — just a 1 or 2 point difference on a 30-point depression rating scale — except for the most severely depressed patients. Doctors do not recommend that patients come off antidepressant drugs without support, but this study is likely to lead to a rethink regarding how the drugs are licensed and prescribed.

I was wondering what some reactions are to their findings?
I'm particularly intrigued by some of the points I've heard:

1) It's not that antidepressants don't work, it's that they generally don't work better than placebo. Should this inform clinical practice by psychiatrists?

2) If placebo is effective 35% of the time, is there an ethical way to take advantage of this? Prescribe nutritional supplements or very low-dose antidepressants (to get the placebo response without the side effects)?

3) Antidepressants don't work better for the severely depressed, but placebo works less well for this subgroup. Why the discrepancy?
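For perspective, here's a rough back-of-the-envelope conversion of that 1-2 point difference into a standardized effect size (Cohen's d). The 8-point standard deviation for the Hamilton scale is an assumed, illustrative value, and d >= 0.5 is the conventional clinical-significance threshold the study reportedly applied:

```python
# Rough illustration: converting a raw drug-placebo difference on a
# depression rating scale into a standardized effect size (Cohen's d).
# The SD of 8 points is an assumed, illustrative value.

def cohens_d(mean_diff, pooled_sd):
    """Standardized mean difference: raw difference / pooled SD."""
    return mean_diff / pooled_sd

ASSUMED_SD = 8.0           # assumed spread of Hamilton scores
for diff in (1.0, 2.0):    # the 1-2 point drug-placebo gap cited above
    d = cohens_d(diff, ASSUMED_SD)
    print(f"{diff:.0f}-point difference -> d = {d:.2f}")

# Under this assumed SD, a 1-2 point gap gives d between about 0.12 and
# 0.25, well short of the conventional d >= 0.5 threshold.
```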

 

I am actually surprised there is so much fuss about this...

How about this for an explanation:

a) we have known for a long time now that anti-depressants do not improve mood of those who are not depressed
b) the placebo effect is pretty universal: ie, give a sugar pill to anyone suffering from anything, and there is about thirty per cent chance they will FEEL better
c) antidepressants in those who are "less severely depressed" simply exert the sugar pill effect
d) and if you are ready....here is the revelation: depression is over-diagnosed, so people we already know would not benefit from antidepressants are being given antidepressants, and (shockingly!) antidepressants do no better than placebo in those people. Hey, it is easier to give 60-year-old, recently bereaved Mrs Smith a pill than to spend some time talking to her and finding out whether she has sufficient social network support (because if you find she does not, then you will feel obliged to try to arrange something for her, and you could have seen three more Mrs Smiths in that time, and given each of them a pill - and chances are, one of them will feel better afterwards!).
 
I agree. "Sadness" or "tired of life" is grossly overdiagnosed as depression. These same people tend to have very poor coping mechanisms. Antidepessants do work for the right patient.

I'm leery of highly publicized negative-result studies nowadays. Everyone yells bias when a pharma company puts out a positive result for its own drug. Nobody yells bias when the opposite happens. Does no one believe the government has a vested interest in proving that Trilafon is as effective as Seroquel?

In this study, do psychologists have no interest in proving that their non-medication techniques should be just as sought after as medications for the treatment of depression? I'm not even sure all the investigators were non-physicians, but just wondering...one researcher is from the "Institute for Safe Medication Practices." Maybe it's too many x-files episodes...just saying.
 
I haven't read this study yet but it flies in the face of decades of research.

2) If placebo is effective 35% of the time, is there an ethical way to take advantage of this? Prescribe nutritional supplements or very low-dose antidepressants (to get the placebo response without the side effects)?

In western medicine--I don't think so. We are bound to educate our patients and be honest about their treatments. We could also be meeting one of the criteria for malpractice by giving a placebo--failing to provide standard-of-care treatment.

In other countries, it's actually allowable to keep the patient in the dark under certain circumstances. E.g., in Eastern countries such as Japan & Korea, if the family is for it, it's allowable not to tell someone that they have, for example, a terminal illness.

I'm going to have to scrutinize this study. I do though have to take one jab at it. It mentions it includes the results of unpublished as well as published studies.

Several studies are unpublished because they do not meet strict standards of research integrity & may be filled with errors. I did pharmacological research for over a year. Several of my own studies I intentionally would not submit because I could tell something was wrong with the study--e.g., the blood pressure cuff was not working properly. We were monitoring the effects of angiotensin inhibitors on blood pressure, and since we knew that medication did indeed work, if the machines weren't showing a difference, then the machine may have been broken.

I'm not going to try to make some bogus contrarian claim that ACE inhibitors don't affect blood pressure because our stupid 30-year-old blood pressure cuff for rats was busted & the stupid organization I was working at was too cheap to replace it.

The researchers were able to track down comprehensive unpublished trial results from the drug makers themselves before the drugs were authorized for sale in the U.S., and include them in their review of the literature.

While it is true that some well-conducted studies are not published because they do not yield a significant result (who wants to publish those? doctors only want to spend time on studies with significant results--where the "null hypothesis" is found to probably not be true), several are not published because they simply are bad studies that were poorly conducted.

A purely superficial analysis, and I admit I need to really read the full study: If this study did pull in bad research, then to not find much of a difference between antidepressants vs placebo is to be expected. However if it only included well conducted studies as the "unpublished" data, then this article needs to be at least seriously considered & questions need to be brought up as to why its data differs from the data of so many other studies.

We need to read this study to scrutinize it further, and find out why several of these studies were not published.

Another thing to consider is several well respected 3rd party organizations such as the NIMH have done their own research showing that antidepressants do have superior efficacy over placebos. So why then is this study contradicting those other studies?
 
I agree. "Sadness" or "tired of life" is grossly overdiagnosed as depression. These same people tend to have very poor coping mechanisms. Antidepessants do work for the right patient.

I'm leary of highly publicized negative result studies nowadays. Everyone yells bias when a pharma company puts out a positive result based on their drug. Nobody yells bias when the opposite happens. No one believes the government doesn't have a vested interest in proving that Trilafon is as effective as Seroquel?

In this study, do psychologists have no interest in proving that their non-medication techniques should be just as sort after medications for the treatment of depression? I'm not even sure all the investigators were non-physicians, but just wondering...one researcher is from the "Institute for Safe Medication Practices." Maybe it's too many x-files episodes...just saying.

Is it really possible to watch too many episodes of the X-Files?
 
I need to read the study further, but it sounds like the trials used were not doing multiple attempts with different drugs... i.e., when one drug failed, they didn't try the person on a second drug. I bet antidepressants would show better efficacy if more than one is tried in a series and compared to placebo.
 
Alright, I read it. Time to put my research cap on.

The Good:

1) Good quality statistics. They constructed their own criteria and used it to fish through the trials to determine a score to use for the meta-analysis.

2) They aren't claiming that antidepressants don't work; they are claiming that placebo works less well in severe cases.

The Bad:

1) No one realistically expects 1 antidepressant to work instantly. You have to switch to different drugs and try them out. A meta-analysis of a series of drugs vs a series of placebo might yield better results.

2) Did I fall off the planet, and tricyclics are not antidepressants any longer? So you would include nefazodone (which is not technically an SSRI because it inhibits norepinephrine reuptake as well as serotonin) but you would not include the classic amitriptyline. Then you can't make the claim that "antidepressants" and placebo are equivalent at moderate severity.

Flawed meta-analysis. It went too far with its conclusions and should have restricted itself to SSRIs, claiming that SSRIs might not be the answer for moderate depression and that a series of SSRI trials or a different class of drugs should be tried instead.

Conclusion: Ignore the study.
 
Too late--it's already been blasted out to the media as "Antidepressants don't work". :rolleyes:

Thank you, Church of Scientology!

The BBC did an entire 3-minute story on this article and at NO time did they ever mention that the study did show antidepressants work in more severe depression. They only kept saying "the study shows that antidepressants work no better than placebo." :mad: Without even getting into the many flaws of the study, I think it is not only irresponsible not to report the full findings, but borders on criminal behavior. At least other news networks such as MSNBC mentioned that the study only showed antidepressants may not work in mild cases of depression. Think about all the people who hear about this study and stop taking their medications...
 
I need to read the study further, but it sounds like the trials used were not doing multiple attempts with different drugs... i.e., when one drug failed, they didn't try the person on a second drug. I bet antidepressants would show better efficacy if more than one is tried in a series and compared to placebo.

See STAR*D. The data aren't terribly compelling... they aren't as bad as the meta-analysis might suggest, but they're not exactly anything to jump up and down over.

Anasazi said:
I agree. "Sadness" or "tired of life" is grossly overdiagnosed as depression. These same people tend to have very poor coping mechanisms. Antidepessants do work for the right patient.

While I agree with this, in a general sense, there are threshold criteria employed that are used in the selection of participants for clinical trials. We can't say for certain, without reading the specific method for each trial included in the meta-analysis, but it is highly likely that participants met criteria for major depression as per structured clinical interview and that they had surpassed some threshold for depression symptom severity prior to being deemed eligible for participation in the FDA clinical trials.

Faebinder said:
They constructed their own criteria and used it to fish through the trials to determine a score to use for the meta-analysis.

Not sure what you are referring to, but it looks like they used a standard Cohen's d statistic, which is entirely acceptable. Remember, effect size estimates are more meaningful than p-values, which are highly dependent upon sample size and give no indication of the strength of effect (e.g., small, medium, large).
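To make that concrete, here's a toy sketch (invented numbers) showing how a p-value shrinks as the sample grows while Cohen's d stays fixed:

```python
# Sketch: the same effect size gives very different p-values at
# different sample sizes. All numbers are invented for illustration.
import math

def effect_and_p(mean_a, mean_b, sd, n):
    """Cohen's d and an approximate two-sided z-test p-value
    for two equal-sized groups sharing one SD (toy setup)."""
    d = (mean_a - mean_b) / sd              # effect size: no n anywhere
    se = sd * math.sqrt(2.0 / n)            # SE of the mean difference
    z = (mean_a - mean_b) / se
    # two-sided p from the normal CDF, Phi(z) = 0.5 * (1 + erf(z/sqrt(2)))
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return d, p

# Same 2-point gap (SD = 8) at three sample sizes per arm:
for n in (20, 200, 2000):
    d, p = effect_and_p(11.0, 9.0, 8.0, n)
    print(f"n per arm = {n:4d}   d = {d:.2f}   p = {p:.4f}")
```

The effect size stays at d = 0.25 throughout, while the p-value drifts from nonsignificant to vanishingly small purely because n grows.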
 
Not sure what you are referring to, but it looks like they used a standard Cohen's d statistic, which is entirely acceptable. Remember, effect size estimates are more meaningful than p-values, which are highly dependent upon sample size and give no indication of the strength of effect (e.g., small, medium, large).


Which is the entire reason why you do a meta-analysis in the first place.

Anyway, even if the Cohen's d statistic is not their own scoring (I don't even know what a Cohen's d statistic is), at least it's objective and they describe their process pretty well. (I presume it's widely accepted, but I'm not in a position to judge its acceptance, just its objectivity.)

You might criticize them for using the fixed-effects model, which doesn't account for between-study heterogeneity, but that would matter more if they were trying to prove a difference. In their methods, they claim to have looked at both models, and the results were apparently similar but displayed using the fixed-effects model (for simplicity, according to them).
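For reference, the standard fixed-effect model pools estimates with inverse-variance weights, so larger (more precise) studies automatically count for more; its limitation is that it assumes all trials estimate one common effect. A minimal sketch with toy numbers:

```python
# Minimal fixed-effect (inverse-variance) meta-analysis on toy data.
# Each study contributes an (effect estimate, standard error) pair;
# the numbers below are invented for illustration.

def fixed_effect_pool(studies):
    """Pool effects with inverse-variance weights.
    Returns (pooled effect, standard error of the pooled effect)."""
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return pooled, se_pooled

# (effect size d, standard error): smaller SEs mean bigger studies,
# which automatically receive more weight in the pooled estimate.
toy_trials = [(0.40, 0.20), (0.10, 0.05), (0.25, 0.10)]
d, se = fixed_effect_pool(toy_trials)
print(f"pooled d = {d:.3f} (SE = {se:.3f})")
```

Note how the big low-SE trial (d = 0.10) drags the pooled estimate well below the simple average of the three effects.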
 
While I agree with this, in a general sense, there are threshold criteria employed that are used in the selection of participants for clinical trials. We can't say for certain, without reading the specific method for each trial included in the meta-analysis, but it is highly likely that participants met criteria for major depression as per structured clinical interview and that they had surpassed some threshold for depression symptom severity prior to being deemed eligible for participation in the FDA clinical trials.

I'm quite cynical about people's abilities to accurately fill out self-report measures, or even when structured interviews are conducted. I realize we're left with little choice, but from a purely clinical (non-research) standpoint, clinical impression and behavioral observation are much more powerful in my opinion.

If I had a penny for every patient endorsing every symptom, gameboy playing, cell-phone chatting, "10/10 depression," grandson picture-showing lady in the waiting room of the clinic that c/o "depression and anxiety,"...I'd have a lot of pennies.
 
I'm quite cynical about people's abilities to accurately fill out self-report measures, or even when structured interviews are conducted. I realize we're left with little choice, but from a purely clinical (non-research) standpoint, clinical impression and behavioral observation are much more powerful in my opinion.

I find this very interesting, because the majority of my work is in the context of combined treatment trials. From my perspective, it is much more difficult to find participants who meet our criteria for study participation than those who don't. But even then it varies based on recruitment source. When we do an inpatient recruitment, almost anyone will qualify (minus those who have contraindications or rule-out conditions). But when we do community recruitment, it can be very difficult to find people who meet our severity criteria or who meet full DSM-IV MDD criteria as per the SCID.

Just something to think about...
 
People can criticize the details of the study and the associated reporting, but I think it's disingenuous to criticize the mechanics of studies (broken BP cuffs, poor filling in of forms, fuzziness of depression variables) when skepticism was minimal when studies seemed to demonstrate efficacy.

Even before this study, it has been clear that antidepressant medications have been only marginally more effective than placebo for mild/moderate depression. It has seemed clear to me that the enhanced efficacy would significantly melt away when all patients are taken into consideration (depression studies have trouble getting patients because real-world patients have lots of comorbidities that studies exclude but that likely reduce the likelihood of pharmaceutical success).

I think the "efficacy" of placebo is less related to placebo than to the reality that bona fide major depression fluctuates in intensity and can resolve without much intervention. It seems naive, however, to believe that it is easy to just ask a few questions, discover that someone is sad because of loneliness, and resolve the depression. Psychotherapy does take some training and practice, but it has been demonstrated to be effective for anxiety and depression; in other words, if you don't plan to learn how to do therapy, you will be doing your patients a disservice.
 
Ok, I just asked two attendings and neither has read the article yet. The article is getting so much press that one would think it was published in the New England Journal of Medicine or JAMA (it is not).

One attending's response is that:

1) There are flaws to meta-analysis. If each randomized controlled trial's p value is less than 0.05, it would mean there is only a one in 20 chance that the trial's significant results/findings are due to chance. So combining all these valid trials, along with some unpublished data, and meta-analyzing them to come up with negative results just sounds fishy.

2) Placebo effects are in general way too high in psychiatric studies. There are a couple of explanations. It could be the fact that being seen by a study coordinator/research M.D. each week is what's making pts feel better, so even placebo would show a response. The life of a chronically depressed pt is quite miserable, so getting some attention from a medical professional, akin to psychotherapy, on a weekly basis is amazing to them. Secondly, people are paid $1000 or more to participate in these studies. When you don't have a lot of money to start with, getting paid $1000 means you are more likely to report symptom improvement even though you are taking placebo (pts don't know that they are taking placebo; they just want to please the doctors!).

3) Lastly, many pts relapse into depression after discontinuing their antidepressants. It could be argued that the placebo effects in these short 8-week trials will be short-lived for many depressed pts because they will slip back into depression very easily; that's not the case in the active drug group.
 
1) There are flaws to meta-analysis. If each randomized controlled trial's p value is less than 0.05, it would mean there is only a one in 20 chance that the trial's significant results/findings are due to chance. So combining all these valid trials, along with some unpublished data, and meta-analyzing them to come up with negative results just sounds fishy.

Please don't tell me that's what your attending thinks a p-value is.:(

If someone doesn't know what a p-value is, be very suspicious of everything else that comes out of their mouth about research papers. Med school doesn't teach epidemiology, and it makes me cringe how many people think they've somehow learned how to read papers critically with a few lip-service EBM seminars here and there.
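For anyone keeping score: p < 0.05 means at most a 5% chance of results this extreme *if the drug truly does nothing*. It is not a 5% chance that a significant finding is wrong; that depends on how often the tested hypotheses are true in the first place. A toy simulation (all numbers assumed) shows how different those two statements can be:

```python
# Toy simulation: among "significant" (p < 0.05) trials, the fraction
# that are false positives is NOT 5%; it depends on the base rate of
# real effects. All numbers here are invented assumptions.
import random

random.seed(0)
n_trials = 100_000
prior_real = 0.10   # assume only 10% of tested hypotheses are true effects
power = 0.80        # chance a real effect yields p < 0.05
alpha = 0.05        # chance a null effect yields p < 0.05 (by definition)

false_pos = true_pos = 0
for _ in range(n_trials):
    real = random.random() < prior_real
    significant = random.random() < (power if real else alpha)
    if significant:
        if real:
            true_pos += 1
        else:
            false_pos += 1

fdr = false_pos / (false_pos + true_pos)
print(f"fraction of 'significant' findings that are false: {fdr:.2f}")

# Analytically, under these assumptions:
# P(sig & null) = 0.90 * 0.05 = 0.045; P(sig & real) = 0.10 * 0.80 = 0.08,
# so about 36% of significant findings are false -- far from "1 in 20."
```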
 
Please don't tell me that's what your attending thinks a p-value is.:(

If someone doesn't know what a p-value is, be very suspicious of everything else that comes out of their mouth about research papers. Med school doesn't teach epidemiology, and it makes me cringe how many people think they've somehow learned how to read papers critically with a few lip-service EBM seminars here and there.

Here is a thought. What is the chance of a pt having bipolar? It is either 100% or 0% (he either has it or doesn't). Statistics in a REAL clinical situation are not that helpful or relevant.
 
Here is a thought. What is the chance of a pt having bipolar? It is either 100% or 0% (he either has it or doesn't). Statistics in a REAL clinical situation are not that helpful or relevant.

Oh, if only medicine were that simple.... "S/he may be bipolar" is a common case...

They may be "depression with a psychotic episode".
 
Ok, I just asked two attendings and neither has read the article yet. The article is getting so much press that one would think it was published in the New England Journal of Medicine or JAMA (it is not).

One attending's response is that:

1) There are flaws to meta-analysis. If each randomized controlled trial's p value is less than 0.05, it would mean there is only a one in 20 chance that the trial's significant results/findings are due to chance. So combining all these valid trials, along with some unpublished data, and meta-analyzing them to come up with negative results just sounds fishy.

2) Placebo effects are in general way too high in psychiatric studies. There are a couple of explanations. It could be the fact that being seen by a study coordinator/research M.D. each week is what's making pts feel better, so even placebo would show a response. The life of a chronically depressed pt is quite miserable, so getting some attention from a medical professional, akin to psychotherapy, on a weekly basis is amazing to them. Secondly, people are paid $1000 or more to participate in these studies. When you don't have a lot of money to start with, getting paid $1000 means you are more likely to report symptom improvement even though you are taking placebo (pts don't know that they are taking placebo; they just want to please the doctors!).

3) Lastly, many pts relapse into depression after discontinuing their antidepressants. It could be argued that the placebo effects in these short 8-week trials will be short-lived for many depressed pts because they will slip back into depression very easily; that's not the case in the active drug group.


Agree with the underlined part. I stated that their inclusion criteria sucks a$$. Especially the medications included.
 
Here is a thought. What is the chance of a pt having bipolar? It is either 100% or 0% (he either has it or doesn't). Statistics in a REAL clinical situation are not that helpful or relevant.

REAL clinical situation? Unlike the fake ones where you have to evaluate literature correctly and critically? Hmm...

I'm not saying you can't be a good doctor if you don't know statistics, because there are other ways of learning besides reading the literature with a particularly keen eye for research methodology. You can read just the intro and conclusions of articles, read CNN medicine articles, ask your buddy, or get free dinners from drug companies. Those methods serve most of our colleagues in all fields of medicine.

But you probably can't comment on a meta-analysis' validity from an inferential standpoint if you know nothing about the assumptions used to make statistical inferences in meta-analyses, which are far from intuitive.

Come on, kids, it IS the diagnostic and STATISTICAL manual, right? ;)
 