tell me it's not true...


BabyPsychDoc · Full Member · Joined Apr 22, 2007
You might have seen this already, but just to put it out there again:



"According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that
51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall."
 

Attachments

  • Anti-Depressant Selection Bias.pdf (239.6 KB)
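The effect-size inflation described in the quote can be demonstrated with a toy simulation (all numbers here are invented, not the FDA data): run many small trials of a drug with a modest true effect, "publish" only the statistically significant ones, and compare the apparent effect in the published subset with the truth.

```python
# Toy simulation of publication bias. A drug with a modest true effect is
# tested in many underpowered trials; only the "positive" (p < 0.05) trials
# are published, so the published literature overstates the effect.
import math
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.3   # true standardized effect size (Cohen's d), invented
N_PER_ARM = 40      # subjects per arm in each trial
N_TRIALS = 1000

def run_trial():
    drug = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = math.sqrt(statistics.variance(drug) / N_PER_ARM
                   + statistics.variance(placebo) / N_PER_ARM)
    return diff, diff / se  # observed effect and its z-score

results = [run_trial() for _ in range(N_TRIALS)]
published = [d for d, z in results if z > 1.96]  # only significant trials "published"

all_mean = statistics.mean(d for d, _ in results)
pub_mean = statistics.mean(published)
print(f"true effect:                   {TRUE_EFFECT}")
print(f"mean effect, all trials:       {all_mean:.2f}")
print(f"mean effect, 'published' only: {pub_mean:.2f}  (inflated)")
```

Since a small trial only reaches significance when it happens to observe a large difference, selecting on significance guarantees the published average exceeds the truth, which is the mechanism behind the 32% overall inflation in the quote.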

Great statistic.

Yes, this is one of many reasons why psychiatry meta-analyses are garbage -- there is incredible selection bias, which makes the final conclusions worthless.
 
That's a great point. It was staring me right in the face but I didn't see it. And I take your point about why meta-analyses in psych should be viewed with skepticism.
 
I used to do research. Things like this don't surprise me. There is so much subjectivity, and I'm not just talking about psychiatry--my research was in hypertension.

We used rats--the blood pressure cuff machine for the rats was over 30 years old & didn't work about 20% of the time. So to get a more accurate reading you had to exfoliate the skin on the tails of some of them. I'm sure that affected results. And that was just one of dozens of things going on that shouldn't have been.

Was that bad? Of course it was, but the alternative was to tell the research professor, who would then yell at you, and then you'd either have to do the project the way I described above or lose that letter of recommendation you'd spent 6 months working on, at 40 hrs a week for about $2/hr.....

This is why replicability is very important in medical research.
 
Total sidetrack... how can I get better at assessing the literature? When I'm a resident or attending, can I find weekend symposiums and the like on interpreting statistics? I've taken stats out the wazoo... basic, applied behavioral, and biostats & epi. I'm still mathematically useless.
 
It's nearly impossible for you to truly assess the value of a study until you:

1) Have actually done research in the past... and when I say done, I mean took the data, actually ran it through statistical software, then wrote it up and managed to get it published. You will be able to understand which assumptions are fine and which are total BS.

and...

2) You must know enough about the subject--not just at a PGY-1 level. It takes years to gather enough knowledge about a subject to truly realize what is important to know and what is just garbage. So let's say you are a master of research calculation, and you overlook a point some might ignore: your trial is half female/half male. That might or might not matter, depending on whether the disease is known to hit females far more than males. So how much do you know about the subject?
 
I would recommend getting a masters degree in a more rigorous science. Behavioral econ, experimental psych, evo bio, and physics come to mind.

Just joking, but that'd be ideal.

Doctors get too caught up in statistics and don't pay enough attention to study design itself. I'll give a few examples:

1)
The Study: A gigantic trial a couple of years ago showing that glucosamine and chondroitin were not effective in slowing the progression of arthritis or in symptomatic relief.

The Problem: They used people with ALL GRADES of arthritis, including a substantial proportion with bone-on-bone contact. The issue is that GC is thought to work by increasing renewal and production of cartilage, i.e., by increasing the activity of extant cartilage-forming cell populations. By mixing people in whom GC WOULD NOT BE EXPECTED to work with people in whom it would be, they significantly diminished the likelihood that the study would show a positive benefit.

Conclusion: I'll continue taking my GC and MSM thank you very much.

2)
The Study: Cox-2 inhibitors lead to more coronary events.
The Problem: Cox-1 inhibition by NSAIDs is not insignificant. Ibuprofen, the drug most commonly compared to the Cox-2s, retains significant Cox-1 activity--enough that it is contraindicated in those with thrombocytopenia or platelet diseases. If you don't believe me, follow a peds hem/onc doc around and watch what happens when a parent admits to giving their child Motrin. My ears still ring from the last time I witnessed that, and the child actually had a low-normal count.

The statement that Cox-2s thus lead to coronary events is only one of several possible explanations of the study. The other, more obvious conclusion is that although ibuprofen's effect on platelet binding is smaller than aspirin's, the effect may still be large enough to confer a cardioprotective benefit.

Conclusion: I love my Celebrex.

3) Any obesity study that uses BMI. Just use your eyes. You'll see plenty of flabby people with a BMI of 23, and plenty of lean people with BMIs in the high 20s and low 30s. BMI isn't an effective proxy for body composition and never has been.

I could go on for days, but you get the idea.
 
Thanks for that post MoM, really fascinating stuff. :)
 
The "ibuprofen is protective" spiel was the argument that the drug companies tried to use in their trials to get out of their obviously documented malfeaseance (given the number of internal memos which surfaced which clearly demonstrated that company employees were concerned about the theoretical prothrombotic effects of unbalanced COX inhibition), and it was rejected by the courts, appropriately. Plenty of further research has demonstrated the cardiac risks of vioxx, the most selective cox-II, and less clear results for celebrex, which is not as selective. While ibuprofen may have some mild cardioprotective benefit (not substantiated), subsequent work has made it much less likely that the robust cardiotoxic effects of vioxx were due to this improper comparator.

As for BMI, we use proxies in research all the time. While the measure is clearly problematic, its value in epidemiologic research has been well substantiated. Most people with high BMIs are too fat, and most people with normal BMIs are not. So while it's obviously imperfect, throwing out "any obesity study that uses BMI" leaves us no better off; it just abandons a flawed instrument with nothing to replace it.
 
The "ibuprofen is protective" spiel was the argument that the drug companies tried to use in their trials to get out of their obviously documented malfeaseance (given the number of internal memos which surfaced which clearly demonstrated that company employees were concerned about the theoretical prothrombotic effects of unbalanced COX inhibition), and it was rejected by the courts, appropriately. Plenty of further research has demonstrated the cardiac risks of vioxx, the most selective cox-II, and less clear results for celebrex, which is not as selective. While ibuprofen may have some mild cardioprotective benefit (not substantiated), subsequent work has made it much less likely that the robust cardiotoxic effects of vioxx were due to this improper comparator.

Vioxx, perhaps (especially given the early event risk); Celebrex, no. Have you looked at the cardiovascular RR for some of our old-school NSAIDs? Indomethacin, ketorolac, and diclofenac are comparable to the Cox-2s, or actually riskier, from a cards perspective. Meloxicam isn't much better than Celebrex in that respect.

Also, at the rate of adverse events we're looking at, the protective effect of ibuprofen could be rather small and still be the thing making the difference. NNT of 125, I think.

The courts will make any decision regardless of science. They've proven it 10,000 times.

And I'd still argue that unless a head-to-head trial of Cox-2s + aspirin versus Cox-1s + aspirin, or even a Cox-2 + ibuprofen versus ibuprofen alone trial, were done, we can't say for sure just how dangerous these drugs are from a cardiovascular perspective. If indeed these drugs are prothrombotic (and admittedly there are good arguments for that; from the basic science it's easy to see how selective Cox-2 inhibition would lead to increased TXA expression), we should STILL see a difference in cardiovascular event risk.
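To unpack the NNT figure mentioned above: NNT is just the reciprocal of the absolute risk difference, so an NNT of 125 corresponds to a difference of 0.8 percentage points in event rates. A minimal sketch, with made-up rates (not from any trial):

```python
# Illustrative NNT arithmetic. NNT = 1 / absolute risk difference.
# The event rates below are hypothetical, chosen only to show the math.
def nnt(rate_control: float, rate_treated: float) -> float:
    """Number needed to treat (or harm) for one extra event."""
    return 1.0 / abs(rate_treated - rate_control)

# A 0.8-percentage-point risk difference gives the NNT ~125 mentioned above.
print(round(nnt(0.010, 0.018)))  # → 125
```

At differences this small, a modest protective effect in the comparator arm really could account for much of the observed gap, which is the poster's point.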

Proxies are indeed used in research all the time. It's a sore point for a lot of people; I have had to use proxies in my own research and I'm not very pleased about it. One thing most will agree on, though, is that a good proxy (if there is such a thing) should be causally linked to the thing in question. Body weight is a function of a number of different factors, from organ weight, to bone weight (highly underemphasized in my opinion), to muscle, and of course fat. Because of the multifactorial nature of body weight, and by consequence BMI, the proxy has a rather weak relationship to the value in question.

Mayo recently took a look at the normal-weight population and found that over half of them had unhealthy levels of body fat (over 20% in men and over 30% in women--which is a little higher in women than I would personally use, but that's OK, it's a good start). OVER HALF. Which means that, in fact, most people with a normal BMI do NOT have healthy levels of body fat.

I can't find a citation right now, but some studies have shown that up to 30% of the overweight population have normal, healthy percentages of body fat, which means there is significant overlap in body fat between the two groups.

Such massive overlap completely invalidates just about any study that has used BMI as a proxy for the role of body fat in health.

As an example, a few years ago we learned that elderly people with higher BMIs tend to have lower mortality than elderly people with lower BMIs. Similar results were found for Alzheimer's as well. This led the pro-fat crusaders to scream, "hey look! fat is good!" Of course, more recent studies that actually measured body fat have shown that it wasn't overall weight, but in fact lean muscle mass, that correlated with these outcomes.

The 7-site skinfold test takes a minute or two and is accurate to within 2%. The BMI can never claim that.

Waist/hip measurements, or even better, chest/waist/hip measurements, are also relatively sensitive, unlike the BMI.

At the end of the day, with 50% or more of your 'normal' population actually being unhealthily fat, and up to 30% of your overweight population actually being rather healthy, you lose a whole heck of a lot of resolution in studying the effects of body fat on health. And, more concerning from my meathead exercise phys perspective, you never even get a chance to look at the effects of lean mass on health.
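The attenuation argument in this post can be sketched numerically: if BMI is only a noisy function of body fat, any association between BMI and a fat-driven outcome will come out weaker than the association with body fat itself. All numbers below are invented for illustration:

```python
# Sketch of proxy attenuation: an outcome driven by true body fat correlates
# less strongly with BMI (a noisy proxy for fat) than with fat itself.
# Every coefficient here is made up purely to illustrate the mechanism.
import random
import statistics

random.seed(0)
N = 5000

body_fat = [random.gauss(25, 6) for _ in range(N)]            # true % body fat
bmi      = [f * 0.5 + random.gauss(15, 3) for f in body_fat]  # noisy proxy
outcome  = [f * 0.2 + random.gauss(0, 2) for f in body_fat]   # driven by fat only

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

r_fat = corr(body_fat, outcome)
r_bmi = corr(bmi, outcome)
print(f"outcome vs body fat: r = {r_fat:.2f}")
print(f"outcome vs BMI:      r = {r_bmi:.2f}  (attenuated by proxy noise)")
```

The noisier the proxy, the more the observed correlation shrinks toward zero, which is exactly the "loss of resolution" the poster describes.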
 
Doctors get too caught up in statistics and don't pay enough attention to study design itself. I'll give a few examples:
Though I may disagree with MoM on some points, I wholeheartedly agree with the statement above. You need to have a reasonable understanding of statistics AND a degree of common sense to appraise research output. The "common sense" part is, sadly, frequently overlooked.
 
So what's a good resource to remind me of the pros/cons of different study designs, the definitions of risk ratio, odds ratio, and confidence interval, the difference between p value and clinical significance, etc.?

What do you all think of the Journal Watch publications from NEJM?
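For the specific measures asked about, here is a minimal worked example from a hypothetical 2×2 table; the counts and the Wald-type log-scale confidence interval are illustrative only:

```python
# Risk ratio, odds ratio, and a 95% CI computed from a made-up 2x2 table:
#
#              outcome   no outcome
#   exposed       30         70
#   control       15         85
import math

exposed_yes, exposed_no = 30, 70
control_yes, control_no = 15, 85

risk_exposed = exposed_yes / (exposed_yes + exposed_no)  # 0.30
risk_control = control_yes / (control_yes + control_no)  # 0.15

rr = risk_exposed / risk_control                                      # risk ratio
odds_ratio = (exposed_yes * control_no) / (exposed_no * control_yes)  # odds ratio

# 95% CI for the risk ratio (Wald method on the log scale)
se_log_rr = math.sqrt(1/exposed_yes - 1/(exposed_yes + exposed_no)
                      + 1/control_yes - 1/(control_yes + control_no))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}, 95% CI for RR: ({lo:.2f}, {hi:.2f})")
```

Note that the OR (2.43) exceeds the RR (2.00) even here; the two only approximate each other when the outcome is rare, which is one of the classic traps when reading case-control studies.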
 
I don't know a good one off hand for statistics and all that.

As far as study design goes, I'm not really concerned with the whole randomization versus cohort versus case-control thing (although there are important issues there). In general, double blind > single blind > cohort > case-control > case series > single case, but there are exceptions. I'm talking more about a logical, science-based dissection of the study question and whether or not the experiment a) actually evaluates the study question and b) does so as effectively as possible. Which has to be done on a study-by-study basis, often with your own forays into the background research.

As an example of how involved it is to truly vet a study for scientific validity: there was a recent study in elderly patients involving the use of ibuprofen and weight-lifting. The group allowed to use ibuprofen made significantly greater gains in muscle mass and strength than the group who were NOT allowed to use ibuprofen (I can't remember whether they were given a placebo instead). The researchers then proceeded to argue that Cox-1 inhibition leads to greater muscle gains.

Looking at it, what are some issues with the study?
1. Ibuprofen is a pain reliever. Its effect on pain was not directly assessed during the study.
2. Pain relief might lead to greater intensity in workouts. Intensity was not recorded either as a subjective interpretation of the subjects, or objectively through a look at weight progressions and reps performed.
3. Have the effects of Cox-1 on muscle hypertrophy and strength been studied in the lab? They have, in fact, with several studies, from tissue to lab animal, confirming that Cox-1 is important for muscle mRNA production and is correlated with hypertrophy. Cox-1 inhibition blunts hypertrophy considerably. This is such a well-known effect that the use of anti-inflammatory herbal agents among bodybuilders and powerlifters has gone up considerably, and many who were chronic ibuprofen users no longer touch the stuff. I, on the other hand, insist that the only reason I switched to Celebrex is the once-a-day dosing and the significant GERD I had on Mobic.

My best guess is that the ibuprofen group did better than the control group because they hurt less, and therefore put more energy into their workouts--and that the effect of the increase in intensity far outweighed the negative effects of Cox-1 inhibition in muscle tissue. I suspect that even greater effects would have been seen had they used propoxyphene or butorphanol or Demerol, but I'd be hard pressed to argue that activity at kappa or mu receptors significantly increases mTOR phosphorylation.

I would never have suspected anything was wrong with this study if I was not a meathead or a semi-pro evolutionary biologist. And there's no single book or article you can read to tell you how to look for that stuff.
 
That was a real study? Your first mention of it led me straight to the conclusion that ibuprofen would increase muscle mass through pain relief.... duh!
 
A serious resource is the Center for Evidence Based Medicine.

A great short-cut resource is the Carlat Report.
 