Biostats question to pick your brain


Mustafa

Made-up scenario, so the association is of course imaginary; assume that subject selection was fair, etc. This is just a calculation question.

65-year-old men were retrospectively surveyed to determine whether they developed cancer based on daily selenium 'exposure' over the last 20 years. Two groups of 65-year-old men are compared: one group that took selenium daily at X units or greater for twenty years, and another group that had no selenium exposure at all for 20 years (i.e., no daily selenium for 20 years). The data:

             + Cancer    - Cancer    Total
+ Selenium      120        1380      1500
- Selenium      180         820      1000
What is the relative risk for cancer development in men NOT taking selenium compared to those who did? [Show your calculations and logic]
 
I think it's 2.25.

Risk of ca. when not taking selenium = 180/(180+820)
Risk of ca. when taking selenium = 120/(120+1380)

RR of getting ca. when not taking selenium compared to taking selenium = (180/1000) / (120/1500) = 2.25

The risk of getting cancer when not taking selenium is 2.25 times the risk when taking selenium.
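
A quick way to sanity-check that arithmetic (a sketch of my own, not part of the original post) is to run it in Python:

```python
# Counts from the scenario above
ca_no_sel, no_ca_no_sel = 180, 820    # - selenium group: cancer / no cancer
ca_sel, no_ca_sel = 120, 1380         # + selenium group: cancer / no cancer

# Absolute risk of cancer in each exposure group
risk_unexposed = ca_no_sel / (ca_no_sel + no_ca_no_sel)  # 180/1000 = 0.18
risk_exposed = ca_sel / (ca_sel + no_ca_sel)             # 120/1500 = 0.08

# Relative risk: not taking selenium vs. taking selenium
print(risk_unexposed / risk_exposed)  # 2.25
```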
 
Originally posted by Jaded Soul
I think it's 2.25.

Risk of ca. when not taking selenium = 180/(180+820)
Risk of ca. when taking selenium = 120/(120+1380)

RR of getting ca. when not taking selenium compared to taking selenium = (180/1000) / (120/1500) = 2.25

The risk of getting cancer when not taking selenium is 2.25 times the risk when taking selenium.

Jaded Soul is right.

You compute the absolute risk in each of the two groups first, and then divide one by the other to get the relative risk.
 
Originally posted by MacGyver
Jaded Soul is right.

You compute the absolute risk in each of the two groups first, and then divide one by the other to get the relative risk.

If only they were going to be that straightforward on the real thing...
 
It's an anal point, but I thought you couldn't make any kind of calculation or assumption about risk based on a retrospective study; you could only calculate an odds ratio, which approximates a relative risk but isn't quite the same thing, so the terminology must be kept distinct. Relative risks can only be derived from prospective cohort studies. That's what I was taught, anyway.

😕
 
Originally posted by hammertime
It's an anal point, but I thought you couldn't make any kind of calculation or assumption about risk based on a retrospective study; you could only calculate an odds ratio, which approximates a relative risk but isn't quite the same thing, so the terminology must be kept distinct. Relative risks can only be derived from prospective cohort studies. That's what I was taught, anyway.

😕

As I understand it, odds ratios are used to calculate risk from case-control studies. In these studies, you are trying to determine if people with a known outcome have been exposed to some risk factor. You're estimating the odds that the outcome was the result of the exposure.

Relative risk is used in cohort studies, where you start with known exposures and want to follow the people to see if the outcome happens. You're estimating the risk that a given exposure will result in the outcome. It doesn't matter whether it's a retrospective or prospective cohort study.

I believe the reason odds ratios are only approximations of relative risk is that a case-control study samples subjects by outcome, so you can't measure the incidence of the outcome in the exposed and unexposed groups directly; the odds ratio only comes close to the relative risk when the outcome is rare.

Or I could have learned it totally wrong and someone needs to correct me...
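
To make the distinction concrete, here is a small sketch (my addition, not from the thread) that computes both measures on the 2x2 table from the original question. The odds ratio comes out close to, but larger than, the relative risk, and the gap shrinks as the outcome gets rarer:

```python
# 2x2 table from the original question
a, b = 120, 1380   # + selenium: cancer, no cancer
c, d = 180, 820    # - selenium: cancer, no cancer

# Relative risk (unexposed vs. exposed): ratio of absolute risks
rr = (c / (c + d)) / (a / (a + b))

# Odds ratio (same comparison): ratio of odds, which is all a
# case-control design can give you, since it samples by outcome
odds_ratio = (c / d) / (a / b)

print(round(rr, 2))          # 2.25
print(round(odds_ratio, 2))  # 2.52 -- not equal, because cancer isn't
                             # rare here (18% and 8% incidence)
```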
 
Originally posted by Jaded Soul
As I understand it, odds ratios are used to calculate risk from case-control studies. In these studies, you are trying to determine if people with a known outcome have been exposed to some risk factor. You're estimating the odds that the outcome was the result of the exposure.

Relative risk is used in cohort studies, where you start with known exposures and want to follow the people to see if the outcome happens. You're estimating the risk that a given exposure will result in the outcome. It doesn't matter whether it's a retrospective or prospective cohort study.

I believe the reason odds ratios are only approximations of relative risk is that a case-control study samples subjects by outcome, so you can't measure the incidence of the outcome in the exposed and unexposed groups directly; the odds ratio only comes close to the relative risk when the outcome is rare.

Or I could have learned it totally wrong and someone needs to correct me...

No, my bad. You're 100% correct in what you state above. I reread the original question (um, a little more carefully this time) and the problem here was my poor reading comprehension the first time around. Sorry if I caused any confusion.
 