High-Yield Biostats


nuclearrabbit77
On pp. 37-38 of High-Yield Biostatistics, they define the Type I (alpha) error as a false negative and the Type II (beta) error as a false positive.

However, I checked some other sources, which say it's the opposite.

Fadem (BRS Behavioral Science, 3rd edition, pp. 255-256) agrees with High-Yield.

Originally posted by nuclearrabbit77
On pp. 37-38 of High-Yield Biostatistics, they define the Type I (alpha) error as a false negative and the Type II (beta) error as a false positive.

However, I checked some other sources, which say it's the opposite.
 
I'm contacting the author to get a fuller explanation for you.

-Scott

Lippincott Williams & Wilkins
 
Hello,
I'm in my second semester of a one-year epidemiology MPH at UNC-CH.

Type I error (alpha): usually set at 0.05, to indicate that you accept a 5% chance of rejecting the null hypothesis when in fact the null hypothesis is correct (a false positive).

Type II error (beta): usually set at 0.20, to indicate that you accept a 20% chance of failing to reject the null hypothesis when in fact the null hypothesis is incorrect (a false negative).

Type II is usually set much higher than Type I in study design because it is considered a more serious error to conclude that you have a significant result when in fact you don't than to conclude that your result is not significant when in fact it is. Both can be set to whatever the researcher wants, but by convention they are usually set as above.

And (1 - beta) is the power of the study, so studies are generally designed to have 80% power to detect "significant results," with "significant" defined differently for each study (e.g., a 10% change in the intervention group vs. the placebo group); see the simulation sketch below.

Ok, yeah, the above was off the top of my head, but I double-checked the error definitions in Gordis's book "Epidemiology" (because I'm all anal like that), so I think you're good to go with that.
:)
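
To make those conventions concrete, here is a minimal simulation sketch (an illustration, not from any of the books mentioned; the 0.5 SD effect size and n = 64 per group are assumptions, chosen because n = 64 per group is the classic sample size giving roughly 80% power for that effect at alpha = 0.05):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, alpha = 64, 10_000, 0.05

# Null hypothesis true (both groups identical): the rejection rate estimates
# the Type I error rate, which should land near the chosen alpha of 0.05.
p_null = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
          for _ in range(trials)]
print("empirical Type I error:", np.mean(np.array(p_null) < alpha))  # ~0.05

# Null hypothesis false (true effect of 0.5 SD): the rejection rate estimates
# power (1 - beta), and the non-rejection rate estimates beta itself.
p_alt = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue
         for _ in range(trials)]
print("empirical power (1 - beta):", np.mean(np.array(p_alt) < alpha))  # ~0.80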

Hey there again:
I actually looked at the explanation in High-Yield, and they are correct (and in line with what I was saying), except that they define things relative to the null hypothesis: to them, "false negative" means that you are falsely (incorrectly) saying that the null hypothesis is not true. They do the same for "false positive": you are incorrectly (falsely) saying that the null hypothesis is true.

To me, it makes more sense to define things by what your result is vs. reality: if you find a significant result when the null hypothesis is true, I would call that a false positive, and if you find an insignificant result when the null hypothesis is not true, I would call that a false negative.

The theory behind what the High-Yield biostats book says about this is correct, though; it's just a difference in what your reference point is as you define the relationships.

Hope that wasn't too confusing. Bottom line: go with whatever makes sense to you. I'm a visual person, so drawing out that 2x2 table on page 36 is what I would do when answering questions on this.
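
For anyone without the book handy, the standard 2x2 table (decision vs. reality) looks like this:

                     Null hypothesis true     Null hypothesis false
Reject null          Type I error (alpha)     correct (power = 1 - beta)
Fail to reject null  correct (1 - alpha)      Type II error (beta)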
 
Hmm, I think someone should give me some reputation points for that one. I looked stuff up and everything :)
 
Sohalia said:
Hmm, I think someone should give me some reputation points for that one. I looked stuff up and everything :)

Thanks Sohalia, that really helps... keep up the good work!
 
Here is what the author had to say...

There's a little more to this than meets the eye.

Conventionally, most authors call Type I errors false negative and Type II false positive, but I do see some authors who define them the other way round.

I think the reason for this is that in hypothesis testing we are, strictly speaking, testing the null hypothesis, i.e. the hypothesis that there is no difference, and when we reach a positive conclusion about the null hypothesis, we are reaching a negative conclusion about the phenomenon we are studying.

For example, let's say we are testing a drug vs. a placebo. Suppose we find no difference between the drug and the placebo; the null hypothesis is that there is no difference, so we accept the null hypothesis. If in reality this is a mistake, we have made a false positive (Type II) error: we have reached a positive conclusion (about the null hypothesis), but it is a false one.

But some authors would say that a study which concludes that there IS a significant difference is a "positive" study, and one that concludes that there is no difference (e.g. between a drug and a placebo) is a "negative" study. (This sounds like common sense, but you have to remember that, strictly speaking, in statistics we are testing a null hypothesis, not a drug.)

An example of this is the American College of Physicians online primer at http://www.acponline.org/journals/ecp/novdec01/primer_errors.htm (and the table it links to), where they say, "A type I error is analogous to a false-positive result during diagnostic testing: A difference is shown when in 'truth' there is none." But this is only because they are taking the "common sense" idea that a "positive" result is one that shows a "difference." If you look at the table the page links to, you see that a Type I error is one where you conclude there is a "difference" when in fact there is none, which is the same error I am describing; they just confuse us by calling this a "positive" study!

It sure is confusing. But basically everyone agrees that a Type I error is rejecting the null hypothesis when it is true, and a Type II error is accepting a null hypothesis that is false; the "false positive" or "false negative" terminology really depends on whether you are referring to the null hypothesis or to the alternative hypothesis.
 
Hope this helps!

Feel free to contact me if I can be of any assistance.

-Scott

[email protected]
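
Since the whole thread boils down to that last paragraph, here is a tiny sketch (an illustration, not from the book or the author) that labels each outcome by decision vs. reality:

# Hypothetical helper, for illustration only: classify a test outcome.
# Type I = rejecting a true null; Type II = failing to reject a false null.
def classify(rejected_null, null_is_true):
    if rejected_null and null_is_true:
        return "Type I error (alpha)"
    if not rejected_null and not null_is_true:
        return "Type II error (beta)"
    return "correct decision"

print(classify(rejected_null=True, null_is_true=True))    # Type I error (alpha)
print(classify(rejected_null=False, null_is_true=False))  # Type II error (beta)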
 