Type I and Type II error


jammin_g

To savvy statisticians:
Say I increase the p value from .05 to .1, will the risk of type I error increase AND the risk of type II error decrease?

It sounds logical, but I remember an NBME practice test I took had a question similar to this, where the answers were EITHER increase likelihood of type I or decrease likelihood of type II.
 
jammin_g said:
To savvy statisticians:
Say I increase the p value from .05 to .1, will the risk of type I error increase AND the risk of type II error decrease?

It sounds logical, but I remember an NBME practice test I took had a question similar to this, where the answers were EITHER increase likelihood of type I or decrease likelihood of type II.

type 1 error: "reject null" when it is true
type 2 error: "keep null" when it is false

only the probability of a type 1 error can be definitively determined from your "p" cutoff...the probability of a type 2 error has to be estimated from power (which hasn't been given in this scenario)

hence by increasing your "p" you are increasing the likelihood of a type 1 error.
hope this helps..

So my buddy and I are amending this:

1) if you are changing the criterion value...there should be no change in error of either type.

2) if you change the computed p value only...then you are definitely increasing your risk of type 1 (and possibly type 2)..

g'luck

ucb
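The cutoff-vs-type-I point above is easy to check numerically. Here's a minimal stdlib-Python sketch (my own addition, not from the thread; the sample size, trial count, and normal approximation in place of a t-test are all assumptions for illustration):

```python
import random
from math import sqrt
from statistics import fmean, stdev, NormalDist

random.seed(0)

def two_sided_p(sample, mu0=0.0):
    """Two-sided p-value for a one-sample test of the mean, using a
    normal approximation to stay stdlib-only (for real work the usual
    tool is scipy.stats.ttest_1samp)."""
    n = len(sample)
    z = abs(fmean(sample) - mu0) / (stdev(sample) / sqrt(n))
    return 2 * (1 - NormalDist().cdf(z))

# 5000 experiments in which the null hypothesis is TRUE (mean really is 0)
pvals = [two_sided_p([random.gauss(0, 1) for _ in range(30)])
         for _ in range(5000)]

def type1_rate(alpha):
    # fraction of true-null experiments we'd wrongly call "significant"
    return sum(p < alpha for p in pvals) / len(pvals)

print(type1_rate(0.05))
print(type1_rate(0.10))  # larger: looser cutoff, more false alarms
```

Because both rates are computed from the same simulated p-values, the comparison isolates exactly what moving the cutoff does: more results fall below .1 than below .05, so the type I error rate rises with alpha.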
 
Yes, increased p = increased probability of type I (saying it's true when in reality it isn't)

p has no bearing on type II error. You need to check Power for that!
 
Janders said:
Yes, increased p = increased probability of type I (saying it's true when in reality it isn't)

p has no bearing on type II error. You need to check Power for that!


according to HY biostatistics,

"there will always be a trade-off between type I and type II errors. increasing alpha reduces the chance of a type II error, but it simultaneously increases the chance of a type I error."
 
a question on an NBME asked for the "accuracy" of a test...is this another way to say positive predictive value?
 
jammin_g said:
a question on an NBME asked for the "accuracy" of a test...is this another way to say positive predictive value?


i'm pretty sure it is..

ucb
 
pillowhead said:
according to HY biostatistics,

"there will always be a trade-off between type I and type II errors. increasing alpha reduces the chance of a type II error, but it simultaneously increases the chance of a type I error."


Yeah, I really need to watch how I phrase things. There is a theoretical trade-off whenever you change one; however, you cannot measure that trade-off with the p value alone.
 
jammin_g said:
a question on an NBME asked for the "accuracy" of a test...is this another way to say positive predictive value?

accuracy is true positives plus true negatives divided by all results:

(TP + TN) divided by (TP + TN + FP + FN).

It is not the same as positive predictive value, and it is generally not considered an overly useful statistic. Usually you want a very sensitive test (for screening, and for serious diseases with good treatment) or a very specific one (for ruling in, and for diseases with stigma). A test that is both highly sensitive and highly specific is rare (pregnancy tests are one example), and a test that is only moderately both isn't very helpful.
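A tiny Python sketch makes the accuracy-vs-PPV distinction concrete. The 2x2 counts below are invented purely for illustration:

```python
# Hypothetical screening results for 1,000 people (numbers invented)
TP, FP = 80, 30    # test positive: truly diseased / not diseased
FN, TN = 20, 870   # test negative: truly diseased / not diseased

accuracy    = (TP + TN) / (TP + TN + FP + FN)  # all correct calls / everyone
ppv         = TP / (TP + FP)                   # P(disease | positive test)
sensitivity = TP / (TP + FN)                   # P(positive test | disease)
specificity = TN / (TN + FP)                   # P(negative test | no disease)

print(accuracy)       # 0.95
print(round(ppv, 3))  # 0.727 -- clearly not the same number as accuracy
```

Note how a rare disease drives the two apart: the test is right 95% of the time overall, yet better than a quarter of its positive calls are false alarms.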
 
jammin_g said:
To savvy statisticians:
Say I increase the p value from .05 to .1, will the risk of type I error increase AND the risk of type II error decrease?

It sounds logical, but I remember an NBME practice test I took had a question similar to this, where the answers were EITHER increase likelihood of type I or decrease likelihood of type II.


Remember, sample size plays a big role in determining the p-value, confidence interval, and power. If your sample size is large enough for high power, it gives you the ability to detect a small effect in your analysis. In that case, raising the alpha cutoff is likely to increase the type I error risk and do the opposite for the type II error risk.
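The sample-size point can be sketched with a standard normal-approximation power formula (my own addition; the 0.2 SD effect and the sample sizes are assumptions chosen for illustration): for a fixed small effect, power climbs steeply with n, so the type II error risk falls.

```python
from math import sqrt
from statistics import NormalDist

def approx_power(effect, n, alpha=0.05, sd=1.0):
    """Normal-approximation power for a two-sided one-sample test of the
    mean (ignores the negligible chance of rejecting on the wrong side)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    shift = effect / sd * sqrt(n)  # how far the effect pushes the test statistic
    return 1 - NormalDist().cdf(z_crit - shift)

# fixed small effect (0.2 SD): power grows with sample size
for n in (20, 80, 320):
    print(n, round(approx_power(0.2, n), 3))
```

Raising `alpha` in the same formula also raises power (lower `z_crit`), which ties the sample-size point back to the alpha trade-off discussed above.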
 