Why should an increase in precision cause increased power?


tarsuc

Why should an increase in precision cause increased power? (pg 35 FA)
Isn't precision the ability to get the same result again and again, irrespective of whether it is accurate or not?
 
Three things to remember:
1) Precision is another way of saying reduced variability (some like to say reduced uncertainty); in terms of hypothesis testing or confidence intervals, this translates to a smaller standard error for the parameter of interest (say, the mean).
2) Power is the probability of detecting true differences or true effects (concluding a drug works when it actually does work; generically, rejecting a false null hypothesis).
3) When you want to see the effect of one thing (a change in precision, for example) on another (power, for example), make sure to hold all other factors constant (confidence level/alpha, sample size, etc.).
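
To make the link explicit (this isn't from the original post, just the standard large-sample approximation): for a two-sided test of "no effect" at level alpha, power is roughly

power ≈ Phi( |true effect| / se − z_(1−alpha/2) )

where Phi is the standard normal CDF and z_(1−alpha/2) is the usual critical value (1.96 for alpha = 0.05). The se sits under the true effect, so shrinking it (more precision) makes the whole expression, and hence the power, larger, with alpha, the sample size, and the true effect held fixed.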

You can approach this in one of two ways once you have these definitions in your head. Pretend we're examining the mean change in blood pressure for a medication.
1) Intuitively: If we have more precision, we have less uncertainty regarding our estimate of the mean change in blood pressure for the meds. If we have less uncertainty in our estimate, we are more likely to detect the true effect if it exists. It may or may not exist, but if it does, we're more likely to see it when we have more certainty about our estimate of the effect. (Imagine increasing the resolution of an image: the lines become sharper.)

2) Teeny bit of math from your intro to stats classes (may or may not help with intuition, depends on who you are):
a) Calculating a test statistic for a hypothesis test, recall the basic formula (let's assume a t-statistic): t = (x-bar − mu)/se, where x-bar is the sample statistic (the sample mean change in bp in our example), mu is the value of the true mean under the null hypothesis (in our case, a mean change in bp of zero, i.e., no effect), and se is the standard error of the mean change in bp. Since the se reflects our precision (smaller is more precise) and it sits in the denominator, a smaller se makes the calculated t-stat larger in absolute value. Recall that larger absolute values of test statistics (of any kind) correspond to smaller p-values. For a set level of alpha, a smaller p-value (the result of our increased precision) makes it "easier" to clear the rejection threshold. So whenever a real effect exists, increased precision (a smaller se) raises the probability of rejecting the null when the null is false, because the test statistic is "more extreme" or, equivalently, because the p-value is smaller. That is exactly the definition of increased power. (There's a short numerical sketch of this after part b below.)

b) Think of a confidence interval (which is probably fresher in your mind, given that medical literature and education are starting to prefer CIs to p-values). The 95% confidence interval for our example (the sample mean change in bp, x-bar), assuming a large enough sample size for simplicity, is x-bar +/- (1.96*se); recall that se is the standard error of our sample statistic, the sample mean change in bp. If we have increased precision, the se is smaller. If we add (subtract) a smaller value to (from) the sample statistic, the upper (lower) bound moves closer to the point estimate; that is, the confidence interval becomes narrower to reflect the increased precision (reduced uncertainty) surrounding our estimate of the parameter. A narrower interval is more likely to exclude the null value of zero whenever a real effect exists (excluding zero is the usual criterion for significance at the (100 − CI)% level). Therefore, the increased precision makes it more likely that we recognize true differences/effects when they actually exist, meeting the definition of increased power.
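
Here is a minimal numerical sketch of parts a and b together, using made-up numbers (an observed mean bp change of -5 mmHg and two hypothetical standard errors; none of these values come from the original question). It just plugs into the formulas above, assuming Python with scipy available:

```python
# Minimal sketch with hypothetical numbers: the same observed mean change in bp,
# tested with two different standard errors, everything else held fixed.
from scipy import stats

x_bar = -5.0   # observed mean change in bp (made-up value)
mu_null = 0.0  # null hypothesis: no effect
df = 99        # degrees of freedom, e.g. a sample of n = 100

for se in (3.0, 1.5):                      # less precise vs. more precise
    t = (x_bar - mu_null) / se             # test statistic from part a
    p = 2 * stats.t.sf(abs(t), df)         # two-sided p-value
    lower = x_bar - 1.96 * se              # 95% CI from part b
    upper = x_bar + 1.96 * se
    print(f"se={se}: t={t:.2f}, p={p:.4f}, 95% CI=({lower:.2f}, {upper:.2f})")

# With se = 3.0 the p-value is above 0.05 and the CI crosses zero;
# with se = 1.5 the p-value is well below 0.05 and the CI excludes zero,
# so the same underlying effect is detected only with the more precise estimate.
```

Same data, same effect, same alpha; only the precision changed, and that alone is what turns a non-significant result into a significant one.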
I hope something in here was useful! Feel free to let me know if I can make anything more clear.
 

Thanks a lot, this was really helpful.

Appreciate your time.
 