# Internal and external validity, and attributable risk percent

#### aspiringmd1015

Can someone explain these, and why systematic error specifically decreases accuracy and not precision?


#### aspiringmd1015

Anyone, please? Also the coefficient of determination.

#### DrPicard

I'm not completely sure on validity, but internal validity refers to whether a study correctly measures what it claims to measure within its own sample (i.e., results free of bias and confounding), while external validity implies how applicable the results of a study are to the general population.

Attributable risk is (incidence in exposed) - (incidence in unexposed). It's the difference in incidence created by the risk factor. E.g., if lung cancer has an incidence of 10% in non-smokers and 30% in smokers, the AR would be 30 - 10 = 20 percentage points. Hence, of the 30% incidence in smokers, 10% is due to the baseline risk shared with non-smokers, and the other 20% is due to smoking.
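A quick sketch of that arithmetic in Python, using the made-up incidences from the example above (the numbers are illustrative, not real data):

```python
# Hypothetical incidences from the lung-cancer example, in percent
incidence_smokers = 30.0      # incidence of lung cancer in smokers (%)
incidence_nonsmokers = 10.0   # incidence in non-smokers (%)

# Attributable risk: the extra incidence created by the exposure
attributable_risk = incidence_smokers - incidence_nonsmokers
print(attributable_risk)  # 20.0 percentage points
```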

AR% is a slightly different way of looking at the same information. What you do is calculate the AR, then divide it by the incidence in the exposed group. Since the AR was 20 and the overall incidence in smokers 30%, this becomes 20/30 ≈ 67%. In other words, of all the cases in smokers, about 67% are attributable to smoking.

Systematic error implies an error in study design. We design studies to measure something, and how close our measurements come to the true value is the accuracy. If there's a systematic error in the design of our study, it will be unable to accurately measure what we intend it to. Hence the fall in accuracy.

Precision tells us how close multiple measurements are to each other. Even if there's a systematic error rendering the study inaccurate, it's quite possible all our measurements are very close to each other (i.e., equally wrong).

E.g., if a study designed to measure how tall people are has some systematic error, it may show the average population height to be 8 feet. This is poor accuracy. However, since the systematic error comes into play with every subject, it's quite possible we end up adding 2 feet to every measurement. The study is inaccurate, but still precise, since the error applies equally to all measurements, which means they end up lying close to each other.
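A small simulation of that height example (all numbers invented for illustration): a constant offset shifts the mean (accuracy drops) but leaves the spread of the measurements untouched (precision is preserved).

```python
import random

random.seed(0)
# Hypothetical true heights in feet
true_heights = [random.gauss(5.7, 0.3) for _ in range(1000)]

# Systematic error: every reading comes out 2 feet too tall
measured = [h + 2.0 for h in true_heights]

true_mean = sum(true_heights) / len(true_heights)
measured_mean = sum(measured) / len(measured)

def spread(xs):
    """Population standard deviation: a simple measure of precision."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Accuracy suffers: the mean is off by exactly the systematic error
print(round(measured_mean - true_mean, 2))  # 2.0

# Precision is unchanged: the spread of readings is the same
print(round(spread(measured) - spread(true_heights), 6))  # 0.0
```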

The correlation coefficient describes the linear relationship between two variables, i.e., an independent and a dependent variable. It shows us how one variable increases or decreases as the other variable increases or decreases. The range falls between -1 (perfect negative correlation) and +1 (perfect positive correlation); 0 means no correlation. The farther from 0 (and the closer to -1 or +1), the stronger the correlation.

If you square the correlation coefficient, you end up with only positive numbers, since two negatives multiplied can yield only a positive. Squaring it gives us what we call the coefficient of determination (r squared), whose value lies between 0 and 1. Again, the farther from 0, the stronger the relationship. Specifically, the coefficient of determination gives us the proportion of variability in the dependent variable that is explained by the independent variable. E.g., if the coefficient of determination is 0.56, it means 56% of the variability in the dependent variable is explained by the independent variable.
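A minimal sketch of both quantities on some made-up paired data (Pearson's r computed from scratch, then squared to get r²):

```python
# Invented paired observations for illustration only
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Pearson's r: covariance scaled by the product of the spreads
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5

r = cov / (sd_x * sd_y)
r_squared = r ** 2

print(round(r_squared, 2))  # 0.6
```

Here r² = 0.6 means 60% of the variability in `ys` is explained by `xs`, which is exactly the "proportion of variability explained" reading described above.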

I'm tired and sleepy so I hope I did a good job.