Biostats question - BRS Behavioral


Phloston

P. 268 of BRS Behavioral has this question:

If the cutoff value indicating a positive PSA test is lowered from 4 ng/mL to 3 ng/mL, this would:

A) increase negative predictive value
B) decrease sensitivity
C) increase false-negative rate
D) increase positive predictive value
E) increase specificity

BRS has choice A as the answer.

Now, with respect to lowering the cutoff, sensitivity of the test would increase and specificity would decrease. This would mean increased true- and false-positives, as well as decreased true- and false-negatives.

My concern is that there's no way to judge, based on this question, to what extent sensitivity and specificity are increasing and decreasing, respectively, and therefore it's not possible to say whether NPV and PPV would increase or decrease.

For instance, if sensitivity increases fractionally more than specificity decreases, PPV and NPV both increase. If sensitivity increases less than specificity decreases, PPV and NPV both decrease. In this question, however, there's no way to gauge whether that's even the case.

Any thoughts here?
 
Shifting Left = Increased Sensitivity = Decreased PPV / Increased NPV (more false positives)
Shifting Right = Increased Specificity = Increased PPV / Decreased NPV (more false negatives)

This is the general idea you need to have in mind.

[Image: cutoffs2.gif — overlapping healthy and diseased distributions with candidate cutoff points A, B, and C]

Moving from B or C to A = above results.

The only way to increase both PPV and NPV would be to shift the two curves further apart with less overlap.

source for image/for further reading: http://vet.osu.edu/extension/review
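
Here's a quick numeric sketch of that left-shift pattern (my own made-up numbers, not from BRS or the linked page): assume healthy PSA values follow roughly N(2, 1) and diseased values N(5, 1.5) ng/mL, with 10% prevalence, and compare cutoffs of 4 and 3 ng/mL.

```python
# Hypothetical illustration only: two overlapping normal distributions for
# PSA, healthy ~ N(2, 1), diseased ~ N(5, 1.5), assumed prevalence 10%.
from scipy.stats import norm

PREV = 0.10                            # assumed prevalence of disease
healthy = norm(loc=2.0, scale=1.0)     # assumed healthy PSA distribution
diseased = norm(loc=5.0, scale=1.5)    # assumed diseased PSA distribution

def metrics(cutoff):
    sens = diseased.sf(cutoff)         # P(test + | diseased)
    spec = healthy.cdf(cutoff)         # P(test - | healthy)
    ppv = sens * PREV / (sens * PREV + (1 - spec) * (1 - PREV))
    npv = spec * (1 - PREV) / (spec * (1 - PREV) + (1 - sens) * PREV)
    return sens, spec, ppv, npv

for cutoff in (4.0, 3.0):
    sens, spec, ppv, npv = metrics(cutoff)
    print(f"cutoff {cutoff}: sens={sens:.3f} spec={spec:.3f} "
          f"ppv={ppv:.3f} npv={npv:.3f}")
```

With these invented numbers, dropping the cutoff from 4 to 3 takes sensitivity from roughly 0.75 to 0.91, specificity from roughly 0.98 to 0.84, PPV from about 0.79 down to about 0.39, and NPV from about 0.97 up to about 0.99 — the left-shift pattern described above.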
 
I've seen this curve in FA, but I haven't exactly analyzed the slopes or anything. However, it's a bit deceiving: the NPV/PPV relationships you describe hold as long as we start at the intersecting point and shift left or right, whereas I'd think that if we're changing the cutoff, it's because we're moving to a desired intersecting point. This actually matters because the rates at which sensitivity and specificity change relative to one another differ depending on whether we're moving toward or away from that point.

Do you see what I'm getting at?
 

You're overthinking this. The intersection point isn't a desired point. To get it, you randomly sample a few thousand people, run the lab test on everyone, and then apply the gold-standard diagnosis independently, without using the test you're evaluating. From that, a graph like this one is created: it shows, for each gold-standard group (healthy vs. diseased), the distribution of lab values. It's a standardized data plot of the population being studied.

Because the graph isn't changing, we can think about this conceptually instead of going through the tedious algebra (I tried it once before, but playing with three variables is f'ing annoying).

NPV = True negatives / (True negatives + False negatives)

If you increase your sensitivity, you are by definition going to catch more of the diseased people because your threshold for detecting disease has been lowered. False negatives shrink proportionally faster than true negatives do, so the people who still test negative are almost all truly healthy. This causes a net increase in NPV.

Conversely, PPV = TP / (TP + FP)

A lower threshold for disease detection increases the number of false positives you get. Type I error is the same thing as the false-positive rate = 1 - specificity, and we know increasing sensitivity decreases specificity, so 1 minus a smaller number = a higher false-positive rate. Overall there's a net decrease in PPV, since the false positives in the denominator grow faster than the true positives in the numerator.
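
To make that arithmetic concrete, here's a toy 2×2 example with hypothetical counts (1,000 people, 10% prevalence; the counts are invented, not from the book):

```python
# Toy 2x2 counts (hypothetical) at two cutoffs. Lowering the cutoff
# raises both TP and FP, and lowers both FN and TN.
counts = {
    4.0: dict(tp=75, fp=20, fn=25, tn=880),   # higher cutoff
    3.0: dict(tp=90, fp=140, fn=10, tn=760),  # lower cutoff
}

for cutoff, c in counts.items():
    sens = c["tp"] / (c["tp"] + c["fn"])
    spec = c["tn"] / (c["tn"] + c["fp"])
    ppv = c["tp"] / (c["tp"] + c["fp"])
    npv = c["tn"] / (c["tn"] + c["fn"])
    print(f"cutoff {cutoff}: sens={sens:.2f} spec={spec:.2f} "
          f"ppv={ppv:.2f} npv={npv:.2f}")

# With these made-up counts, PPV drops (75/95 ≈ 0.79 -> 90/230 ≈ 0.39)
# while NPV rises (880/905 ≈ 0.97 -> 760/770 ≈ 0.99).
```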

Besides this, also remember the associations between predictive value and prevalence, which they love to test. Decreasing or increasing prevalence will generally not change spec/sens (For USMLE it will NEVER increase sens/spec, for theoretical biostatisticians, yes, it is possible).
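
A small sketch of that prevalence point (my own illustration, with sensitivity and specificity assumed fixed at 0.90 and 0.84): as prevalence changes, only the predictive values move, since PPV and NPV are computed from Bayes' rule.

```python
# Illustration: hold sensitivity and specificity fixed and vary prevalence;
# only PPV and NPV change.
SENS, SPEC = 0.90, 0.84   # assumed, fixed properties of the test

def predictive_values(prev):
    ppv = SENS * prev / (SENS * prev + (1 - SPEC) * (1 - prev))
    npv = SPEC * (1 - prev) / (SPEC * (1 - prev) + (1 - SENS) * prev)
    return ppv, npv

for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(prev)
    print(f"prevalence {prev:.2f}: ppv={ppv:.2f} npv={npv:.2f}")

# Higher prevalence -> higher PPV and lower NPV; sens/spec stay the same.
```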
 
...(For USMLE it will NEVER increase TP...

I hope you realize that that statement is the key.

If TP remain the same, then so do TN. Therefore, the only reason PPV decreases and NPV increases is because one false parameter must decrease when the other increases.

I've attached a photo of a drawing I just made.... (and, yes, I know that the NPV/PPV are the right-most column, which I've left blank).
 



Hey man, sorry, I typed that out wrong. This is what I get for writing at 4 am; sorry to have confused you.

I meant to say that changing prevalence generally won't affect sens/spec, and for the USMLE it does not.

Denominator changes matter more in division, which is why the PPV ends up smaller. Here's an online calculator so you can try out as many of these as you like:

http://vassarstats.net/clin2.html

Also, in the chart, when you increase sensitivity you would have increases in W and Y, but I see maybe you're just including FP and FN.
 