Sens/prevalence

HiddenTruth

How is it that changing the criteria (reference interval) for the diagnosis of a disease changes the sensitivity of a test? That doesn't make sense to me; I had a qbank question on this. I would think that if you decrease the cutoff for a dx (reducing 126 --> 100 for the dx of DM), that would simply increase the prevalence of the disease in the population, resulting in a decreased neg PV. The explanation goes on to say that lowering the reference interval from 126 to 100 would increase the test's sensitivity, since a lower glucose cutoff approaches the normal value for glucose in the normal population, thereby increasing the neg PV. Any thoughts?

"A test's sensitivity is inversely proprotional to its specificity. Increasing the sensitivity, automatically lowers its specificity, since the number of FP's will increase". Again, I believe they're confusing this stuff. Sens and specificty are independent variables of a test.

 
HiddenTruth said:
How is it that changing the criteria (reference interval) for the diagnosis of a disease changes the sensitivity of a test? That doesn't make sense to me; I had a qbank question on this. I would think that if you decrease the cutoff for a dx (reducing 126 --> 100 for the dx of DM), that would simply increase the prevalence of the disease in the population, resulting in a decreased neg PV. The explanation goes on to say that lowering the reference interval from 126 to 100 would increase the test's sensitivity, since a lower glucose cutoff approaches the normal value for glucose in the normal population, thereby increasing the neg PV. Any thoughts?

"A test's sensitivity is inversely proprotional to its specificity. Increasing the sensitivity, automatically lowers its specificity, since the number of FP's will increase". Again, I believe they're confusing this stuff. Sens and specificty are independent variables of a test.


Sensitivity is "how many cases do you catch (vs. how many do you miss)". If you lowered the cutoff to 100 (or 60, or 10), then you wouldn't miss any at all. Hence, sensitivity increases. Specificity goes down, as you're now loaded with false positives.

NPV would also increase: with your sensitivity now so ridiculously high, if a case falls outside that range, odds are they don't have whatever disease it is you're testing for.

BTW, decreasing the prevalence decreases the "positive" predictive value. It "increases" the NPV.

The equations in FA explain it better, but I figured you wanted to understand the reasoning. Good luck.
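
If you want to see it with actual numbers, here's a rough Python sketch. The glucose values and disease labels below are completely made up, just for illustration:

# Toy data: (fasting glucose, truly diabetic?) -- every value invented.
patients = [
    (90, False), (95, False), (105, False), (110, False), (115, True),
    (120, True), (125, False), (130, True), (140, True), (160, True),
]

def sens_spec(cutoff):
    tp = sum(1 for g, dis in patients if g >= cutoff and dis)      # true positives
    fn = sum(1 for g, dis in patients if g < cutoff and dis)       # false negatives
    tn = sum(1 for g, dis in patients if g < cutoff and not dis)   # true negatives
    fp = sum(1 for g, dis in patients if g >= cutoff and not dis)  # false positives
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in (126, 100):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
# cutoff 126: sensitivity 0.60, specificity 1.00
# cutoff 100: sensitivity 1.00, specificity 0.40

Dropping the cutoff catches every true case (sensitivity up) at the price of flagging healthy people (specificity down).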

HamOn
 
HamOnWholeWheat said:
Sensitivity is "how many cases do you catch (vs. how many do you miss)". If you lowered the cutoff to 100 (or 60, or 10), then you wouldn't miss any at all. Hence, sensitivity increases. Specificity goes down, as you're now loaded with false positives.

NPV would also increase: with your sensitivity now so ridiculously high, if a case falls outside that range, odds are they don't have whatever disease it is you're testing for.

BTW, decreasing the prevalence decreases the "positive" predictive value. It "increases" the NPV.

HamOn

Ok, but by lowering the cutoff, you would increase the prevalence, which would increase the PPV and decrease the NPV, completely opposite of what increasing the sensitivity would do.
 
HiddenTruth said:
Ok, but by lowering the cutoff, you would increase the prevalence, which would increase the PPV and decrease the NPV, completely opposite of what increasing the sensitivity would do.

Hmmm... I'm missing something in there somewhere.

"by lowering the cutoff, you would increase the prevalence"

agreed

"which would increase the PPV and decrease the NPV"

still with you

"completely opposite of what increasing the sensitivity would do"

Lost ya. I guess I don't see that direct relationship between sensitivity and PPV/NPV. The domains they're calculated under aren't the same. I don't see how you can make a meaningful comparison between them, but that's just looking at them mathematically.

              (disease)
               +     -
(test)   +     a     b
         -     c     d

Prevalence IRRELEVANT:
sens = a/(a+c)
spec = d/(b+d)

Prevalence RELEVANT:
PPV = a/(a+b)
NPV = d/(c+d)
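
For instance, in Python, with invented counts:

# Same letters as the table above; the counts here are invented.
a, b, c, d = 80, 30, 20, 870  # TP, FP, FN, TN

sens = a / (a + c)                      # 0.80  -- prevalence plays no role
spec = d / (b + d)                      # ~0.97 -- prevalence plays no role
ppv = a / (a + b)                       # ~0.73 -- moves with prevalence
npv = d / (c + d)                       # ~0.98 -- moves with prevalence
prevalence = (a + c) / (a + b + c + d)  # 0.10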

Therefore, changing the prevalence affects the PPV/NPV; changing the sensitivity doesn't. I know, I know, lowering the cutoff in this case increases the sensitivity AND increases the prevalence, which increases the PPV, but that doesn't necessitate that increasing the sensitivity increases the PPV by itself.

Suppose you made a better test for detecting a virus. Prevalence would stay the same, yet test sensitivity would go up. "a" wouldn't be higher, "c" would just be lower (fewer false negatives). To increase "a" would mean creating new infected people, so to speak. Thus, PPV = a/(a+b), no change from before we increased the sensitivity of the test.

The difference here is that the test is just a defined criterion, which also happens to define the disease. So by lowering the disease standard, you simultaneously increase the sensitivity of the test and the prevalence as well.

So I think the problem was that you were logically assuming that PPV/NPV and sens./spec. are related, when they're not. They happened to both be affected in the question you mentioned, but that doesn't necessitate a causal link.

Excellent question BTW. That one really made me think. :thumbup:

HamOn
 
Wow, I think I am more confused about biostats than I ever was before.

HamOnWholeWheat said:
Suppose you made a better test for detecting a virus. Prevalence would stay the same, yet test sensitivity would go up. "a" wouldn't be higher, "c" would just be lower (fewer false negatives). To increase "a" would mean creating new infected people, so to speak. Thus, PPV = a/(a+b), no change from before we increased the sensitivity of the test.

I don't quite understand this. If you increase the sensitivity, you automatically decrease the FN number. I mean, if a = 60 and c = 40 and you change c, then "a" automatically goes up, I'm thinking. Right now you would have 60% sensitivity (a/(a+c)), but if you increase the sensitivity to 80%, then you are able to detect 20 more people who do have the disease who were not detected before, meaning your FN (c) would decrease by 20, and that should be added to your TP (a). So now a = 80, while c = 20, giving you a total of 100 people, like before. How can you increase the sensitivity without changing the TP number? By definition, you are now able to detect more people who do have the disease that you weren't able to before (decreasing FN and increasing TP).

HamOnWholeWheat said:
Therefore, changing the prevalence affects the PPV/NPV; changing the sensitivity doesn't.

And, based on the same principle above, I think that by increasing your sensitivity you have a smaller FN number, and therefore your NPV is higher. I mean, isn't that the reason why tests with high sensitivity are used as screening tests: because the FN number is decreased, a test that comes back negative more than likely represents a true negative. That's why tests that have 100 percent sensitivity also have a 100 percent NPV. So there is some relationship there. But I may be missing a concept.
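
To put numbers on that chain in Python (borrowing the 60/40 vs. 80/20 split from above, and making up the non-diseased column so that specificity is 90% in both cases):

b, d = 90, 810  # FP and TN held fixed -- invented numbers

for a, c in [(60, 40), (80, 20)]:  # sensitivity 60% vs. 80%, a + c fixed at 100
    print(f"sensitivity {a / (a + c):.0%} -> NPV {d / (c + d):.3f}")
# sensitivity 60% -> NPV 0.953
# sensitivity 80% -> NPV 0.976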

And so, again, back to my original question. Fine, I can see where you can't make a correlation of PPV with sensitivity. But it is true that by decreasing the cutoff, you increase prevalence --> increase PPV and decrease neg PV.
Likewise, the same scenario increases the sensitivity of the test, so that would result in an increased neg PV based on my explanation above (increasing sensitivity --> decreases FN --> increases NPV). So it really doesn't make sense to me. I know I am probably wrong, because I have read that sens and specificity don't correlate with NPV and PPV, but somehow both those scenarios make sense. Any thoughts?
 
HiddenTruth said:
Wow, I think I am more confused about biostats than I ever was before.

I don't quite understand this. If you increase the sensitivity, you automatically decrease the FN number. I mean, if a = 60 and c = 40 and you change c, then "a" automatically goes up, I'm thinking. Right now you would have 60% sensitivity (a/(a+c)), but if you increase the sensitivity to 80%, then you are able to detect 20 more people who do have the disease who were not detected before, meaning your FN (c) would decrease by 20, and that should be added to your TP (a). So now a = 80, while c = 20, giving you a total of 100 people, like before. How can you increase the sensitivity without changing the TP number? By definition, you are now able to detect more people who do have the disease that you weren't able to before (decreasing FN and increasing TP).


"Right now you would have a 60% sensitivity (a/a+c), but if you increase the sensitivity to 80%, then you are able to detect 20 more people that do have the disease that were not detected before"

Yeah, that's actually correct. After I submitted my last message, I realized I screwed up that point but had to run. I was typing it in a rush, and it came out as babble. I think I can do it better this time. :oops:

The problem is, in this case, our "test" (blood sugar > 100) is ALSO THE DEFINITION OF THE DISEASE. If we say that glucose > 100 is the cutoff for diabetes, then "a+c" is the number of people who absolutely have the disease according to that definition, and that goes up because we've just redefined what the disease is. That's why prevalence ("a+c") increases in this case.

By sheer coincidence, our "test" is also glucose > 100, which increases our sensitivity as well, but that's totally unrelated to the prevalence going up. Sensitivity depends on the ratio a:c, but the total (a+c) should not change by changing the ratio alone.

So in this case, the sensitivity didn't increase "a" as much as the redefinition of the disease did when it increased the prevalence. It may seem like splitting hairs, but it's not.

This should show why this Qbank question was so confusing:

1) In isolation, just redefine the disease to be glucose > 100.
Result: "a+c" goes up, "b+d" goes down. Simple, right? More people meet the disease criteria.

2) Ignoring what we just did, let's say an "unknown" test for detecting diabetes (not related to blood sugar) undergoes a technological advance which increases its sensitivity (like a new gene is found in all diabetics). Let's say the old test detected 80/100 (a = 80, c = 20) affected people. Now you detect 99/100 (a = 99, c = 1). Can the prevalence have gone up? No. Your sensitivity just went up. "a+c" equalled 100 before, and it still equals 100.

For this crappy Kaplan problem, the "test" is just the logical criterion of glucose > 126 vs. glucose > 100. So if we allow more people to fit the criterion, then our sensitivity magically went up. This test has nothing to do with beakers and pipettes; it's just a cutoff. But wait, we also just redefined the disease to be glucose > 100! So prevalence will go up after all.

Combine the two effects as done in this problem: "a" goes up for two reasons and the effect on prevalence appears to be linked to the effect on sensitivity.
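
Here's a toy Python version of the two effects, with invented glucose values:

glucose = [85, 92, 98, 104, 110, 118, 123, 128, 135, 150]  # invented values

# Effect 1: redefining the disease changes the prevalence (a + c).
for definition in (126, 100):
    n = sum(1 for g in glucose if g > definition)
    print(f"disease = glucose > {definition}: prevalence {n}/{len(glucose)}")
# disease = glucose > 126: prevalence 3/10
# disease = glucose > 100: prevalence 7/10

# Effect 2: judged against the NEW definition (> 100), the old cutoff of 126
# misses cases; a cutoff of 100 catches every one of them.
true_cases = [g for g in glucose if g > 100]
caught_by_old = sum(1 for g in true_cases if g > 126)
print(f"old cutoff as the test: sensitivity {caught_by_old}/{len(true_cases)}")
# prints 3/7 -- moving the test cutoff down to 100 takes this to 7/7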

Sorry for the nearly incomprehensible previous reply. Hopefully that should clear it up.

HamOn
 
HamOnWholeWheat said:
"Right now you would have 60% sensitivity (a/(a+c)), but if you increase the sensitivity to 80%, then you are able to detect 20 more people who do have the disease who were not detected before"

Yeah, that's actually correct. After I submitted my last message, I realized I screwed up that point but had to run. I was typing it in a rush, and it came out as babble. I think I can do it better this time. :oops:

The problem is, in this case, our "test" (blood sugar > 100) is ALSO THE DEFINITION OF THE DISEASE. If we say that glucose > 100 is the cutoff for diabetes, then "a+c" is the number of people who absolutely have the disease according to that definition, and that goes up because we've just redefined what the disease is. That's why prevalence ("a+c") increases in this case.

By sheer coincidence, our "test" is also glucose > 100, which increases our sensitivity as well, but that's totally unrelated to the prevalence going up. Sensitivity depends on the ratio a:c, but the total (a+c) should not change by changing the ratio alone.

So in this case, the sensitivity didn't increase "a" as much as the redefinition of the disease did when it increased the prevalence. It may seem like splitting hairs, but it's not.

This should show why this Qbank question was so confusing:

1) In isolation, just redefine the disease to be glucose > 100.
Result: "a+c" goes up, "b+d" goes down. Simple, right? More people meet the disease criteria.

2) Ignoring what we just did, let's say an "unknown" test for detecting diabetes (not related to blood sugar) undergoes a technological advance which increases its sensitivity (like a new gene is found in all diabetics). Let's say the old test detected 80/100 (a = 80, c = 20) affected people. Now you detect 99/100 (a = 99, c = 1). Can the prevalence have gone up? No. Your sensitivity just went up. "a+c" equalled 100 before, and it still equals 100.

For this crappy Kaplan problem, the "test" is just the logical criterion of glucose > 126 vs. glucose > 100. So if we allow more people to fit the criterion, then our sensitivity magically went up. This test has nothing to do with beakers and pipettes; it's just a cutoff. But wait, we also just redefined the disease to be glucose > 100! So prevalence will go up after all.

Combine the two effects as done in this problem: "a" goes up for two reasons and the effect on prevalence appears to be linked to the effect on sensitivity.

Sorry for the nearly incomprehensible previous reply. Hopefully that should clear it up.

HamOn

Thanks for the explanation. It makes sense, and I understand what you're saying. That said, it's still a shady question, because it depends on which component you are looking at: the prevalence part or the sensitivity part. They were obviously looking for an answer in response to an increase in sensitivity, while I was looking for an answer in response to an increase in prevalence. And they were both present.

In any case, thanks for the clarification. On a side note, is this a reasonable assumption, based on my previous post: (increasing sensitivity --> decreases FN --> increases NPV)?
 
HiddenTruth said:
In any case, thanks for the clarification. On a side note, is this a reasonable assumption, based on my previous post: (increasing sensitivity --> decreases FN --> increases NPV)?

Absolutely. As long as the "test" doesn't redefine the disease and thus alter the prevalence, then increasing sensitivity --> decreases FN --> increases NPV.

Glad to help,

HamOn
 