How is it that changing the criteria (reference interval) for the diagnosis of a disease changes the sensitivity of a test? That doesn't make sense to me--I had a qbank question on this. I would think that if you lower the cutoff for a diagnosis (e.g., dropping the fasting glucose threshold for DM from 126 to 100), that would simply increase the prevalence of the disease in the population, resulting in a decreased negative PV. The explanation goes on to say that lowering the reference interval from 126 to 100 would increase the test's sensitivity, since a lower glucose cutoff approaches the normal value for glucose in the normal population, thereby increasing the negative PV. Any thoughts?
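The qbank's point can be seen with a toy example: for a test based on a continuous measurement, moving the cutoff trades sensitivity against specificity. This is a minimal sketch using made-up glucose values (hypothetical numbers, not real clinical data), just to show the mechanics:

```python
# Hypothetical fasting glucose values (mg/dL) -- illustrative only.
diabetic = [110, 118, 125, 140, 155, 170, 190, 210]
healthy  = [80, 85, 90, 95, 99, 105, 112, 120]

def sens_spec(cutoff):
    tp = sum(g >= cutoff for g in diabetic)   # diseased, test positive
    fn = len(diabetic) - tp                   # diseased, test negative
    tn = sum(g < cutoff for g in healthy)     # healthy, test negative
    fp = len(healthy) - tn                    # healthy, test positive
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in (126, 100):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.3f}, specificity {spec:.3f}")
```

With these numbers, dropping the cutoff from 126 to 100 catches every diabetic (sensitivity rises from 0.625 to 1.0) but mislabels more healthy people (specificity falls from 1.0 to 0.625) -- the cutoff itself is what sets the sensitivity/specificity trade-off for a continuous test.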
"A test's sensitivity is inversely proportional to its specificity. Increasing the sensitivity automatically lowers its specificity, since the number of FPs will increase." Again, I believe they're confusing this stuff. Sensitivity and specificity are independent properties of a test.