Hello SDN,
Doing practice questions, and came across this topic. Will prevalence not affect false positive values?
TIA!
> So why would prevalence make a difference?

This is so wrong on so many levels and is frequently taught incorrectly (along with the major fallacy that a 90% specific test that comes back negative means a 90% chance the patient is disease free, for example). The literature readily shows that sensitivity and specificity vary with patient characteristics. One example I recall is sensitivity/specificity for detecting CAD changing based on the patient's age.
Specificity and sensitivity are intrinsic properties of the test. If the test were less specific, say 90%, the false positive rate would go up correspondingly, to 10%. It has nothing to do with the prevalence of the disease.
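To make the "intrinsic property" claim concrete, here is a quick sketch (with made-up numbers; a 95%-sensitive, 90%-specific test is an assumption for illustration, not any real assay) tabulating expected confusion-matrix counts at several prevalences. The false positive *rate* stays at 1 - specificity no matter the prevalence, even though the raw count of false positives changes:

```python
# Hypothetical test characteristics (illustrative numbers, not from any real test).
SENS, SPEC = 0.95, 0.90

def confusion(n, prevalence):
    """Expected confusion-matrix counts for n patients at a given prevalence."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = SENS * diseased        # true positives
    fn = diseased - tp          # false negatives
    tn = SPEC * healthy         # true negatives
    fp = healthy - tn           # false positives
    return tp, fp, tn, fn

for prev in (0.01, 0.20, 0.50):
    tp, fp, tn, fn = confusion(10_000, prev)
    fpr = fp / (fp + tn)        # false positive rate, conditioned on being healthy
    print(f"prevalence={prev:.2f}  FP count={fp:6.0f}  FPR={fpr:.2f}")
# FPR is 0.10 (= 1 - specificity) in every row; only the FP count moves.
```

This is the sense in which the false positive rate "has nothing to do with prevalence": it is defined conditional on being disease free, so the disease-positive column never enters the calculation.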
> Pretty sure you have sensitivity/specificity and positive/negative predictive value confused. Many students do this. Again, sensitivity and specificity are properties of the test and do not vary with prevalence. If you contest this, please cite the articles you are referring to. Otherwise, "I remember reading this here" doesn't cut it.

Nope, not confused about any of the four. Sensitivity and specificity are often incorrectly taught by nonstatisticians as constant "properties of the test," but they are not constant across patient characteristics. See Harrell et al. (a true statistician, and the former founding chair of the biostatistics department at Vanderbilt). He uses this as an example of one of many reasons that sensitivity and specificity are pretty silly metrics for physicians to be wrapped up in (they add almost literally nothing to patient care).
> A 90% specific test that comes back negative does not mean that a patient is 90% disease free. You are correct here.

I'm bringing this up only to reference other commonly incorrect "teachings" by nonstatisticians, since most don't have a good understanding of these four metrics or of how probability works.
> Because whether that patient is disease free would depend on the prevalence, or pre-test probability. If your pre-test probability is high, then the predictive value of that negative test is low. If you wish to know the probability of no disease given a negative test, you would need to know the negative predictive value, which is not the same as specificity. Negative predictive value depends on prevalence.

No one's said anything to the contrary regarding the definitions of sens/spec and PPV/NPV.
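The point about negative predictive value falls straight out of Bayes' rule. A minimal sketch (the 90%/90% test and the three pre-test probabilities are purely illustrative):

```python
def npv(sens, spec, prev):
    """P(no disease | negative test), via Bayes' rule."""
    true_neg = spec * (1 - prev)      # P(test negative and healthy)
    false_neg = (1 - sens) * prev     # P(test negative and diseased)
    return true_neg / (true_neg + false_neg)

# Same 90%-sensitive, 90%-specific test at three pre-test probabilities:
for prev in (0.05, 0.50, 0.90):
    print(f"pre-test probability={prev:.2f}  NPV={npv(0.90, 0.90, prev):.3f}")
# Prints 0.994, 0.900, 0.500: at high pre-test probability, a negative
# result no longer implies a 90% chance of being disease free.
```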
> Finally, any test that detects CAD that varies with age presumably varies because of pre-test probabilities, i.e. prevalence. If you're 20 years old with a positive test, the chances are it's a false positive. This is not sensitivity/specificity but rather predictive value. The positive predictive value of a test for CAD in a 20-year-old is bound to be low.

So you've contradicted yourself a few times in the same few sentences. See the reference. I'm also sure you can dig up your own med school slides or UpToDate articles that cite different sensitivity or specificity for different patient groups (literally varying with patient characteristics for the same test).
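Putting hypothetical numbers on the PPV point (the two pre-test probabilities below are invented for illustration, not real CAD prevalences): even with a decent test, PPV collapses when disease is rare in the group being tested.

```python
def ppv(sens, spec, prev):
    """P(disease | positive test), via Bayes' rule."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# Assumed pre-test probabilities, for illustration only:
young, older = 0.001, 0.50
print(f"PPV at pre-test prob {young}: {ppv(0.90, 0.90, young):.3f}")  # 0.009
print(f"PPV at pre-test prob {older}: {ppv(0.90, 0.90, older):.3f}")  # 0.900
```

Same test, same sensitivity, same specificity; only the pre-test probability differs between the two calls.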
> Here are some quick exercises regarding sensitivity/specificity and PPV/NPV if you want to brush up: 10.3 - Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value | STAT 507

I don't think these are really necessary. I love PSU for free, relatively accurate resources to hand people, but I'm also willing to bet two things: 1) those course materials are designed to be very introductory, and 2) they're not written by someone with a statistics background, and this is really a topic in statistics. Margaret Pepe has written a book (one of many books by statisticians) on the topic, entitled "The Statistical Evaluation of Medical Tests for Classification and Prediction" (or something close to that). This isn't really a topic of epidemiology so much as a statistical one: the properties of NPV, PPV, sensitivity, and specificity are all statistical questions in nature.
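For anyone brushing up anyway, here is a worked 2x2 example in the spirit of those exercises (the counts are made up). The key structural fact: sensitivity/specificity condition on disease status (columns), while PPV/NPV condition on the test result (rows), which is exactly why only the latter pair moves with prevalence.

```python
# Made-up 2x2 counts:      disease+   disease-
tp, fp = 90, 40          # test positive
fn, tn = 10, 360         # test negative

sensitivity = tp / (tp + fn)   # 90/100  = 0.90   (column-wise)
specificity = tn / (tn + fp)   # 360/400 = 0.90   (column-wise)
ppv = tp / (tp + fp)           # 90/130  ~ 0.69   (row-wise)
npv = tn / (tn + fn)           # 360/370 ~ 0.97   (row-wise)

print(sensitivity, specificity, round(ppv, 3), round(npv, 3))
# Scaling the disease-positive column down (lower prevalence) leaves the
# column-wise metrics untouched but drags PPV down and pushes NPV up.
```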
> From what I gather from here (https://www.jclinepi.com/article/S0895-4356(08)00157-1/fulltext), it seems like the underlying cause of sensitivity/specificity changing with prevalence is patient characteristics that complicate administration of the test. For instance, if a 20-year-old with CAD is administered a clinical test and the result is positive, you may be more likely to attribute that to the underlying CAD, versus if you administer the same clinical test to a 70-year-old and it's positive, you may be more likely to chalk it up to comorbidities, etc. So there is no direct link between prevalence and sensitivity/specificity - just that they may vary together due to a shared underlying mechanism. Is that roughly correct?

It's not that test administration is more complicated in any sense (read Harrell's paper; he's usually an explicit and clear communicator, and that paper, when actually read, is very clear about what's going on). Administering an EKG to a 20-year-old is the same as to a 40-year-old, on average and all else constant. What matters is that the probability of seeing the "thing" that makes the test positive may vary with patient characteristics. This is a large, but not the largest or only, element of the argument for using probability-of-disease estimates that account for various patient covariates, rather than arbitrarily (no, most cutpoints in the biomedical literature aren't real) and suboptimally creating cutpoints for tests.
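One way to see the "shared underlying mechanism" point in miniature: if sensitivity genuinely differs between patient subgroups (the subgroup figures below are invented), then the sensitivity measured in any study population is just a case-mix-weighted average, so it drifts with the population studied even though nothing about the test itself changed.

```python
# Invented subgroup sensitivities: the "thing" the test looks for is
# easier to see in severe disease than in mild disease.
SENS_MILD, SENS_SEVERE = 0.70, 0.95

def pooled_sensitivity(frac_severe):
    """Sensitivity observed in a diseased population with the given case mix."""
    return frac_severe * SENS_SEVERE + (1 - frac_severe) * SENS_MILD

print(f"{pooled_sensitivity(0.2):.2f}")  # 0.75 - mostly mild cases (screening)
print(f"{pooled_sensitivity(0.8):.2f}")  # 0.90 - mostly severe cases (referral)
```

A screening population and a referral clinic will often differ in both prevalence and case mix, so sensitivity can appear to "vary with prevalence" without any direct causal link between the two.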
Also, if you're taking the USMLE, I would assume that sensitivity/specificity are invariant across prevalence.