How to use "rule out"


HHCHANG

I was confused about how to use the term "rule out."
In our hospital, the meaning of "rule out" is "suspect."
So we see "rule out carcinoma" (meaning "suspect carcinoma") in the chart, or hear it in discussion.

Is that formal or correct?


Regards,
 
Usually you're ordering some lab/imaging in order to "rule out" your worst-case-scenario differentials, even though they're unlikely (?)

e.g. in a plan for severe HA, you may write "Head CT R/O neoplasm" ???

In this case, you may feel that a neoplasm is unlikely, but there's enough clinical suspicion to support ordering the study in order to confidently dismiss that d/dx (?)
 
Rule out is generally used as Teufulhunden mentions. However, as a radiology resident, I have to let you know that you should actually not use "rule out ____" as your indication. Any study with this as the sole history will not be reimbursed by Medicare or many insurance companies (an unfortunate reality). Instead, you should give a constellation of symptoms (right arm numbness, left facial droop) and your suspected diagnosis (evaluate for stroke or mass). This will also help the radiologist immensely in reading a study and diminish the number of "clinical correlation recommended" reports you will receive. Extremity x-rays with a history of "pain" are not helpful. More specific info is needed: "pain over left distal radius after fall on outstretched hand" is much better.
 
Generally, the term "rule out" refers to a diagnostic test that has a high sensitivity (i.e. it will pick up the disease if present) or a test that is considered the gold standard (i.e. if the test is positive, by definition, the patient has the disease). "Rule out" is not acceptable to put in patients' charts for billing purposes, though; insurance companies will not pay for tests if you say that you are getting them to rule something out. You have to list it as "suspect." It's a silly nomenclature thing that insurance companies use to avoid paying doctors and hospitals. You will find a lot of chart haggling that insurance companies will do in order to avoid reimbursing doctors.
 
Agreed with the posts above. Throw out "r/o" and use "suspect." Learned recently that the term SMA is outdated; now we use BMP.
 
OK, along the same lines...can anyone explain to me what it means to have a "high index of suspicion"?

I always seem to hear it used to mean something like "you have to really suspect something that you have little reason to suspect". Like, you have to have a high index of suspicion to diagnose pancreatic cancer from abdominal pain because you usually wouldn't order the appropriate diagnostic tests given that symptom. So as best I can tell, "to have a high index of suspicion" really means "to make a lucky guess". Which, of course, seems counter to what we're usually taught in medicine, which is to make diagnoses based on evidence and not on hunches.

Does anyone have better insight into the term?
 
Originally posted by Mad Scientist
OK, along the same lines...can anyone explain to me what it means to have a "high index of suspicion"?

High index of suspicion means exactly what it sounds like: you have a high pre-test probability before doing whatever diagnostic test you are going to do. Generally, it doesn't mean that you skip the test you would need to confirm your diagnosis, but it can mean that you treat empirically, particularly if there isn't a good test to make the diagnosis. An example, using the one that you used: a patient with painless jaundice and weight loss. That means you will have a high index of suspicion for pancreatic cancer, and so you will CT the abdomen. If your lab tests are equivocal and the CT is negative, you may choose to re-scan the abdomen or do a more invasive procedure some time later, because your high index of suspicion makes you more likely to disregard negative results of your diagnostic test. A more classic example occurs in ID, where you often treat empirically for organisms that you have a high index of suspicion are present, but no culture confirming that organism, or any organism, is present.
 
Well, that makes sense. But it isn't usually the way I hear the term used...usually it's something like "there are no good signs or symptoms or historical clues to this condition, so you're going to have to have a high index of suspicion to find it." Which seems to be a contradiction...as you suggest, a high index of suspicion should be the result of signs, symptoms, and history. My illustration of abdominal pain/pancreatic CA isn't quite right; it would be more like the situation in the early stages of many cancers, where there is no historical or physical evidence of their existence.

Anyway, I think you're right...I think some people just use the term wrong.
 
Madscientist, I've only ever heard the phrase "high index of suspicion" used the way you describe (and never the way ckent describes -- regional variation?)

I don't think it's contradictory, either. Go back to your example of abdominal pain/pancreatic cancer: Crampy abdominal pain in a 30 year old woman would not ordinarily prompt a workup for pancreatic cancer UNLESS some other factor had raised your levels of suspicion (say, she had three female relatives die of pancreatic cancer in their 40s). The way I think of it is that the clinical picture and preliminary test results or physical exam findings are not highly suggestive of the diagnosis, but some other detail in the history, the circumstances, or your experience with previous patients provokes a more thorough investigation than would otherwise be warranted. It's an intuition that even though you're not looking at the classic picture of an illness, it's still there, and it's only your high index of suspicion (as opposed to concrete evidence) that makes you proceed with the workup. If you had a low index of suspicion, you either wouldn't bother investigating further or you wouldn't have noticed a problem in the first place.

By the way, do a quick google search on the phrase "high index of suspicion". You'll find lots of examples of it similar to the one above, though probably expressed more clearly.
 
Originally posted by grouptherapy
learned recently that the term sma is outdated. now we use bmp.

Well, that's strictly facility dependent. The "SMA" is from sequential multichannel analyzer (which may have been a brand name). The SMA-7 is one, then the SMA-8, then the SMA-17 (aka the "SMAC" - SMA-comprehensive). BMP is one way, the "7" is another, the "chem 7" is another, and the "OP7" is YET another.

Semantics, all. (The difference between the 7 and 8? The calcium.)
 
Whoops, after reading your explanations, I like your definition of "high index of suspicion" better than mine. Your definition is how it is essentially used at my school too, i.e. PE --> any pt with tachycardia and dyspnea, should consider CT with IV contrast. It just means keeping something on your radar screen. My ID attending did always use "high index of suspicion" for certain organisms as part of his decision making for choosing antibiotics, though. I think the key is to have a large differential on your radar screen so that you won't discount any disease process, because classic presentations are rare.
 
Originally posted by ckent
Generally, the term "rule out" is referring to a diagnostic test that has a high sensitivity

Hey, I hate to be an epi geek here, but actually you'd be looking for a test with a high negative predictive value.

I know, I know, you'll hear the sensitivity thing in the "real world" when people really mean negative predictive value, but that doesn't make it true.

An extreme example: you could screen a population for HIV with the "2 legs test" (if they have 2 legs, they have HIV). The test would be very sensitive (but nonspecific and with low predictive values).

All right. I'll stop now. 😉
 
Originally posted by BellKicker
Hey, I hate to be an epi geek here, but actually you'd be looking for a test with a high negative predictive value.

I had to think about this one a while, and re-edit my post several times, but I'm not sure I agree with your assessment. A test with a high sensitivity is used to rule out a disease. NPV = true negatives/(true negatives + false negatives), while sensitivity = true positives/(true positives + false negatives). Take the HIV ELISA: it has a very high sensitivity, making it a good screening test to rule out HIV, particularly in low-risk patients. Any test with a high sensitivity should have a high NPV, since false negatives fall in the denominator of both equations; thus as false negatives rise, both numbers will drop precipitously (even though I guess NPV would fall faster than sensitivity. Was that your point?). Anyways, that's my interpretation; I'm not an epi expert, and I'm always open to being corrected.
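To make the two formulas concrete, here's a quick numeric sketch (the counts are invented for illustration; note that false negatives sit in the denominator of both quantities):

```python
def sensitivity(tp, fn):
    """Fraction of diseased patients the test catches: TP / (TP + FN)."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Fraction of negative tests that are truly disease-free: TN / (TN + FN)."""
    return tn / (tn + fn)

# Hypothetical screen of 1,000 people, 50 of whom have the disease:
tp, fn = 49, 1      # the test misses only 1 of the 50 diseased
tn, fp = 760, 190   # but it also flags 190 healthy people as positive

print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # 0.980
print(f"NPV         = {npv(tn, fn):.3f}")          # 0.999
```

Both numbers are high here because false negatives are rare; drive FN up and both drop together, as described above.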
 
Yeah, but you're forgetting the true negatives when you assess NPV. This number was low in the "two legs test." You could make any test extremely sensitive if you just make it call 100% of the population positive: there would be no false negatives (so sensitivity is perfect), but also no true negatives.

I've always liked this version of sensitivity vs. NPV:

Sensitivity: The chance that someone with the disease will have a positive test.

NPV: The chance that a person is well if the test is negative.

If you had all 4 values (sens, spec, PPV, NPV), you'd always use the predictive values.

The problem with predictive values is that they vary between different test populations. I'll give an example.

Let's say you screen 10,000 blood donors for HIV. In this population, there's 1 person with HIV. The high sensitivity of 96% will pick up that one person with 96% certainty (which is good). Now, the specificity is 92%. That means that 8% of all the non-HIV people will come out positive. That's roughly 800 persons.

What's the PPV here? PPV = true positives/(true positives + false positives) ≈ 1/801, i.e. a ridiculously low number. So in this population a positive ELISA shouldn't make anyone nervous.

I think this is why sensitivity is quoted so often; because it's a constant. But when you do a test and you want a probability that your patient has the disease, you need the PPV.


I'm almost certain the above is true. If anyone disagrees, please say so.
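The prevalence dependence described above is easy to verify directly. A small sketch (sensitivity 96% and specificity 92% come from the donor example; the 20% clinic prevalence is an invented comparison):

```python
def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) for a test applied at a given disease prevalence."""
    tp = sens * prev               # true positives (per person screened)
    fn = (1 - sens) * prev         # false negatives
    tn = spec * (1 - prev)         # true negatives
    fp = (1 - spec) * (1 - prev)   # false positives
    return tp / (tp + fp), tn / (tn + fn)

# Blood-donor screening: prevalence ~1 in 10,000
ppv, npv = predictive_values(0.96, 0.92, 1 / 10_000)
print(f"donors: PPV = {ppv:.4f}, NPV = {npv:.6f}")  # PPV ~0.0012: nearly all positives are false

# The same test in a high-risk population: prevalence 20%
ppv, npv = predictive_values(0.96, 0.92, 0.20)
print(f"clinic: PPV = {ppv:.2f}, NPV = {npv:.4f}")  # PPV = 0.75: same test, very different meaning
```

Sensitivity and specificity stay fixed across both calls; only the predictive values move with prevalence, which is exactly the point about them varying between test populations.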
 
It seems that when somebody says you need a high index of suspicion in the way the OP mentioned, it refers to the attitude of the person ordering the test and how much evidence it takes to convince you to consider a pathology.

And ckent is right - a test with a very high sensitivity is good for ruling something out - after all, if the test catches nearly every diseased sample, the odds that somebody DOES NOT have the disease when testing negative are very high.

A test with very high specificity is very good for ruling something in. If a test is really picky about calling something abnormal, then you can be more confident that a sample called "abnormal" actually is.
 
Originally posted by Adcadet
And ckent is right - a test with a very high sensitivity is good for ruling something out - after all, if the test catches nearly every diseased sample, the odds that somebody DOES NOT have the disease when testing negative are very high.


Yes, but high sensitivity alone isn't enough. What you're really looking for in a test is a high NPV (ie. also a low specificity).

Sorry man, I know the term is used that way in real life but that doesn't make it right.
 
Originally posted by BellKicker
Yes, but high sensitivity alone isn't enough. What you're really looking for in a test is a high NPV (ie. also a low specificity).

Sorry man, I know the term is used that way in real life but that doesn't make it right.

Yes, what you really want is a great NPV for a rule out. But NPV is population-dependent, and thus you would have to calculate the NPV for every test you're considering. I doubt the test manufacturer provides this info on a standardized population. Maybe they do, but I've never seen this info easily available. If I had received a needle stick recently, I'd want my HIV test to have great sensitivity to help me rule HIV infection out (assuming I'm not worried about false positives).

A low specificity is not necessary for a test to have a great NPV. But if you're talking about optimizing an assay specifically for NPV, then sure, you could power it for crap specificity if all you cared about was a rule out.
 
Originally posted by Adcadet
Yes, what you really want is a great NPV for a rule out. But NPV is population-dependent, and thus you would have to calculate the NPV for every test you're considering. I doubt the test manufacturer provides this info on a standardized population. Maybe they do, but I've never seen this info easily available. If I had received a needle stick recently, I'd want my HIV test to have great sensitivity to help me rule HIV infection out (assuming I'm not worried about false positives).

We agree 100%. I think I wrote some of the same things in the posts above. I hear you about HIV and the ELISA test. In that scenario, you wouldn't be worrying about false positives. But in the donor scenario above, the false positives are suddenly something to worry about. Yes, I know, we agree.


Originally posted by Adcadet
A low specificity is not necessary for a test to have a great NPV. But if you're talking about optimizing an assay specifically for NPV, then sure, you could power it for crap specificity if all you cared about was a rule out.

A low specificity and great NPV? Is that possible (these things can be hard to wrap one's mind around)? Obviously, I'd want a very high sensitivity and specificity (and thus very high predictive values).

Oh, and the test has to be cheap and carried out in the office. Kinda like in the old days when people actually believed auscultation of the chest had a high predictive value🙂.
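For what it's worth, a low specificity alongside a great NPV is mathematically possible when sensitivity is high and prevalence is low; a small sketch (all numbers invented):

```python
def npv_at_prevalence(sens, spec, prev):
    """NPV = TN / (TN + FN), expressed via sensitivity, specificity, prevalence."""
    tn = spec * (1 - prev)         # true negatives (per person tested)
    fn = (1 - sens) * prev         # false negatives
    return tn / (tn + fn)

# A sloppy test: 99% sensitive but only 20% specific,
# used where the disease has 1% prevalence:
print(f"NPV = {npv_at_prevalence(0.99, 0.20, 0.01):.4f}")  # 0.9995 -- negatives still trustworthy
```

The test throws off false positives constantly (80% of healthy people flag positive), yet almost never misses disease, so a negative result remains reassuring.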
 
We should be careful about discussing this stuff in the general residency forum. The surgeons and ER docs are going to start making fun of us. 😉
 
I have to concur that all physicians should remove "rule out" from their medical vocabulary, as "rule out" = no money.

A lot of insurance companies are starting to refuse payment for head CTs for "syncope" and "altered mental status"... "transient loss of consciousness" and "confusion" are better.

Just something to keep in mind, a few little words can make a big difference in the bottom line of the hospital.
 