What MDCalc "scores" do you use in the ED?


wonderbread12 (Full Member, joined Nov 2011):
Something I hated about medicine as a med student was all the random "scores" we were expected to know and apply to patients (e.g. the MELD score).

Now I'm on my EM rotation and it's the same story with new scores I've never heard of. What similar scores/scales do you guys use as EM docs? It'd be nice to put together a list so that I (and other students) can pull them out and not look like a deer in headlights when asked about a random method of calculating some likelihood.

 
They aren't a random method of calculating some likelihood; they are the ACTUAL useful scores we use in day-to-day emergency medicine. And I agree with above, PERC and Wells are important to know and understand. I love HEART for chest pain, but again you have to understand what it is giving you and its limitations. I think anion gaps and osm gaps should be understood (esp. for tox). GCS, while old fashioned, still comes up.

CENTOR criteria, NEXUS/Canadian C-spine, and Ottawa Ankle, while not scores, are good rules to know.

CHADS2 comes up occasionally. While I don't have the PSI/PORT score for pneumonia memorized, I do use the calculator occasionally to justify admission. NIHSS must be documented for most stroke center quality metrics, and is useful for tPA decisions. Boston Syncope I don't memorize but do occasionally use.
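Since PERC tops almost every one of these lists, its logic is simple enough to sketch as a checklist. This is a minimal illustration assuming the eight standard published criteria; the function name and signature are my own, not any official API:

```python
# Sketch of the PERC (Pulmonary Embolism Rule-out Criteria) rule.
# The eight criteria are the published ones; the function name and
# argument layout are illustrative only.

def perc_negative(age, heart_rate, o2_sat,
                  unilateral_leg_swelling, hemoptysis,
                  recent_surgery_or_trauma, prior_pe_or_dvt,
                  hormone_use):
    """Return True if ALL eight PERC criteria are met (PERC-negative)."""
    return (age < 50
            and heart_rate < 100
            and o2_sat >= 95
            and not unilateral_leg_swelling
            and not hemoptysis
            and not recent_surgery_or_trauma
            and not prior_pe_or_dvt
            and not hormone_use)

# Example: a 32-year-old with normal vitals and no risk factors
print(perc_negative(32, 88, 98, False, False, False, False, False))  # True
```

As the thread keeps stressing, a PERC-negative result only means anything in patients the clinician has already judged low risk by gestalt.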
 
All of the above, plus:
Alvarado for avoiding peds CT for appy when US not available.

Corrected phenytoin - I can do the math in my head, but use it for teaching residents.
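For anyone curious what "the math" is: the commonly taught correction is the Sheiner-Tozer equation. A sketch, assuming the classic 0.2 albumin coefficient (some sources substitute 0.1 in significant renal failure):

```python
# Sheiner-Tozer correction for phenytoin in hypoalbuminemia (teaching
# sketch; coefficient choices vary by source, as noted in the lead-in).

def corrected_phenytoin(measured_level, albumin, renal_failure=False):
    """Estimate the phenytoin level the patient would have at normal albumin.

    measured_level: total phenytoin (mcg/mL)
    albumin: serum albumin (g/dL)
    """
    coefficient = 0.1 if renal_failure else 0.2
    return measured_level / (coefficient * albumin + 0.1)

# Example: measured 5 mcg/mL with albumin 2.0 g/dL
print(round(corrected_phenytoin(5.0, 2.0), 1))  # 10.0
```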
 
ABCD2 for TIA
San Francisco Syncope Rule
 
All of our residents get something similar - a trifold card that easily fits into a pocket and has all these decision rules. If the place you're rotating at doesn't have something like this, it's a nice way to stand out and sort of impress the faculty. Put one together and make some copies, then hand it out to the other rotators/students and see if the chiefs are interested in distributing it regularly.
 
Above + Glasgow Blatchford score for UGI Bleed.

I'd like to echo Janders that, while I use these scores every day, they are just a tool. If you don't understand how they're supposed to be applied and the limitations of the literature they're based on, then the scores become much less useful...potentially harmful.
 
They aren't a random method of calculating some likelihood; they are the ACTUAL useful scores we use in day-to-day emergency medicine.

Just a poor choice of words, but I realize the value in them... it's just hard to know which are used day-to-day as a newb.

Above + Glasgow Blatchford score for UGI Bleed.

I'd like to echo Janders that, while I use these scores every day, they are just a tool. If you don't understand how they're supposed to be applied and the limitations of the literature they're based on, then the scores become much less useful...potentially harmful.

Definitely agree, just comes with more clinical experience but it's nice to have this small list as a kick start


Thanks to everyone who contributed! Keep 'em coming if you use others; great to hear what others find useful.
 
I've never been a "decision rule" guy. You should learn them and get to a point you're using them intuitively without necessarily computing scores on people constantly.

Although these decision rules can be helpful and you should know them, be aware they usually use strict inclusion and exclusion criteria. They're made and developed in a perfect "research ED world." Any one of your patients may or may not have even been included in any given study used to formulate a specific decision rule.

That's probably the most important thing to know about these rules, and something most people cranking away computing these scores never even stop to think about.

Did they exclude patients above/below a certain age?

Did they exclude patients who were too sick/not sick enough?

Did they exclude admitted patients or discharged patients?

Did they include/exclude intoxicated patients?

Your patient may not fit within the box they drew for their study.

Also, some of them make me laugh, really really laugh. These rules will come and go over the years. They'll be looking at something like PE, cervical spine injury or MI. Then they'll conclude, "98.5% sensitive, only 1.5 (__insert above never-miss diagnosis___) out of 100 _________s were missed."

Then people react like this, "Wow, that's amazing. 98.5% accurate. That's as close to 100% as you get. I'm using this rule. It's validated."

Sure. As long as you're okay with missing 1-2 MIs/PEs/c-spine fractures out of every 100 patients who actually have one.
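For what it's worth, the arithmetic behind a sensitivity figure is worth making explicit: the misses occur among patients who actually have the disease, so the expected number of missed cases scales with prevalence, not just sensitivity. A quick illustration with made-up numbers:

```python
# Back-of-envelope miss arithmetic. Numbers are illustrative only,
# not taken from any specific study in this thread.

def expected_misses(n_patients, prevalence, sensitivity):
    """Expected missed cases among n_patients screened by a rule."""
    return n_patients * prevalence * (1 - sensitivity)

# 1,000 low-risk chest pain patients, 2% of whom truly have disease,
# screened with a 98.5%-sensitive rule:
print(expected_misses(1000, 0.02, 0.985))  # roughly 0.3 missed cases
```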

They're interesting tools, but they don't replace your brain. I've found that if, after your initial evaluation, you haven't immediately dismissed the need for a study out of hand, and you're worried enough to actually take the time to go through the motions of using some decision rule to make your decision for you, then you probably ought to be ordering the test to actually rule out what you're considering, rather than relying on some decision rule developed in EBM-Utopia World to talk you out of a test your gut is apparently telling you you need.

In other words, if you want to use a decision rule to talk yourself into ordering a test, fine. But if you're using it to talk yourself out of ordering one, be careful. They're interesting research tools but they don't cover your gluteus, in the real world, like a gold standard test does. Name one decision rule that holds up in court as an actual gold standard for ruling out any disease.

Do an experiment for yourself. When you read about missed diagnoses, M&Ms and review cases where EPs are getting sued, go back and calculate whether your favorite decision rule would have picked up on the diagnosis the EP missed. You might be surprised what you find.
 
Are either of these validated?

To clarify, I mean to passively aggressively imply that neither of these two scores are sensitive enough in the ED to have any significant bearing on management.

I use them to sell an admit, rather than justify a discharge. Medicine tends to like them
 
Remember, they're clinical decision instruments, not rules. You don't have to follow them, they're there to assist you.

Do an experiment for yourself. When you read about missed diagnoses, M&Ms and review cases where EPs are getting sued, go back and calculate whether your favorite decision rule would have picked up on the diagnosis the EP missed. You might be surprised what you find.

This point is valid, but so is the counterpoint. We are likely to miss 1-2% of everything we look for. Sorry, that's just how it works. When you look at the research Jeff Kline has put out showing that people who use PERC miss fewer PEs than people who use gestalt, it becomes the elephant in the room. People who use PERC also do way more CTs than the gestalt group.
PECARN Low risk head CT and NEXUS are examples of two that have really good sensitivity for the disease they're looking for. Ottawa knee and ankle, while good in design, simply prevent x-rays of joints, which are low risk/low harm/high yield. I teach them, but I don't use them routinely. Not worth the argument.

Really, the only way to never miss anything is to (a) consider it as a diagnosis, and (b) let the patient/family know that you've considered it, and while currently the test/clinical decision instrument says it's unlikely, it doesn't mean it isn't there. Thus, strong return precautions.
 
This point is valid, but so is the counterpoint.
Almost, but not quite. Again, a negative gold-standard diagnostic study offers extremely strong medical-legal protection. A decision rule, "tool," or whatever you want to call it, offers little if any.

Again, what is one decision "rule/tool/instrument" that rises to the level of a gold standard, by any definition of standard of care, anywhere?

Anyone?
 
Shock Truth - Decision rules are designed to get you to order fewer tests, not to help you make diagnoses or detect diseases and injuries.
 
PECARN Low risk head CT and NEXUS are examples of two that have really good sensitivity for the disease they're looking for.
But they still don't rule anything in. You still need a gold standard test. Therefore, they only tell you who (supposedly) doesn't need a test, if you believe the rule.

Ottawa knee and ankle, while good in design, simply prevent x-rays of joints...
Yeeessss. My point exactly.


Really, the only way to never miss anything is ....
Of course, we agree there's no way to "never miss," only different ways of covering your arschnickle in the inevitable event that you're wrong. Gold standards do that. Decision rules don't.
 
But they still don't rule anything in. You still need a gold standard test. Therefore, they only tell you who (supposedly) doesn't need a test, if you believe the rule.
So, you scan every child with low risk head injury? Up to and including sedating the patient? You PE protocol scan the young woman with chest pain and SOB, regardless of whether she's had 40 of them in the last 3 years? If not, then you're using your own personally created CDI, regardless of whether it's in a paper or not. You just haven't codified it, but you're tacitly saying "my way is better than yours." And it certainly hasn't been validated.
 
So, you scan every child with low risk head injury?

No. Find where it said that.

You PE protocol scan the young woman with chest pain and SOB, regardless of whether she's had 40 of them in the last 3 years?
No. Again, find where I said, "Test everyone."

If not, then you're using your own personally created CDI, regardless of whether it's in a paper or not.

No. At that point I'm using clinical judgement, which is exactly what decision rules were invented to replace. In fact, I find it quite interesting, when decision rules throw in a qualifier such as "clinical judgment" in their decision tree. The whole point of a clinical decision rule is to eliminate reliance on clinical judgement or gestalt, then they completely invalidate their own supposed validity by putting that qualifier in there.

...you're tacitly saying "my way is better than yours."
No I'm not. I'm just trying to warn people about over-reliance on clinical decision rules in overruling their own gestalt. That's all.

For example, PECARN:

http://www.mdcalc.com/pecarn-pediatric-head-injury-trauma-algorithm/

Are you going to sit on a 20 month old, with a 5 second loss of consciousness who the mother told the triage nurse "is not acting normally" and willfully miss 9 out of 1,000 clinically important traumatic brain injuries?

PECARN says yes.

Except... PECARN also says "clinical experience" can override PECARN within its own algorithm.

Hell, if I get that far down a clinical decision algorithm and the algorithm shrugs and says (in dumb-guy voice) "Hell, I dunno, ask a frickin' doctor, okay?" then I'm not asking PECARN who I shouldn't be ordering CT scans on.

You see no irony there?
 
1) USA ankle and knee rules: If they come to the ER for knee or ankle pain after trauma, they get an x-ray. On top of sometimes being diagnostic, X-rays are often therapeutic.

2) I've never seen a virus that didn't go away with antibiotics.

3) When a patient comes in for abdominal pain, I can't stress the importance of physical exam. And the best physical exam is a CT scan.
 
No. Find where it said that.
No. Again, find where I said, "Test everyone."
No. At that point I'm using clinical judgement, which is exactly what decision rules were invented to replace. In fact, I find it quite interesting, when decision rules throw in a qualifier such as "clinical judgment" in their decision tree. The whole point of a clinical decision rule is to eliminate reliance on clinical judgement or gestalt, then they completely invalidate their own supposed validity by putting that qualifier in there.
The point of that thought exercise was to prove that you do use your own personal decision instrument. You call it judgement, but it's an instrument you've developed over time based on patients you've seen, bouncebacks you've had, etc.
No I'm not. I'm just trying to warn people about over-reliance on clinical decision rules in overruling their own gestalt.
And I'm warning against using gestalt over a CDI in certain instances. You shouldn't rely overwhelmingly on either one, but you shouldn't ignore them either.
For example, PECARN:
http://www.mdcalc.com/pecarn-pediatric-head-injury-trauma-algorithm/
Are you going to sit on a 20 month old, with a 5 second loss of consciousness who the mother told the triage nurse "is not acting normally" and willfully miss 9 out of 1,000 clinically important traumatic brain injuries?
PECARN says yes.
Except... PECARN also says "clinical experience" can override PECARN within its own algorithm.
Hell, if I get that far down a clinical decision algorithm and the algorithm shrugs and says (in dumb-guy voice) "Hell, I dunno, ask a frickin' doctor, okay?" then I'm not asking PECARN who I shouldn't be ordering CT scans on.
It doesn't say that at all. Are you looking at the same rule? For that patient, "PECARN recommends Observation vs CT; 0.9% risk of clinically important Traumatic Brain Injury." It doesn't tell you to do anything, it gives reasonable recommendations.
Also, if you look at the "clinically important TBIs" in the under 2 age group, there was a surgical incidence of 0.1%, with zero deaths. Sounds pretty reasonable.
Of note
  • PECARN has now been externally validated in 2 separate studies.
    • One trial of 2439 children in 2 North American and Italian centers found PECARN to be 100% sensitive for ruling out ciTBI in both age cohorts.
    • The rates of ciTBI at 0.8% (19/2439) and those requiring neurosurgery 0.08% (2/2439) were similar to the PECARN trial.
    • A second trial at a single US emergency department of 1009 patients under 18 years of age prospectively compared PECARN to two other pediatric head CT decision aids (CHALICE and CATCH) as well as to physician estimate and physician practice.
    • 2% (21/1009) had ciTBI and neurosurgery was needed in 0.4% (4/1009) of this sample.
    • Again PECARN was found to be 100% sensitive for identifying ciTBI.
    • PECARN outperformed both the CHALICE and CATCH decision aids (91% and 84% sensitive for ciTBI, respectively).
So if you want to rail against CDIs, you should pick one that isn't as good as this one. There are plenty out there (San Francisco syncope, etc). Head CTs are not harmless, we know this. So you need to weigh harms vs benefits.
[Attached image: PECARN algorithm diagram]

You see no irony there?
Not at all. Do you follow ACLS? Is it your gold standard? It recommends getting expert consultation, which is, of course, you.
 
Do you follow ACLS? Is it your gold standard? It recommends getting expert consultation, which is, of course, you.

ACLS is not a clinical decision rule. It's a rote treatment protocol. "Gold standard" refers to testing. That's comparing apples to oranges. ACLS is not designed to get people to order fewer tests, which is the inherent bias in decision rules. So it's irrelevant to this discussion. But yes, I've always tried to follow ACLS as best as possible. ACLS does not tell me what tests not to do. ACLS tells me what to do.

The point of that thought exercise was to prove that you do use your own personal decision instrument. You call it judgement, but it's an instrument you've developed over time based on patients you've seen, bouncebacks you've had, etc.

Yes. And the whole point of a decision rule is to tell you your own judgement based on "physician experience" is wrong. Otherwise, you wouldn't need a clinical decision rule. Except when the decision rule itself admits it can't overrule your clinical experience, when in its own pathway, defers to "physician experience." Decision rules are fine for teaching. You just have to know the limitations and biases.

It doesn't say that at all. Are you looking at the same rule? For that patient, "PECARN recommends Observation vs CT; 0.9% risk of clinically important Traumatic Brain Injury."

We are in fact talking about the same rule. It's right there in your post, and below, right lower corner. See "intermediate risk," "shared decision making," and under that where they defer to "physician experience."

[Attached image: screenshot of the PECARN algorithm, intermediate-risk arm]


Again, why would I ever allow a decision rule that relies on my own physician experience to overrule my own physician experience?


Also, if you look at the "clinically important TBIs" in the under 2 age group, there was a surgical incidence of 0.1%, with zero deaths. Sounds pretty reasonable.

It sounds like you're saying what the rule defines as "clinically important TBIs" were actually clinically "unimportant" traumatic brain injuries? Again, I'm skeptical about casting aside my own clinical judgement in favor of a clinical decision rule that doesn't know the difference between, or how to define, what's clinically important or not. Use the rule. Whatever. I'm just pointing out peculiar inconsistencies I tend to see coming from these protocols that claim to know when a patient of mine doesn't need a test, when the authors who wrote them have never examined or taken a history from my patient. Do you take a history from and examine your patients before deciding if they need a CT or not? Assuming yes, then why do you let authors who have never done the same talk you out of a test?

So if you want to rail against CDIs, you should pick one that isn't as good as this one. There are plenty out there (San Francisco syncope, etc).

I didn't pick it. You picked it as an example in your post. I just responded to your example. I'm not "railing against CDIs" and I'm not saying this one doesn't have value or that it wasn't well constructed. I'm just pointing out their inherent flaws and biases. Also, pointing out that there are "plenty" of CDIs out there that aren't good is a peculiar way to defend them.

As a side note, I find it curious that they report 100% sensitivity for their low-risk arm. That is enough right there to give me pause. That tells me nothing except the study wasn't powered enough. Nothing is 100% sensitive. Even if you did a CT scan on every head injury patient, you couldn't get to 100% sensitivity. CT scan, the gold standard in this situation, itself isn't 100% sensitive. It can miss very small early bleeds, non-hemorrhagic axonal shear, or flat-out miss bleeds due to radiologist error, admittedly not very often, but it takes only 1 miss to get below 100%. So don't tell me, based on 25 patients, that your decision rule is 100% sensitive, and that I'm supposed to overrule my pre-existing clinical decision based on your 25-patient group, which is supposedly, and impossibly, 100% sensitive.
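The skepticism about small-sample "100% sensitivity" can be put in numbers. With zero misses observed in n patients, a standard exact one-sided 95% upper confidence bound on the true miss rate is 1 − 0.05^(1/n), which is approximately 3/n (the classic "rule of three"). A small n leaves a wide-open upper bound:

```python
# "Rule of three" sketch: upper confidence bound on the miss rate
# after observing 0 misses in n patients.

def miss_rate_upper_bound(n, alpha=0.05):
    """Exact one-sided (1 - alpha) upper bound on the miss rate,
    given 0 observed misses in n patients."""
    return 1 - alpha ** (1 / n)

for n in (25, 1009, 2439):
    print(n, miss_rate_upper_bound(n))
# n=25 allows a true miss rate up to ~11%; n=2439 only up to ~0.12%.
```

In other words, "100% sensitive in 25 patients" and "100% sensitive in 2,439 patients" are very different claims, which is exactly the point being argued here.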


Head CTs are not harmless, we know this. So you need to weigh harms vs benefits.
I despise the pressure placed on physicians to practice defensive medicine. I'm well aware that CTs have risks and that defensive medicine can be harmful:

http://epmonthly.com/blog/a-nameless-faceless-killer/

That being said, I'm not the one who allowed the medical malpractice system to get out of control. I'm not one of the politicians refusing to rein in the trial lawyers. I'm not someone who has been on a jury rendering plaintiff's verdicts for powerball-lottery sized awards in cases where doctors may not have even committed malpractice. But I have been falsely accused of, and successfully defended against, at least one malpractice suit.

In general, I think well-constructed decision rules do have value. They're good for teaching students and residents how to work up certain patient complaints. When you're building your clinical judgement and experience, and you don't yet have a strong sense of clinical experience to be overridden, a decision rule is helpful. They also help to highlight which factors in the history and exam have the greatest negative predictive value (probably don't need to test) and positive predictive value (probably do need to test). And they're worth reviewing in the sense that we have a responsibility not to order unnecessary tests.

But unfortunately, the only way to definitively determine which tests were necessary and which were not is to look retrospectively at the results of the tests that were done. In other words, you never really know definitively and with certainty whether a given test on a given patient was necessary until you do it and get the result back. If you hadn't thought it might be necessary, you would never have considered ordering it. Prospectively, the only person responsible for a test not ordered is a physician with enough confidence to say it's not necessary.

Since decision rules are constructed under the assumption that physicians over-test, would prefer to test everyone if they could, and need help deciding who not to order tests on (as opposed to who needs tests), they can only do one of two things: either (1) make the physician feel better about a test ordered, or (2) talk a physician out of ordering a test he was inclined to order. I don't think they have the ability to make anyone feel better about a test their gut told them should have been ordered, if the decision rule turns out to be wrong and a diagnosis is missed.

Ultimately, I don't think residents should discard decision rules. But I do think they should be aware, when using them, that they have a tremendous financial bias, no less than that of pharma-funded studies. Here's how it goes:

The largest health care payer in the United States, The Centers for Medicare and Medicaid Services, a component of the Department of Health and Human Services, thinks doctors order too many tests. When tests are ordered, they have to pay for them, and therefore they make less money. Knowing this is a hot-button area ripe for grants and research money, academic physicians, who operate in a "publish or perish" financial environment of their own, take on research projects to make clinical decision rules. The clinical decision rules are then distributed and advertised as justification for physicians to avoid ordering tests they otherwise would have thought necessary if not for the decision rule.

Now, using your example of PECARN, let take a quick look at the article:

http://www.pecarn.org/documents/kuppermann_2009_the-lancet.pdf

On page 1 under funding, who is listed?

"The Emergency Medical Services for Children Programme of the Maternal and Child Health Bureau, and the Maternal and Child Health Bureau Research Programme, Health Resources and Services Administration, US Department of Health and Human Services."

What agency are the Emergency Medical Services for Children Programme of the Maternal and Child Health Bureau and the Maternal and Child Health Bureau Research Programme run by?

You guessed it: The Department of Health and Human Services.

Who else is run by The Department of Health and Human Services?

You guessed it: Medicare and Medicaid services, the largest health care payer in the United States that thinks you and I order too many tests.

Again, I'm not saying a well-constructed decision rule doesn't have academic and teaching value; I think you just have to be aware of the inherent biases, especially when relying on one to cancel a test you initially thought was necessary. Choosing wisely doesn't always = "Choosing Wisely."
 
Birdstrike,

You keep throwing around the term "gold standard" but there are very few actual gold standard results available in the ED.

You lambast decision rules for missing 9 in 1000, but almost all of the tests I can get in the ED have <99.1% sensitivity (I'll ignore the fact that PECARN does not tell me I can't/shouldn't CT someone in the 0.9% TBI group if I want to).

Of course we shouldn't consider CDIs to be absolutely definitive, or even binding. I don't think anyone on this thread is suggesting that.

I don't think you're naive enough to think this, but, for the benefit of impressionable young med students, I want to make it explicit: practicing 0% risk medicine is not just untenable, it's impossible.
 
Birdstrike,

You keep throwing around the term "gold standard" but there are very few actual gold standard results available in the ED.

You lambast decision rules for missing 9 in 1000, but almost all of the tests I can get in the ED have <99.1% sensitivity (I'll ignore the fact that PECARN does not tell me I can't/shouldn't CT someone in the 0.9% TBI group if I want to).

Of course we shouldn't consider CDIs to be absolutely definitive, or even binding. I don't think anyone on this thread is suggesting that.

I don't think you're naive enough to think this, but, for the benefit of impressionable young med students, I want to make it explicit: practicing 0% risk medicine is not just untenable, it's impossible.
For every diagnosis in the ED, there is a "gold standard." Gold standard does not = "perfect" or "100% accurate." It means = the best test available to rule out a given diagnosis at the time. That test may be 99% sensitive, 90% sensitive or 75% sensitive. Also, one does not always have to order "the gold standard" to appropriately handle every patient.

My only point regarding gold standard testing, is that there is no decision rule that has the medical-legal power of a gold standard test. There is no scenario, as far as I know, where having used a clinical decision rule allows one to say, "I did the best test available. There was nothing more that I could have done." I'm aware that perfection is unobtainable. "Protection" however, is attainable.

A decision rule can't offer that. A decision rule, when wrong, can only leave one wondering, "What went wrong? I suppose I should have ordered ___X____ test." That's all I'm saying. We can agree to disagree. It's all good. It's just the opinion of one guy on the Internet. I've gotten as deep into the weeds of gold standards and decision rules as I care to for the moment. I have to go eat a filet mignon now.
 
For every diagnosis in the ED, there is a "gold standard." Gold standard does not = "perfect" or "100% accurate." It means = the best test available to rule out a given diagnosis at the time. That test may be 99% sensitive, 90% sensitive or 75% sensitive. Also, one does not always have to order "the gold standard" to appropriately handle every patient.

My only point regarding gold standard testing, is that there is no decision rule that has the medical-legal power of a gold standard test. There is no scenario, as far as I know, where having used a clinical decision rule allows one to say, "I did the best test available. There was nothing more that I could have done." I'm aware that perfection is unobtainable. "Protection" however, is attainable.

A decision rule can't offer that. A decision rule, when wrong, can only leave one wondering, "What went wrong? I suppose I should have ordered ___X____ test." That's all I'm saying. We can agree to disagree. It's all good. It's just the opinion of one guy on the Internet. I've gotten as deep into the weeds of gold standards and decision rules as I care to for the moment. I have to go eat a filet mignon now.

To each his own, I'm going to eat bone-in ribeye.

😉
 
ABCD2 for TIA: doesn't change a damn thing I do; however, our clinical review people (who decide if a patient is an obs vs. full admit) can make the patient a full admit if they reach a certain score (I don't remember exactly which). Again, doesn't make a damn bit of difference to me - I order the same tests and still don't let the patient go home, but whatever helps make the hospital their money.

CHA2DS2-VASc: to anticoagulate or not anticoagulate new-onset afib in the absence of any obvious bleeding risk. With a score of 0 or 1, I'll leave it up to the inpatient team.
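For reference, the CHA2DS2-VASc arithmetic behind that 0-or-1 threshold is simple to sketch. The point values below are the published ones; the function and its boolean arguments are my own illustration:

```python
# Sketch of the CHA2DS2-VASc score for stroke risk in atrial fibrillation.
# Point values: CHF 1, HTN 1, age >=75 gets 2 (65-74 gets 1), diabetes 1,
# prior stroke/TIA 2, vascular disease 1, female sex 1.

def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if 65 <= age < 75 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

# Example: 70-year-old woman with hypertension
# = 1 (age 65-74) + 1 (female) + 1 (HTN)
print(cha2ds2_vasc(70, True, False, True, False, False, False))  # 3
```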

I think the PNA decision rules are stupid and don't actually provide any clinical utility.
 
They aren't a random method of calculating some likelihood; they are the ACTUAL useful scores we use in day-to-day emergency medicine. And I agree with above, PERC and Wells are important to know and understand. I love HEART for chest pain, but again you have to understand what it is giving you and its limitations. I think anion gaps and osm gaps should be understood (esp. for tox). GCS, while old fashioned, still comes up.

CENTOR criteria, NEXUS/Canadian C-spine, and Ottawa Ankle, while not scores, are good rules to know.

CHADS2 comes up occasionally. While I don't have the PSI/PORT score for pneumonia memorized, I do use the calculator occasionally to justify admission. NIHSS must be documented for most stroke center quality metrics, and is useful for tPA decisions. Boston Syncope I don't memorize but do occasionally use.

CENTOR? C'mon, Janders, you're better than that.
 
Decision rules are useful if they tell you to do what you already wanted to do. Then you can write it in the chart. Otherwise useless. If I think someone has a PE I will CT them even if PERC is negative and Wells is low.
 
Decision rules are useful if they tell you to do what you already wanted to do. Then you can write it in the chart. Otherwise useless. If I think someone has a PE I will CT them even if PERC is negative and Wells is low.

Never mind that first do no harm concept, eh? What's the specificity of CT for PE? What are the consequences of a false positive?
 
Never mind that first do no harm concept, eh? What's the specificity of CT for PE? What are the consequences of a false positive?

Yeah . . . actually I agree with your sentiment. PE is overdiagnosed and overtreated. If someone has a negative CTA for PE I am comfortable with discharging them even with the 10-15% chance they have a PE since I think non-central, non-massive PEs probably shouldn't be treated anyway. And when I have someone with a subsegmental PE I discuss the risks/benefits of anticoagulation with them before deciding whether to admit them or start any anticoagulant.
 
They force the hand of the admitting residents. It's a lot tougher for them to send a patient home when you write in the chart "Risk Class IV, 8.2-9.3% mortality. Hospitalization recommended based on risk."
This is a good use of a decision rule, using it to your and the patients' advantage. You're using it to help admit a patient you think needs to be admitted, as opposed to allowing some decision rule funded by some insurance company to override your gestalt to save them money by having you order fewer tests, admit fewer patients, or the like.
 
CENTOR? C'mon, Janders, you're better than that.

Sweet, I am better than something! w00t!

My most typical use of Centor is to list the criteria to a patient [who has a typical "cold" URI virus, but wants antibiotics] and have them agree with me that they only get +1 for fever [I'm giving them credit for the "99.1" since they usually run 96]. As such, they shouldn't get a strep test NOR antibiotics. It shows they have a viral process, and they need chicken soup and a work note for a day. But I'm not a mean doctor, I'm protecting them from MRSA and C. diff and yeast infections!

See, CENTOR criteria are great 🙂
 
They force the hand of the admitting residents. It's a lot tougher for them to send a patient home when you write in the chart "Risk Class IV, 8.2-9.3% mortality. Hospitalization recommended based on risk."

Precisely how I've taken to using the PNA rules and the TIA scores. MDCALC, cut-and-paste into chart. Makes the resource utilization and inpatient team all happy. I like making people happy.
 