ER metrics hocus pocus



This is the big failure of our profession in general and our specialty specifically. On the one hand, we complain about having to answer to metrics that we believe are not in the interests of our patients and don't correlate to quality of care. And yet we fail to provide an alternate, better way of measuring quality of care. "Trust me, I'm the doctor" isn't going to work anymore. We have to either come up with our own KPIs, or stop complaining that the MBA/MHAs are doing it for us.
 
This is the big failure of our profession in general and our specialty specifically. On the one hand, we complain about having to answer to metrics that we believe are not in the interests of our patients and don't correlate to quality of care. And yet we fail to provide an alternate, better way of measuring quality of care. "Trust me, I'm the doctor" isn't going to work anymore. We have to either come up with our own KPIs, or stop complaining that the MBA/MHAs are doing it for us.

I don't know. Perhaps the issue is so complex that it is difficult to quantify quality, which is exactly why it's called "quality of care" and not "quantity of care." It's very hard to measure quality with any metric.
 
I don't know. Perhaps the issue is so complex that it is difficult to quantify quality, which is exactly why it's called "quality of care" and not "quantity of care." It's very hard to measure quality with any metric.

Agreed, it's a very complex issue. But it seems exactly the sort of thing some high-powered academic department could tackle. I would like to see a set of "Harvard metrics" or "Stanford metrics" or some other alternative to the typical Patient Satisfaction/Door-to-Doc/LWOBS/etc. combination.
 
We've lost on this. Period.

The big issue is not quality. We can propose all the alternative "quality" measurements we like. They will fall on deaf ears. This is all about government non-payment of medical costs, nothing else.

This is exactly why government should not be involved in healthcare. Those of you voting for Hillary are essentially voting for more government control, and more of these insane metrics.

Acknowledging that we've lost means that, cynically, I will continue to do unethical things, and things I find ridiculous (code sepsis, for example), in exchange for maintaining my livelihood. The only options are to go into another field outside of medicine or to move to another country.
 
We've lost on this. Period.

The big issue is not quality. We can propose all the alternative "quality" measurements we like. They will fall on deaf ears. This is all about government non-payment of medical costs, nothing else.

This is exactly why government should not be involved in healthcare. Those of you voting for Hillary are essentially voting for more government control, and more of these insane metrics.

Acknowledging that we've lost means that, cynically, I will continue to do unethical things, and things I find ridiculous (code sepsis, for example), in exchange for maintaining my livelihood. The only options are to go into another field outside of medicine or to move to another country.

Absolutely, they want to add so many hoops and hope you fail so they don't pay.
I wonder if the makers of culture bottles or NS bags are in bed with the govt. Guess they can live with writing worthless protocols, seeing mortality rates rise, then "oops, my bad, let's stop this beta blocker, central line, cultures for pneumonia" thing.

The CEO's bonus is 1/3 of their salary.....all from metrics. Read that in Time magazine 2 yrs ago. It's like $300-400,000.

Squeeze the little guys, force them to join groups to make the "big 3," as they say in business. It's easier for the govt to control 3 groups than many.

You're right, my livelihood supersedes their bull****. Until then it's "click on the pt" to stop the clock, "place orders" to screw the pt.
 
Agreed, it's a very complex issue. But it seems exactly the sort of thing some high-powered academic department could tackle. I would like to see a set of "Harvard metrics" or "Stanford metrics" or some other alternative to the typical Patient Satisfaction/Door-to-Doc/LWOBS/etc. combination.

That's a good idea. It could change the tide if a big project like that were undertaken by a powerhouse university.
 
Saving money in reducing unnecessary tests?




Everything the government has done, including code sepsis, metrics, etc., has led to increased costs: order more tests, use less clinical judgment, hit those time metrics, among others. CMS and the government in general have their bureaucrats working in a bubble and do not foresee the consequences of their actions. They are looking at decreasing reimbursements but fail to see that they are pressuring doctors and hospitals to increase spending so that their own reimbursements aren't touched. They are tripping over dollars to pick up pennies.
 
Saving money in reducing unnecessary tests?



The government always makes us do more unnecessary testing. This is getting worse with "quality measures" and "patient satisfaction" that make us do tons of medically irrelevant, but expensive testing.
 
When I was in residency in New York, we had to push every patient to get an HIV test from the emergency department. There was already evidence that the testing was so low yield that any meaningful diagnosis was lost in the population. The state implemented the program anyway and asking someone with an ankle sprain if he wanted an HIV test was usually an awkward encounter.

Me: would you like an HIV test?
Patient: you think I have HIV?
Me: no, but the state requires that I offer you one.
Patient: when do I get the results back?
Me: it takes about a week, and you'll have to follow up with your primary care clinic to get the results.
Patient: is the test for free?
Me: no, and your insurance may not even pay for it or the follow-up visit. All this would be out of pocket.
Patient: no thanks.

And off I go to the next patient – two minutes wasted because of this silly metric.
 
Everything the government has done, including code sepsis, metrics, etc., has led to increased costs: order more tests, use less clinical judgment, hit those time metrics, among others. CMS and the government in general have their bureaucrats working in a bubble and do not foresee the consequences of their actions. They are looking at decreasing reimbursements but fail to see that they are pressuring doctors and hospitals to increase spending so that their own reimbursements aren't touched. They are tripping over dollars to pick up pennies.

The government always makes us do more unnecessary testing. This is getting worse with "quality measures" and "patient satisfaction" that make us do tons of medically irrelevant, but expensive testing.

Agreed on both counts; government regulation in our field has failed on many fronts. But we don't get to complain about bad government metrics unless we offer some viable private-sector alternative.
 
This is the big failure of our profession in general and our specialty specifically. On the one hand, we complain about having to answer to metrics that we believe are not in the interests of our patients and don't correlate to quality of care. And yet we fail to provide an alternate, better way of measuring quality of care. "Trust me, I'm the doctor" isn't going to work anymore. We have to either come up with our own KPIs, or stop complaining that the MBA/MHAs are doing it for us.

Though I am not at Harvard or Stanford this has been an academic area of focus for me for the last 5 years. I started on this, because, like you, I feel that we need to either put up or shut up.

I'm here to say that creating a meaningful metric is quite a lot of work. However, we managed to create some fair metrics that appeared to actually improve patient care without penalizing good docs (I'm working on the manuscript this summer, which is why I'll ask you to just take my word for this, rather than divulging all the details here). It was so successful that the university's health system administration took notice and wanted to use it as an example for other departments. So, I had several meetings with them, explained the importance of having secondary reviews, monitoring for exclusion criteria, being able to provide case-specific feedback, etc.

The administration took it over from me and in their infinite wisdom decided to fully automate it.

I'm sure you'll all be shocked to learn that they then completely f*cked it up.

In my experience, making such metrics work requires oversight by a physician who understands the work environment being measured. Measuring LA County with a metric that worked for Berkshire Health System, but has not been modified, will not give you meaningful results. Doing this well requires a significant number of physician-hours, which are costly.

Fortunately, for CMS's purposes, it's really quite easy to create a binary blunt bludgeon that 95% of practitioners will fail. Just don't expect there to be any consistency from one quarter to the next in terms of who your top performers are. This variability has the nice side effect of creating learned helplessness in those being measured, which turns recalcitrant physicians into malleable "providers".
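To put rough numbers on that quarter-to-quarter inconsistency, here's a throwaway simulation (invented rates and volumes, nothing to do with any real metric): ten docs who are all equally compliant by construction, roughly 50 eligible cases each per quarter, and a hard 90% pass/fail cutoff.

```python
# Toy simulation (made-up numbers, not any actual metric): every doc truly
# complies 92% of the time, sees ~50 eligible cases a quarter, and "fails"
# the quarter if their observed compliance drops below a 90% cutoff.
import random

random.seed(1)
DOCS = [f"Doc {i}" for i in range(1, 11)]
TRUE_RATE = 0.92          # everyone is equally good by construction
CASES_PER_QUARTER = 50
CUTOFF = 0.90

for quarter in range(1, 5):
    failed = []
    for doc in DOCS:
        hits = sum(random.random() < TRUE_RATE for _ in range(CASES_PER_QUARTER))
        if hits / CASES_PER_QUARTER < CUTOFF:
            failed.append(doc)
    print(f"Q{quarter}: 'failing' docs -> {failed}")
# A different set of docs "fails" every quarter on sampling noise alone.
```

The specific numbers don't matter; the point is that a hard cutoff applied to small denominators mostly measures luck, not practice.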
 
Though I am not at Harvard or Stanford this has been an academic area of focus for me for the last 5 years. I started on this, because, like you, I feel that we need to either put up or shut up.

I'm here to say that creating a meaningful metric is quite a lot of work. However, we managed to create some fair metrics that appeared to actually improve patient care without penalizing good docs (I'm working on the manuscript this summer, which is why I'll ask you to just take my word for this, rather than divulging all the details here). It was so successful that the university's health system administration took notice and wanted to use it as an example for other departments. So, I had several meetings with them, explained the importance of having secondary reviews, monitoring for exclusion criteria, being able to provide case-specific feedback, etc.

The administration took it over from me and in their infinite wisdom decided to fully automate it.

I'm sure you'll all be shocked to learn that they then completely f*cked it up.

In my experience, making such metrics work requires oversight by a physician who understands the work environment being measured. Measuring LA County with a metric that worked for Berkshire Health System, but has not been modified, will not give you meaningful results. Doing this well requires a significant number of physician-hours, which are costly.

Fortunately, for CMS's purposes, it's really quite easy to create a binary blunt bludgeon that 95% of practitioners will fail. Just don't expect there to be any consistency from one quarter to the next in terms of who your top performers are. This variability has the nice side effect of creating learned helplessness in those being measured, which turns recalcitrant physicians into malleable "providers".

1) Glad to hear smart people like you are working on it. I look forward to reading your paper!
2) Saddened but not surprised that admin would screw things up.

I think there must be a technical solution to this problem, though. Yes, it is going to be difficult, and I don't expect a single set of cutoffs and rules to apply everywhere from NYC to rural Arkansas. However, until we come up with viable measures of quality that work everywhere, we are going to be beholden to whatever CMS and the corporate overlords come up with. My hope is that something in the machine learning arena will rescue us by coming up with an all-encompassing algorithm that looks at all your data points and spits out a quality score.
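Purely as a back-of-the-envelope sketch of what that might look like (synthetic data, scikit-learn off the shelf, and it assumes you already have a defensible outcome to train against, which is exactly the part nobody has solved):

```python
# Sketch only: fake per-encounter features and a fake 72-hour return flag.
# The modeling step is trivial; choosing an outcome worth modeling is not.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.integers(1, 6, n),     # triage acuity (ESI 1-5)
    rng.random(n),             # comorbidity burden, scaled 0-1
    rng.random(n),             # imaging utilization, scaled 0-1
])
y = (rng.random(n) < 0.05).astype(int)   # fake 72-hour return outcome

model = GradientBoostingClassifier().fit(X, y)
expected_returns = model.predict_proba(X)[:, 1]
# A doc's "quality score" could then be observed vs. expected returns (an
# O/E ratio) over their own case mix -- but with random labels like these
# the score is meaningless, which is rather the point.
```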
 
Though I am not at Harvard or Stanford this has been an academic area of focus for me for the last 5 years. I started on this, because, like you, I feel that we need to either put up or shut up.

I'm here to say that creating a meaningful metric is quite a lot of work. However, we managed to create some fair metrics that appeared to actually improve patient care without penalizing good docs (I'm working on the manuscript this summer, which is why I'll ask you to just take my word for this, rather than divulging all the details here). It was so successful that the university's health system administration took notice and wanted to use it as an example for other departments. So, I had several meetings with them, explained the importance of having secondary reviews, monitoring for exclusion criteria, being able to provide case-specific feedback, etc.
I'm not in EM but am in a similar boat. My group got tired of being asked to fulfill "quality metrics" that had nothing to do with our specialty or practice location, so we agreed to propose, implement, and study what we considered to be relevant quality metrics. We did so well that not only is our manuscript in preparation, but 2 of the 4 largest private insurers in our area have asked us to write a case study (from the business side of things, not the medicine side) that they intend to present to the regional Medicare admin company to help get more things paid for.

It's been about 3 years of this. Put up or shut up is hard work. Which is why most of us don't do it (and I wouldn't have taken on any part of this if I hadn't been more or less forced to by my boss).

The administration took it over from me and in their infinite wisdom decided to fully automate it.

I'm sure you'll all be shocked to learn that they then completely f*cked it up.
This hasn't happened yet to us. But the C-suite just got word of our results a few weeks ago. I suspect the goat rodeo is headed our way as well.
 
This has become the new "holy grail" for emergency medicine - controlling metrics through improved patient care. Sadly, I'm concerned that if the government were even remotely interested in our input or contribution toward a solution, they would have solicited us (or our professional bodies - ABEM, ABOEM, ACEP, AAEM, etc.) before the metrics were even approved.

If they did already do this, then our own specialty is to blame. Given the increased revenue from LLSA, ConCert, etc., which our specialty has openly embraced in the interest of "defending the value of our specialty," I think they have already drunk the Kool-Aid. Any trip to an ACEP convention filled with ads for scribes can attest to that.

The optimist in me wants to say that academic medicine can fix this, but since they too are paid by CMS, and their entire workforce (residents) is paid solely by CMS, I see little hope that they will be 1) granted the funding to perform an appropriately sampled study that forces CMS to pay more, and 2) heard if they did.
 
You guys can spend all the effort you like. Academic institutions can throw millions of dollars and hundreds of associate professors at the problem. It's not going to matter.

As I've stated before, the government has no interest in "quality." It's all about non-payment, and they want to control those metrics. Too many docs/hospitals hitting the "quality" metrics? They will move the bar as they see fit in order to reduce payments.

The only out that doctors have is to not take medicare/medicaid or private insurance, and to close EDs so that EMTALA doesn't apply.
 
You guys can spend all the effort you like. Academic institutions can throw millions of dollars and hundreds of associate professors at the problem. It's not going to matter.

As I've stated before, the government has no interest in "quality." It's all about non-payment, and they want to control those metrics. Too many docs/hospitals hitting the "quality" metrics? They will move the bar as they see fit in order to reduce payments.

The only out that doctors have is to not take medicare/medicaid or private insurance, and to close EDs so that EMTALA doesn't apply.

Yes, the government may only care about costs, but right now we don't even have a solid argument against that except 'We don't like it!' If we had good measures of quality, it would be at least possible (although far from guaranteed) that we could mount a lobbying campaign to pressure the government into tracking the good metrics rather than the bad. At least it would be a possibility to get something that we want. Right now, we don't even have an 'ask'. If CMS turned around tomorrow and said, 'Fine GeneralVeers, you win. We will switch to whatever quality metrics you propose. What will it be?' you wouldn't even have a real answer to give them.
 
If CMS turned around tomorrow and said, 'Fine GeneralVeers, you win. We will switch to whatever quality metrics you propose. What will it be?' you wouldn't even have a real answer to give them.

Most of us come from scientific backgrounds where there is a love for measurements and calculations, so I understand why everyone feels there is a need for metrics. But in reality, it just doesn't work. Good healthcare is like being a good parent, spouse, or teacher. You know it when you see it, but trying to quantify it is simply wrong.

Imagine if we wanted to make dating for single people easier and better. Let's say I am a politician and that is my platform. I would say that we need to reward quality girlfriends and boyfriends. I would probably even promise to find a way to identify them so they can all find each other, fall in love, live happily ever after, and raise great kids who will cure cancer. I would then provide government funding for dates at luxury resorts and give even more funding to those who perform well on dates. We would then ask single people to record and report return-call times, the number of times they show affection during dates, how often they cook without prompting, the number of orgasms achieved per sexual encounter, etc. Does this seem like a good way to find a life partner? Why would something just as subjective as healthcare be any different?

This was a spur-of-the-moment post, so please forgive my imprecise analogy... but hopefully I made my point.
 
Yes, the government may only care about costs, but right now we don't even have a solid argument against that except 'We don't like it!' If we had good measures of quality, it would be at least possible (although far from guaranteed) that we could mount a lobbying campaign to pressure the government into tracking the good metrics rather than the bad. At least it would be a possibility to get something that we want. Right now, we don't even have an 'ask'. If CMS turned around tomorrow and said, 'Fine GeneralVeers, you win. We will switch to whatever quality metrics you propose. What will it be?' you wouldn't even have a real answer to give them.

http://emergencymedicinecases.com/measuring-quality-the-value-of-health-care-metrics/

"Let’s take the example of emergency physicians Dr. A and Dr. B, who work in the same busy community hospital emergency department. Dr. A sees twice as many patients per hour as Dr. B. In and of itself, this doesn’t necessarily point to Dr. A providing better care than Dr. B (and quite possibly could mean the opposite). But what if Dr. A’s CT/MRI scan utilization, length of stay times, consultation rates, and admission rates are all lower, and yet her return visit rate is half that of Dr. B? Patient outcomes are hard to come by, but on paper this all points to Dr. A providing better and faster care than Dr. B. At face value, Dr. B appears to be slow, insecure, and possibly unsafe. But what if Dr. B is the one who “cleans up” after the other physicians? What if he routinely attends to the elderly and complex patients to provide exemplary and patient-centred care, instead of picking up the young and “easier to treat” patients, as Dr. A does? Which physician would you want taking care of your elderly mother?

This exemplifies the inherent limitations of using data without fully understanding the perspectives of front-line providers. In this case, clinicians would have recognized that major differences in case-mix between these two physicians renders their comparison meaningless. The same concept can be applied to broader analyses of hospital systems at the regional level. Worse patient outcomes in certain hospitals are sometimes due to the lower socio-economic status of the population they serve rather than the actual care they provide. It is not that the measurements are wrong or biased; it is that the interpretation of the data is more complex than a simple spreadsheet analysis.

Front-line providers may not necessarily be the best people to interpret the data. In fact, many become defensive and attack the validity of any metric that portrays them in ways that are inconsistent with their own overly positive self-assessment. However, front-line providers must be part of the discussion. Having everyone at the same table ensures not only that metrics are understood and interpreted in a way that matches reality, but also that metrics are utilized in a productive and patient-centred fashion.

At the end of the day, transparency is good, accountability is necessary, and measurements are here to stay. But to reap the greatest benefits from measuring quality, we must ensure we do it well. We must abandon the “everything can and must be analyzed” mantra and instead strive to measure only what makes sense. The leaders at the helm of hospitals and health-care systems must ensure that clerical and administrative supports are in place to help the front-line providers in the collection of key data. And, importantly, the interpretation of data should remain focused on patient care."
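To put made-up numbers on the Dr. A / Dr. B example (a toy illustration, not data from the article): Dr. B can have a lower return rate in every acuity stratum and still look twice as bad on the unadjusted spreadsheet.

```python
# Invented numbers showing how case mix can flip a comparison: Dr. B is
# better (lower return rate) in BOTH the low- and high-acuity strata, yet
# looks worse overall because more of his patients are high acuity.
cases = {
    # (doctor, acuity): (patients seen, 72-hour returns)
    ("Dr. A", "low"):  (900, 27),   # 3.0% return rate
    ("Dr. B", "low"):  (100, 2),    # 2.0%
    ("Dr. A", "high"): (100, 10),   # 10.0%
    ("Dr. B", "high"): (900, 72),   # 8.0%
}

for doc in ("Dr. A", "Dr. B"):
    seen = sum(n for (d, _), (n, _r) in cases.items() if d == doc)
    returns = sum(r for (d, _), (_n, r) in cases.items() if d == doc)
    print(f"{doc}: crude return rate {returns / seen:.1%}")
# Dr. A: crude return rate 3.7%
# Dr. B: crude return rate 7.4%  <- looks twice as bad despite better care
# in every stratum. Unadjusted spreadsheet comparisons reward cherry-picking.
```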
 
ACEP has initiated the CEDR, which purports to be the "first Emergency Medicine specialty-wide registry at a national level, designed to measure and report healthcare quality and outcomes". [https://www.acep.org/cedr/]

CEDR Specific PQRS Measures Supported

CEDR # | Measure | NQS Domain | Type
CEDR #1 | Emergency Department Utilization of CT for Minor Blunt Head Trauma for Patients Aged 18 Years and Older | Efficiency & Cost Reduction | Process
CEDR #2 | Emergency Department Utilization of CT for Minor Blunt Head Trauma for Patients Aged 2 Through 17 Years | Efficiency & Cost Reduction | Process
CEDR #3 | Coagulation Studies in Patients Presenting with Chest Pain with No Coagulopathy or Bleeding | Efficiency & Cost Reduction | Process
CEDR #4 | Appropriate Emergency Department Utilization of CT for Pulmonary Embolism | Efficiency & Cost Reduction | Process
CEDR #10 | Anti-coagulation for Acute Pulmonary Embolism Patients | Patient Safety | Process
CEDR #11 | Pregnancy Test for Female Abdominal Pain Patients | Patient Safety | Process
CEDR #15 | Tobacco Screening and Cessation Intervention: Percentage of asthma and COPD patients aged 18 years and older who were screened for tobacco use AND who received cessation counseling intervention if identified as a tobacco user | Community-Population Health | Process
CEDR #48 | ED Median Time from ED Arrival to ED Departure for Discharged Pediatric Patients in Moderate Volume EDs (20,000-39,999) | Patient Experience of Care | Outcome
CEDR #49 | ED Median Time from ED Arrival to ED Departure for Discharged Pediatric Patients in Low Volume EDs (19,999 and less) | Patient Experience of Care | Outcome
CEDR #50 | ED Median Time from ED Arrival to ED Departure for Discharged Pediatric Patients in Freestanding EDs | Patient Experience of Care | Outcome


PQRS Measures Supported

PQRS # | Measure | NQS Domain | Type
PQRS #66 | Appropriate Testing for Children with Pharyngitis | Efficiency & Cost Reduction | Process
PQRS #54 | Emergency Medicine: 12-Lead Electrocardiogram (ECG) Performed for Non-Traumatic Chest Pain | Clinical Effectiveness | Process
PQRS #76 | Prevention of Catheter-Related Bloodstream Infections (CRBSI): Central Venous Catheter Insertion Protocol | Patient Safety | Process
PQRS #91 | Acute Otitis Externa (AOE): Topical Therapy | Clinical Effectiveness | Process
PQRS #93 | Acute Otitis Externa (AOE): Systemic Antimicrobial Therapy - Avoidance of Inappropriate Use | Efficiency & Cost Reduction | Process
PQRS #116 | Antibiotic Treatment for Adults with Acute Bronchitis: Avoidance of Inappropriate Use | Efficiency & Cost Reduction | Process
PQRS #187 | Stroke and Stroke Rehabilitation: Thrombolytic Therapy (tPA); also known as hospital STK-4 | Clinical Effectiveness | Process
PQRS #254 | Ultrasound Determination of Pregnancy Location for Pregnant Patients with Abdominal Pain | Clinical Effectiveness | Process
PQRS #255 | Rh Immunoglobulin (Rhogam) for Rh-Negative Pregnant Women at Risk of Fetal Blood Exposure | Clinical Effectiveness | Process
PQRS #317 (Cross-Cutting) | Preventive Care and Screening: Screening for High Blood Pressure and Follow-Up Documented | Community-Population Health | Process
PQRS #326 | Atrial Fibrillation and Atrial Flutter: Chronic Anticoagulation Therapy (aka STK-3) | Clinical Effectiveness | Process
PQRS #415 | ED Utilization of CT for Minor Blunt Head Trauma for Patients Ages 18+ Years | Efficiency & Cost Reduction | Efficiency
PQRS #416 | ED Utilization of CT for Minor Blunt Head Trauma for Patients Ages 2-17 Years | Efficiency & Cost Reduction | Efficiency

To view the detailed measure specifications for PQRS measures, please visit the CMS PQRS Measure Codes website.
 
ACEP has initiated the CEDR, which purports to be the "first Emergency Medicine specialty-wide registry at a national level, designed to measure and report healthcare quality and outcomes". [https://www.acep.org/cedr/]

I agree with most of the above measures (with the exception of tobacco cessation needing to be done by the doctor). I think most of these measures are things all of us do all the time. Pregnancy test on a female with abdominal pain? 100%. EKG on everyone with non-traumatic chest pain? 100% here.

The problem with all of these is that they are good medicine and we do them all pretty close to 100% of the time. How is the government going to move the bar in order to cut reimbursement if we hit all of these? It might get to a ridiculous level where, if you don't do one EKG in a year on one non-traumatic chest pain patient, you fall out and lose money.
 
I agree with most of the above measures (with the exception of tobacco cessation needing to be done by the doctor). I think most of these measures are things all of us do all the time. Pregnancy test on a female with abdominal pain? 100%. EKG on everyone with non-traumatic chest pain? 100% here.

The problem with all of these is that they are good medicine and we do them all pretty close to 100% of the time. How is the government going to move the bar in order to cut reimbursement if we hit all of these? It might get to a ridiculous level where, if you don't do one EKG in a year on one non-traumatic chest pain patient, you fall out and lose money.

Right now, our biggest fight is with the hospital and nursing staff to get patients in and out of the ED as quickly as possible. Nursing and tech staffing is at a minimum, times are not good, and patient satisfaction is so-so. Nursing managers are throwing the physicians under the proverbial bus to cover their own failings. We are not even talking about the CEDR measures above, but about room-to-doctor, doctor-to-disposition, and disposition-to-release intervals. We are expected to pick up patients as soon as they hit the board, yet we are still asked, "Why did it take you 7-20 minutes to go see them?"

Some of the nursing managers are non-clinical and do not understand physician workflows, and clinical issues crop up when you least expect them. They have no idea how to look at and interpret data, either statistically or realistically, yet they send the numbers up the hospital chain anyway.

This is the world we live in. We are being played and controlled by know-nothing managers. We are fighting them, but this is a political issue and we are at their mercy given our SDG.
 
But what if Dr. A’s CT/MRI scan utilization, length of stay times, consultation rates, and admission rates are all lower, and yet her return visit rate is half that of Dr. B? Patient outcomes are hard to come by, but on paper this all points to Dr. A providing better and faster care than Dr. B. At face value, Dr. B appears to be slow, insecure, and possibly unsafe. But what if Dr. B is the one who “cleans up” after the other physicians? What if he routinely attends to the elderly and complex patients to provide exemplary and patient-centred care, instead of picking up the young and “easier to treat” patients, as Dr. A does? Which physician would you want taking care of your elderly mother?
You (and your source) are leaving out the fact that Dr. A's hospital doesn't want a lower admission rate. If they could, they would have you admit everyone who meets the minimum criteria, because they get paid. Same with MRIs and CTs. It's never the hospital that complains; it's radiology, because they have to work harder, etc.

Plenty of hospitals have been sued for trying to achieve an arbitrary "admit rate" metric, because the implication is that not admitting patients makes you a bad doctor. And yet most of us hate working behind the 50% admit rate docs, because that leaves us a lobby to clear out.
 