Arterial Lines?

Not quite. A really solid RCT can be better in certain situations. Meta-analyses are only as good as the data they draw from and the similarity of the methodologies. They provide much higher power than RCTs, but that still doesn't matter if you're pooling weak study designs.
 
The thing to remember about systematic reviews and meta-analyses is garbage in, garbage out: they are only as good as the studies that they look at. Furthermore, they've been shown not to be predictive of an RCT of similar patient size. Add in that there will be a bias in the literature toward positive-result papers, which are so much more satisfying to publish, and I think we should all be healthy skeptics of sexy meta-analysis data.

At the end of the day we all have to actually use our brains and apply all of this. Any doc who can be replaced by an EBM computer algorithm should be.

I await with bated breath Watson, Skynet, and the rise of the machines.
 
Agree--my understanding is that meta-analysis is the top of the pyramid in terms of research quality.
The issue with meta-analyses, as with computer programming, is GIGO: garbage in, garbage out. If you can get access to the primary data from well-designed and well-executed studies to use in your meta-analysis, then yes, you're going to get an excellent analysis of all the available data. If, OTOH, you're doing a meta-analysis of a bunch of non-randomized or uncontrolled studies, you're going to wind up with crap.

There's a good reason why the methods section of every meta-analysis includes a statement like "we found 2982394879865 studies that met initial criteria; however, after excluding studies that did not meet our strict criteria of 'written in English and more than 6 patients included,' we analyzed 3 studies: one retrospective analysis, one single-institution, single-arm Phase II study, and one randomized Phase III study that was closed early after 17 years and 26 patients accrued."

EDIT: McNinja and jdh type faster than I do.
 

Very true. On the flip side, a collection of good studies in a meta-analysis > the individual studies alone.

One of the problems with the early central line literature was that femoral lines placed before standard precautions were instituted got included in meta-analyses from later years, when IJs had started to be done with full gowning/draping.

Either way, just because something is a meta-analysis doesn't mean that the initial studies included were necessarily garbage.
 
Or necessarily good. And this is actually my biggest issue with meta-analyses in general. We have to trust that the biases of both the original authors (all hojiggity million of them) and the meta-analysis authors are minimal and don't affect the results. In an RCT, you only have to deal with the biases of 1 set of authors and they're pretty obvious.
 

Couldn't agree more with the first statement, because it's difficult to sort out the bias in a meta-analysis. However, I'd argue that there's a lot of hidden bias in any RCT (reference the tPA thread), so it may not be so obvious.

Overall, I think this should improve with prospective registration of studies on sites like clinicaltrials.gov, so that meta-analyses can take into account negative unpublished studies, but there will always be people who try to game the system. Additionally, I wonder whether the onerous reporting requirements of a government bureaucracy will do more to harm research than help it by making the threshold to participate so difficult.
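A toy simulation (all numbers invented, not drawn from any study in this thread) shows why registries that capture negative trials matter: if the true effect is zero but only "significant wins" get written up, the pooled published literature drifts positive while the full set of trials run correctly averages to nothing.

```python
# Invented-numbers sketch of publication bias. The true treatment effect is ZERO.
import random
import statistics

random.seed(7)

n_trials, n_per_arm = 500, 30
all_effects, published = [], []
for _ in range(n_trials):
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    treated = [random.gauss(0, 1) for _ in range(n_per_arm)]  # no real benefit
    diff = statistics.fmean(treated) - statistics.fmean(control)
    se = (2 / n_per_arm) ** 0.5  # approximate SE of the mean difference
    all_effects.append(diff)
    if diff / se > 1.96:  # only "significant wins" make it into print
        published.append(diff)

print(f"mean effect, all {n_trials} trials run:  {statistics.fmean(all_effects):+.3f}")
print(f"mean effect, published trials only: {statistics.fmean(published):+.3f}")
```

A meta-analysis restricted to the "published" list here would confidently report a benefit that does not exist, which is exactly the failure mode prospective registration is meant to catch.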
 
We agree about 98% but I think the biases of authors are quite easy to discern. They're studying "crappy standard of care" vs. "Magic Disease Curing Beans".

When MDCB fail miserably, I trust the study. This is rare.

When MDCB succeed incredibly, I trust the study. This is even more rare.

When MDCB show a statistically significant trend toward success...somebody got paid and nobody but MDCB Inc. will benefit.

I desperately hope for publication of negative studies.
 

What percentage of "research" do you actually trust nowadays, seriously?
 
As always, birdstrike brings good perspective.

Not to get too far from the topic of A-lines, but I'm still uncertain about this skepticism over meta-analyses.

My understanding is that any form of research, including RCTs, can be flawed and must be evaluated individually. In general, though, RCTs are excellent when you're judging by the type of research.

Now, if a topic has high-quality RCTs and a meta-analysis is performed, the first step is to eliminate inadequate studies...so the garbage gets taken to the curb. Then you pool the highest-quality evidence, so the outcome should also be excellent when the type is evaluated as a whole (again, an individual meta-analysis may be weak).

As for it being the weakest form...it's stronger than a single observational cohort study or a case report/series, and it has the potential to be truly great at the leading edge of meta-analyses (of course, just my thoughts).
 
We agree about 98% but I think the biases of authors are quite easy to discern. They're studying "crappy standard of care" vs. "Magic Disease Curing Beans".

When MDCB fail miserably, I trust the study. This is rare.

When MDCB succeed incredibly, I trust the study. This is even more rare.

When MDCB show a statistically significant trend toward success...somebody got paid and nobody but MDCB Inc. will benefit.

I desperately hope for publication of negative studies.

Where might I find these beans you speak of???!

Can you smoke them??
 
What percentage of "research" do you actually trust nowadays, seriously?

I trust it more if I have met and like any of the authors. Quite scientific of me. I recently realized I am very anti-magic bean and am more likely to believe someone who says "the recommended treatment doesn't work." I tend to scoff and stop paying attention when a lecturer talks about a wonderful, expensive new drug that shows a barely significant benefit in an x-thousand person study.
 
I think it's fairly ironic that the pushing of EBM, which I'm sure was with the best of intentions (we want what we do to actually make a difference rather than be as good as chance on a whim), has made many of us that much more cynical about it all. And at the end of the day, I think there are really very few "big" things that have been handed down to us by EBM that have made that big of a difference.
 
Boston is incorrect about meta-analysis being the lowest form of evidence (I'm looking at you, case reports). But RCTs are still the gold standard, and a large meta-analysis is not the same grade of evidence as a large RCT. Meta-analysis has problems that can lead to confounding, such as heterogeneity (where dissimilar studies are inappropriately combined), that have to be addressed. And it's still vulnerable to garbage in, garbage out. A well-done meta-analysis will address these issues, but usually at the cost of having to say that there's insufficient evidence to determine the answer.
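As a rough sketch of how heterogeneity gets flagged (the effect sizes and standard errors below are entirely made up), Cochran's Q and the I² statistic quantify how much the studies disagree beyond what chance alone would produce:

```python
# Made-up effect estimates and standard errors (think log odds ratios).
studies = [
    (0.10, 0.08),
    (0.15, 0.10),
    (0.90, 0.12),  # dissimilar outlier: different population or technique?
]

# Fixed-effect, inverse-variance-weighted pooled estimate
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q and I^2: disagreement beyond chance
q = sum(w * (eff - pooled) ** 2 for (eff, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100

print(f"pooled effect {pooled:.2f}, Q = {q:.1f}, I^2 = {i_squared:.0f}%")
```

The outlier drives I² above 90% here, the numeric tell that these studies probably shouldn't be averaged together in the first place, no matter how tidy the pooled point estimate looks.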

Also, in terms of CVCs, there are huge systematic differences in the way we put lines in now vs. when the papers showing increased risk of infection for groin lines were written (full sterile barriers, U/S guidance, antibiotic disc/dressing). So while I think it's important not to blindly accept that one negative study overturns a large body of positive studies, the intervention being measured today might as well be a different procedure from the one that was initially studied.
 

I can get on board with this appraisal
 
I think it's fairly ironic that the pushing of EBM, which I'm sure was with the best of intentions (we want what we do to actually make a difference rather than be as good as chance on a whim), has made many of us that much more cynical about it all. And at the end of the day, I think there are really very few "big" things that have been handed down to us by EBM that have made that big of a difference.

Agree. And the whole concept of EBM is often abused by many. The insistence that everything done by any provider, ever, must be supported by EBM is fraudulent on its face. Every clinical scenario is different. Every patient is different. Wide-open populations are often radically different from the narrowly selected study populations. Even routine-seeming clinical scenarios can be much more complex than the simple clinical questions asked by a clinical trial.

Yet your overall clinical experience, amassed by seeing tens of thousands of patients, can be immediately shot down by some nitwit behind a desk, or someone with zero clinical experience, who wants to pull the "EBM trump card," as if there's an abstract to answer every clinical decision. Look at how many subjects have half the studies saying one thing and half saying the exact opposite: tPA, steroids in spinal cord injury, cholesterol reduction for primary ACS prevention...the list goes on and on.

With the standard 0.05 p-value cutoff, disagreeing results should be very uncommon: specifically 5%, or 1 in 20. You should have to run the identical study 20 times before you find one that randomly disagrees by chance. Yet "EBM," as you call it, is all over the map. Studies disagree more often than not on many subjects.
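The 1-in-20 figure is easy to check with a quick simulation, assuming a two-sided test at alpha = 0.05 on normally distributed data (a generic sketch, not tied to any study in this thread):

```python
# Generic simulation: many RCTs of a treatment with NO real effect,
# analyzed with Welch's t statistic at a two-sided ~0.05 threshold.
import random
import statistics

random.seed(42)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / (va / len(a) + vb / len(b)) ** 0.5

n_sims, n_per_arm = 2000, 100
false_positives = 0
for _ in range(n_sims):
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    treated = [random.gauss(0, 1) for _ in range(n_per_arm)]  # identical to control
    if abs(welch_t(control, treated)) > 1.96:  # ~p < 0.05 at this sample size
        false_positives += 1

rate = false_positives / n_sims
print(f"false-positive rate: {rate:.3f}")  # hovers around 0.05, i.e. 1 in 20
```

So when half the literature on a topic contradicts the other half, chance alone at p < 0.05 can't explain it; something else (bias, heterogeneity, selective publication) is doing the work.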

"EBM" is only as perfect as the imperfect people, with imperfect skills or motivations, who click the "publish" button.

Yet that doesn't stop the government, insurance companies, some non-clinician, or someone with zero clinical experience or judgment from pulling the EBM card. Reimbursement is now frequently tied to "EBM." The vast majority of clinical decisions we make can't immediately be justified by a PubMed search. Try it if you disagree: find an article for every test or treatment you perform on a given day that exactly justifies your clinical decision without significant variation.

"EBM" can be used almost as an ad hominem attack, where someone who just wants to shut someone else down says, "Show me the evidence." Then you're shot down if you allow it.

Guiding what we do with sensible evidence is smart. But don't pray at the altar of EBM either. Much (if not most) of it is drug-company funded, or pushed by people with agendas, hidden or otherwise ("publish or perish"). Realize that "EBM" can be, and is, sometimes used as a weapon in today's world.


 
I don't quote Donald Rumsfeld often, but here's a good time for it. "Absence of evidence is not evidence of absence."

When there is evidence apropos our clinical scenario, we should know and apply it. And when an expensive or invasive therapy has been tested over and over again, yet no benefit has been found, we should ditch it. But when there is no evidence to answer a clinical question, the appropriate response is to apply our experience, knowledge of pathophysiology and judgement, not to just shrug our shoulders.
 
Boston is incorrect about meta-analysis being the lowest form of evidence (I'm looking at you, case reports). But RCTs are still the gold standard, and a large meta-analysis is not the same grade of evidence as a large RCT. Meta-analysis has problems that can lead to confounding, such as heterogeneity (where dissimilar studies are inappropriately combined), that have to be addressed. And it's still vulnerable to garbage in, garbage out. A well-done meta-analysis will address these issues, but usually at the cost of having to say that there's insufficient evidence to determine the answer.

Also, in terms of CVCs, there are huge systematic differences in the way we put lines in now vs. when the papers showing increased risk of infection for groin lines were written (full sterile barriers, U/S guidance, antibiotic disc/dressing). So while I think it's important not to blindly accept that one negative study overturns a large body of positive studies, the intervention being measured today might as well be a different procedure from the one that was initially studied.

I am definitely in agreement with the latter part of your statement; the procedure today is done significantly differently than in the past.
 
I'm sorry, meta-analysis is the weakest form of evidence?

I'm not a research-heavy guy, so I'm probably the only one who isn't following this logic.

If there is a meta-analysis on aspirin for chest pain, it will likely have evaluated the fifteen trials you mention, and yes, I would give great pause to the action.

My understanding is that a meta-analysis is essentially the best form of literature, because if you have lots of high-quality RCTs on the subject, you get an even better meta-analysis. If you have a set of fifteen poorly done trials, you get an at-least-as-good, and most likely better, meta-analysis when you combine them.

Am I way off base?
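For what it's worth, the pooling intuition above holds up in a toy simulation (invented numbers): fifteen trials, each too small to individually detect a modest real effect, combined by inverse-variance weighting, detect it clearly.

```python
# Invented numbers: fifteen small trials of a treatment with a real but
# modest effect (0.3 SD). Most are individually "negative" (underpowered).
import random
import statistics

random.seed(3)

true_effect, n_per_arm = 0.3, 40
trials = []
for _ in range(15):
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
    diff = statistics.fmean(treated) - statistics.fmean(control)
    se = (2 / n_per_arm) ** 0.5  # approximate SE of the mean difference
    trials.append((diff, se))

won_alone = sum(abs(d) / s > 1.96 for d, s in trials)  # significant by itself

# Fixed-effect inverse-variance pooling of all fifteen
weights = [1 / s ** 2 for _, s in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"significant individually: {won_alone}/15")
print(f"pooled: {pooled:.2f} +/- {pooled_se:.2f}  (z = {pooled / pooled_se:.1f})")
```

Each trial here has only roughly 25-30% power, yet the pooled z lands well past 1.96. That is the honest upside of meta-analysis, and it only holds when the fifteen inputs are sound and similar.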


As for A-lines, I would add that in some of the less efficient throughput locations I've worked, we used A-lines regularly on our boarded critical care patients. So I think LOS in the ED is a factor in their utility.

You should take a look at the meta-analysis data on etomidate.

Meta-analyses have many flaws. It is true that a well-constructed meta-analysis can be a very valuable tool, but unfortunately many of them are drawn from **** data and are thus as unimpressive as the individual RCTs.

Edit: I'm late to the party, echoing JDH, gutonc and mcninja
 
"There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know." -Donald Rumsfeld
 

I would never venture to say any one type of study is ALWAYS perfect, but compared to case reports, case series, retrospective trials, and observational cohort studies, in general a meta-analysis offers me more confidence in my practice. There are poorly done RCTs where the inclusion criteria may cause bias, or the patient volumes are too low, or the study asks a question no one cares about…and there are meta-analyses done poorly, I'm in total agreement, but they are far from the weakest form of evidence, in my humble (albeit not research-fellowship-trained, not professor of EM, etc.) opinion.
 