Poor studies drive me nuts

Nope. Most just want to pad their resumes. Otherwise they would just recognize that they suck at it and stop.

This entire research hysteria would go away the second we would stop promoting people based on it. In the end, we are doctors, not researchers; our measure of success should be our outcomes. If in academia, it should be the quality of our lectures and bedside teaching. (Teachers are not evaluated based on what they publish.) But, hey, when the people running the show have padded resumes, don't teach, and are clinically unimpressive...

So true. I am not a researcher. If I wanted to do that stuff, or were good at it, I would have gotten a PhD. And I hate reading studies that aren't meta-analyses; I don't understand all that statistical mumbo jumbo, so I just read the abstract, methods, and outcomes. Unless it's a meta-analysis.

I'm gonna need to do a study in fellowship, and I'm dreading it.

All the statistical mumbo-jumbo is based on various hypotheses, which the researchers should respect to the letter. One mistake and the conclusion is false. And since most "researchers" have no clue about math and statistics in the first place...

The entire concept of meta-analysis is worthless to anybody who understands even a lick of math. When one adds together a number of studies, the likelihood of error skyrockets. There is a huge chance that all these meta-analyses are meaningless, unless confirmed by a subsequent RCT.

And let's not mention the irrelevance of p-values in many studies.
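Not from the thread, just to make the "respect the hypotheses" point concrete: here is a minimal sketch, with made-up group sizes and SDs, of what can happen when one assumption of a standard test is violated. The pooled-variance Student's t-test assumes equal variances; if the smaller group has the much larger variance, the nominal 5% false-positive rate gets badly inflated.

```python
# Toy simulation (illustrative only; group sizes and SDs are made up):
# type I error of the pooled-variance Student t-test when its equal-variance
# assumption is violated. Both groups share the same true mean, so every
# "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, alpha = 20_000, 0.05
false_positives = 0
for _ in range(n_sims):
    small_noisy = rng.normal(0, 10, size=5)   # small group, large variance
    large_quiet = rng.normal(0, 1, size=50)   # large group, small variance
    _, p = stats.ttest_ind(small_noisy, large_quiet, equal_var=True)
    false_positives += p < alpha

print(f"Empirical type I error: {false_positives / n_sims:.2f} (nominal {alpha})")
# Runs well above 0.05 with this setup; Welch's version (equal_var=False)
# stays close to the nominal level.
```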
 
Well, clearly you are way more educated about this stuff than I.
 
All these statistical methods are based on mathematical theorems (IF this THEN that). So, as in math, if the hypotheses are not respected (to the letter), the conclusions can be false.

The more complicated a method and its interpretation (and most are), the higher the likelihood of a mistake and of a false conclusion, especially in the hands of non-mathematicians. This explains why we have all these fantastic studies whose outcomes cannot be reproduced in clinical practice.

And let's not mention the ethical... laxity, unacceptably frequent in the developing world and not only there:

[embedded video]
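A back-of-the-envelope way to see why complexity (or pooling many studies) multiplies the chances of error, with numbers invented purely for illustration: if each assumption or pooled study independently has some probability p of being violated or flawed, the chance that at least one problem taints the final conclusion is 1 - (1 - p)^n, which climbs fast with n.

```python
# Illustrative only: chance that at least one of n assumptions/studies is
# flawed, assuming each independently has flaw probability p.
# (Independence and p = 0.10 are assumptions made for the sake of the example.)
def prob_at_least_one_flaw(n: int, p: float = 0.10) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"n = {n:>2} -> P(at least one flaw) = {prob_at_least_one_flaw(n):.2f}")
# n =  1 -> 0.10, n =  5 -> 0.41, n = 10 -> 0.65, n = 20 -> 0.88
```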
The problem arises when we guard what we were taught in residency or fellowship as absolute truth, and treat anything else as blasphemy. That is the currently prevailing sentiment in most of academia; FFP and I both see it, and that is why we are critical.
Absolutely true.
 
How does a pre-op dose of metoprolol give you fewer MIs at one year but more strokes? Physiologically, I'm lost.

Physiologically, a lower HR improves the myocardial oxygen supply/demand ratio, so it probably decreases MI. A lower HR also causes more stasis, which increases clot formation and thus stroke.
 
Good post, FFP. Thank you. There's been lots of debate recently between ED and Neurology over tPA data. There are threads in both the ED and Sociopolitical forums about a recent NYT article, which was complete crap. Anyway, I'm pretty impressed with the vigor with which some of the ED guys (including the PulmCrit blog author) tease apart reportedly 'good data'.
 
The PulmCrit guy is an intensivist in Burlington, VT. The EmCrit blog (of which PulmCrit is a part) is owned by an EM physician from Stony Brook, NY.
 
Fantastic video, FFP. Especially at about the 15-minute mark. It outlines how these journals, editors, and researchers all team up for financial reasons, all at the peril of medicine and patient care.
 
Septic cardiomyopathy is a bull**** diagnosis.
Pathophysiology, echocardiographic evaluation, biomarker findings, and prognostic implications of septic cardiomyopathy: a review of the literature | Critical Care | Full Text
