People are allowed to make all sorts of choices that could potentially cause them harm. In fact, most things in life have the potential to cause harm, so it is easy to make this generic statement.
Industry X is a field that needs to be regulated so that people can be protected from harm.
Do you believe that the government should regulate all activities that may cause people harm?
I don't believe that the government should regulate all activities that may cause people harm. I do believe that someone should regulate activities that have the potential to cause harm to others (not yourself). I don't care if you choose to punch yourself in the face all day. I do care if you decide to punch someone else in the face. That's when I want someone stepping in.
First of all, there is a fairly substantial body of literature that compares NP vs MD care. I understand that many of the studies may have flaws; however, dismissing the entire body as a whole is a bit disingenuous. Have you read all of the papers? If the flaws are really as glaring as you make them out to be, how did the studies ever make it into journals like JAMA? You act as though these are all op-ed pieces in a small-town newspaper. I'm hoping you can provide the readers of this thread with at least a meta-analysis or a link to an objective analysis showing why the 20+ papers are completely irrelevant to the argument.
I actually have read the majority of those studies. The question is, have you? Many nursing midlevels cite studies without having read anything further than the abstract. Unfortunately, the abstract doesn't tell you much about methodology.
You also realize that JAMA isn't really that great of a journal, right? It seems to publish anything and everything to get some splash coverage. I'm guessing you're referring to the Mundinger study that nursing midlevels seem to think is the greatest thing on earth:
http://www.ncbi.nlm.nih.gov/pubmed/10632281?dopt=Abstract
One of their main outcome measures was patient satisfaction surveys. Like I mentioned previously, satisfaction surveys are a bad way to assess medical competency. Just because people are satisfied with me doesn't mean that I provided good medical care.

Also, why did they measure diastolic blood pressure? That makes absolutely no sense. They took a value that is pretty much meaningless and found statistical significance for it. Clearly that's the mark of a rigorous scientific study. The fact that they're finding statistical significance in a useless marker does NOT imply equal outcomes.

The worst part of the "study" is that it lasted only 6 months! I'm not even trained in medicine and, even to me, 6 months is far too short a time period to see anything meaningful (not that I'm saying the rest of the study design was adequate). It seems to me that Mundinger intentionally designed the study in this manner because it's the only way for her to show "equivalent" outcomes. That's just bad science.
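To make that last point concrete, here's a rough sketch with invented numbers (not data from the Mundinger trial) of why "no statistically significant difference" in an outcome is not the same thing as demonstrated equivalence. Showing equivalence requires a pre-specified clinical margin and two one-sided tests (TOST); the margin and summary figures below are assumptions purely for illustration.

```python
# A rough sketch with invented numbers (NOT data from the Mundinger trial):
# why "no statistically significant difference" is not the same thing as
# demonstrated equivalence. Equivalence needs a pre-specified clinical margin
# and two one-sided tests (TOST). The margin and summary stats are assumptions.
import math
from scipy import stats

# Hypothetical summary statistics for one outcome in two trial arms.
n1, mean1, sd1 = 100, 140.0, 12.0   # arm A
n2, mean2, sd2 = 100, 143.0, 12.0   # arm B
margin = 2.0                        # assumed clinically acceptable difference

diff = mean2 - mean1
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
df = n1 + n2 - 2

# Conventional two-sided t-test: p ~ 0.08, so the naive reading is "no difference".
t_stat = diff / se
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)
print(f"two-sided p = {p_two_sided:.3f}")

# TOST: BOTH one-sided tests must reject before equivalence can be claimed.
p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
p_tost = max(p_lower, p_upper)
print(f"TOST p = {p_tost:.3f}")                  # ~0.72: equivalence NOT shown
```

With these made-up numbers the conventional test comes back "not significant" (p ≈ 0.08), yet the TOST p-value is around 0.72, so equivalence within the chosen margin is nowhere near demonstrated. A short trial reporting surrogate markers is simply not an equivalence trial.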
This study is actually a perfect example of how NOT to do a study. It's a pretty poorly designed study, and the fact that you posted it thinking it proved equal outcomes shows that you don't understand how studies are designed or interpreted. I recommend that you learn to do this rather than depend on other people to tell you what the results of a study are.
Even though they say in the paper that they will come back and look in 2 years to check for equivalency, they never did. Look through all the issues of JAMA; they never get back to it. Instead, they published in the high-impact-factor, widely read, highly regarded "Medical Care Research and Review" (http://mcr.sagepub.com/cgi/content/abstract/61/3/332). They basically didn't have enough follow-up to publish a meaningful study.
How about this "wonderful" meta-analysis:
http://www.bmj.com/content/324/7341/819.full
The paper itself mentions that a lot of the included studies were of poor quality. Once again, it looks at patient satisfaction (a useless measure). Not only that, the study also says:
"None of the studies in our review was adequately powered to detect rare but serious adverse outcomes. Since one important function of primary care is to detect potentially serious illness at an early stage, a large study with adequate length of follow up is now justified. "
Since a meta-analysis is only as strong as the studies it looks at, I'm going to have to say that this is a pretty useless meta-analysis.
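To put that "adequately powered" caveat into rough numbers, here is a back-of-the-envelope sample-size calculation. The adverse-event rates below are invented purely for illustration (they are not taken from any of the reviewed trials); the point is just the order of magnitude a trial would need to detect rare but serious outcomes.

```python
# Back-of-the-envelope sketch of the power problem the BMJ authors flag.
# Assumed figures: a serious adverse outcome in 0.1% of patients under one
# model of care vs 0.2% under the other; both rates are invented for
# illustration. Standard two-proportion sample-size formula.
from scipy.stats import norm

p1, p2 = 0.001, 0.002          # hypothetical adverse-event rates per arm
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

n_per_arm = ((z_alpha + z_beta) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / (p1 - p2) ** 2)
print(f"~{n_per_arm:,.0f} patients per arm")   # roughly 23,500 per arm
```

Even detecting a doubling of a 1-in-1,000 event rate with 80% power takes roughly 23,500 patients per arm, about 47,000 in total, which is exactly why the authors themselves concede that none of the included studies could pick up rare but serious harm.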
So yeah, I'm going to stand by my statement that the "fairly substantial body of literature that compares NP vs MD care" is a pretty crappy body of literature. It's bad enough to make me question the competence of these "researchers." Either they don't have much of an understanding of statistics and experimental design, or they're purposefully trying to mislead others. Either way, that's bad.
Again, you are missing the point of freedom of choice. Choice means that I'm allowed to make up my own mind about what is best for me, for better or worse. I can choose to eat unhealthily and not exercise if I want, even though hundreds of trials and studies have shown it will have a large negative effect on my health.
Bad example there. You can choose to eat unhealthily if you want because you're only hurting yourself, not others. However, when you start practicing voodoo medicine, you put others at risk. There's a big difference. Choose to harm yourself all you want. But the moment you start to knowingly put others at risk (due to lack of training), I think someone needs to step in to regulate.