Difference between an NP and PA

But really, if you are going to use the >>>>>>'s, you should see it the way physicians might see it. Under the best of circumstances for you guys, doctors look at it as MD/DO >>>>>>>>>>>PA>NP. But most of them see it as
MD>>>>>>>>>>>>>>>>>>>>>PA+NP.

Boatswain is not worth speaking to. I'll refer you to a nearby thread where a newcomer asked about transitioning from paramedic to RN and then NP. He has none of those credentials, yet he still decided to chime in. He's not worth speaking with.

As far as the research question goes, all the reframing in the world won't make him correct. Meta-analyses are often built from RCTs, except there are dozens of them instead of one.

I just don't see how interesting a thread is when all anyone does is pile on, and it... never... is... any... different. It's always a turf war on a small, insignificant scale. I go on allnurses and run into the folks who drink the nursing Kool-Aid, and I'm not one of them.
 
That's a fellowship. That's not an NP degree. There is no such thing as an NP derm program. That "derm NP" will not have a different license or credential.
You missed my point, which is: why is there a need for NPs in fields like derm at all? Is the role of NPs (and PAs) to fill a need (e.g., primary care, fast-track EM), as these NP/DNP programs often tout? To become another way into "lucrative" fields like derm and RN->CRNA? To assist and work alongside physicians? Or to compete with physicians, with the future aspiration of complete autonomy and being called doctor in the same damn hospital setting as physicians? Enough blurring the lines.
 
You missed my point, which is: why is there a need for NPs in fields like derm at all? Is the role of NPs (and PAs) to fill a need (e.g., primary care, fast-track EM), as these NP/DNP programs often tout? To become another way into "lucrative" fields like derm and RN->CRNA? To assist and work alongside physicians? Or to compete with physicians, with the future aspiration of complete autonomy and being called doctor in the same damn hospital setting as physicians? Enough blurring the lines.

LOL. You are going off script here, because I thought attacks on NP's were due to concerns over patient safety and not about preserving market share.

Ask the physicians who employ PAs and NPs in derm whether there is a need for them, because that's where they are going to work. Standalone NPs don't stand a chance if they hang out a shingle to compete.
 
I can see where you are coming from on the FNP, and it would be hard to argue an educational edge between an FNP and a PA. I'm a believer in PA education. But coming down the pike, there are even nurse-led initiatives that will screw nurses over plenty, so sit back and wait for those to hit, and then you'll get some satisfaction if the powers that be pull them off. Maybe you would like the idea of NPs being pushed more toward a commitment to a specific track. I see it already happening in the PA world as well.

I get no satisfaction from anything that will screw over NPs.

But really, if you are going to use the >>>>>>'s, you should see it the way physicians might see it. Under the best of circumstances for you guys, doctors look at it as MD/DO >>>>>>>>>>>PA>NP. But most of them see it as
MD>>>>>>>>>>>>>>>>>>>>>PA+NP.

I used that to describe the educational pathways, not in practice. Yes, in practice it's generally MD/DO>>>>>PA+NP. But looking at the educational pathways, everyone (I believe yourself included) agrees it's MD/DO>>>>>PA>>>>>NP (with various numbers of >>>>>)

Boatswain is not worth speaking to. I'll refer you to a nearby thread where a newcomer asked about transitioning from paramedic to RN and then NP. He has none of those credentials, yet he still decided to chime in. He's not worth speaking with.

As far as the research question goes, all the reframing in the world won't make him correct. Meta-analyses are often built from RCTs, except there are dozens of them instead of one.

Some meta-analyses are filled with good RCTs, but they are still open to, and often rife with, selection bias. However, many meta-analyses are done by reviewing poorly designed studies, and with such heavy bias that the results are terrible.

I'm not saying there isn't a use for meta-analyses, because there certainly is, but you have to look at them carefully.

A prospective, double-blinded, placebo-controlled RCT is the gold standard.

I just don't see how interesting a thread is when all anyone does is pile on, and it... never... is... any... different. It's always a turf war on a small, insignificant scale. I go on allnurses and run into the folks who drink the nursing Kool-Aid, and I'm not one of them.

I don't see it as "piling on." We have one student NP who is militant that her NP education is going to bring her to the rough equivalent of an MD/DO (i.e., ready for independent practice). She's ridiculously wrong, and doesn't have the wherewithal to listen to the many people who have tried to show her that NP education is SEVERELY lacking.

That's all.
 
One thing I can't understand about NP school is how there isn't an advanced anatomy course...
 
If you think a single article suggesting that, every third Sunday on a leap year, a single RCT is better than a meta-analysis proves your point, then you need more education in research.
 
If you think a single article suggesting that, every third Sunday on a leap year, a single RCT is better than a meta-analysis proves your point, then you need more education in research.
I do hope that, when you leave your sheltered life as a student, you can give a little more respect to the people you disagree with.
 
I do hope that, when you leave your sheltered life as a student, you can give a little more respect to the people you disagree with.

I'm simply treating you the way you treat others.

I'm sorry you don't have much education in research; I suggest you start studying.
 
I will give you as much respect as you give me. I'm sorry you don't have much education in research; I suggest you start studying. Want a tissue?
My school has both an NP and a PA program. The PA program requires Gross Anatomy with full cadaver dissections and regular exams. The NP program only requires undergraduate A&P, which most take at an el cheapo community college prior to beginning the RN program. By the time they begin the NP program, they haven't had any basic anatomy or physiology education in over 4 years.

I agree it would be great if NPs took the cadaver lab.

It's funny how, in your brain, every NP took the easiest sciences available at a community college 4 years ago, and every PA program is almost 3 years long and attached to a medical school.

You sound exactly like someone else. Maybe that person is simply trolling on 2 accounts?
 
It's actually four accounts...you can add Leviathan and Mad Jack....because there can only be ONE person who disagrees with you, right?

Or, it could be that those who disagree with you are correct (well...at least 90% of the time!)
 
I'm simply treating you the way you treat others.

I'm sorry you don't have much education in research; I suggest you start studying.
I provided sources that you ignored, sources from major oncology journals produced by academics who have infinitely more research experience than you. PAs have a higher-quality education, and, in light of the limitations of current research, I choose to believe PAs are superior to NPs upon graduation.
 
I provided sources that you ignored, sources from major oncology journals produced by academics who have infinitely more research experience than you.

I read your sources. Every major association disagrees with your couple of articles.
 
So they are automatically correct and every professional organization and college in the country is incorrect?
In medicine, large RCTs are, and have been for some time, considered the gold standard in clinical decision-making, due to the targeted nature of RCTs versus the generalized nature of meta-analyses.

From Cleveland Clinic:

META-ANALYSIS VS LARGE RANDOMIZED CONTROLLED TRIALS

There is debate about how meta-analyses compare with large randomized controlled trials. In situations where a meta-analysis and a subsequent large randomized controlled trial are available, discrepancies are not uncommon.

LeLorier et al6 compared the results of 19 meta-analyses and 12 subsequent large randomized controlled trials on the same topics. In 5 (12%) of the 40 outcomes studied, the results of the trials were significantly different than those of the meta-analysis. The authors mentioned publication bias, study heterogeneity, and differences in populations as plausible explanations for the disagreements. However, they correctly commented: “this does not appear to be a large percentage, since a divergence in 5 percent of cases would be expected on the basis of chance alone.”6

A key reason for discrepancies is that meta-analyses are based on heterogeneous, often small studies. The results of a meta-analysis can be generalized to a target population similar to the target population in each of the studies. The patients in the individual studies can be substantially different with respect to diagnostic criteria, comorbidities, severity of disease, geographic region, and the time when the trial was conducted, among other factors. On the other hand, even in a large randomized controlled trial, the target population is necessarily more limited. These differences can explain many of the disagreements in the results.

A large, well-designed, randomized controlled trial is considered the gold standard in the sense that it provides the most reliable information on the specific target population from which the sample was drawn. Within that population the results of a randomized controlled trial supersede those of a meta-analysis. However, a well conducted meta-analysis can provide complementary information that is valuable to a researcher, clinician, or policy-maker.

http://www.ccjm.org/cme/cme/article...da538b96d6adee.html?tx_ttnews[sViewPointer]=1
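To put the LeLorier numbers above in perspective, here's a quick back-of-the-envelope check (my own sketch in Python, not anything from the article): how surprising is 5 divergent outcomes out of 40 if divergence really occurred 5% of the time by chance, under a simple binomial model?

```python
# Back-of-the-envelope check on the LeLorier figures quoted above (my own
# illustration, not from the article): 5 of 40 outcomes diverged, and the
# authors note ~5% divergence would be expected by chance alone.
from scipy.stats import binom

n_outcomes = 40          # outcomes compared between meta-analyses and later RCTs
observed_divergent = 5   # outcomes where results significantly disagreed
chance_rate = 0.05       # divergence rate expected by chance, per the quoted authors

# Probability of seeing 5 or more divergent outcomes out of 40 by chance alone
p_tail = binom.sf(observed_divergent - 1, n_outcomes, chance_rate)
print(f"Observed divergence rate: {observed_divergent / n_outcomes:.1%}")
print(f"P(>= {observed_divergent} divergent of {n_outcomes} | p = {chance_rate}): {p_tail:.3f}")
```

Under that crude model, the observed 12.5% divergence sits right around the edge of what chance alone would comfortably produce, which is roughly the authors' point.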
 
Excellent article. Let's put this into practice:

A single RCT in one geographic location comparing NPs vs MDs is only more valid within the population and location in which it was conducted.

A meta-analysis drawing on studies from multiple cities and states offers greater generalizability for national or statewide healthcare access decisions.
 
I think we can take a meta-analysis of this thread, and a couple of other recent threads, to get an idea of what is wrong with NP education.
 
Excellent article. Let's put this into practice:

A single RCT in one geographic location comparing NPs vs MDs is only more valid within the population and location in which it was conducted.

A meta-analysis drawing on studies from multiple cities and states offers greater generalizability for national or statewide healthcare access decisions.
Except for the fact that none of the nursing studies were RCTs, and their methodology was poor, in addition to most of them being extremely small in size. Ultimately, they amount to a pile of anecdotes that have been patched together to reach a biased conclusion- positive selection of studies, as addressed in the article.
 
Except for the fact that none of the nursing studies were RCTs, and their methodology was poor, in addition to most of them being extremely small in size. Ultimately, they amount to a pile of anecdotes that have been patched together to reach a biased conclusion- positive selection of studies, as addressed in the article.

I'm still waiting patiently for you to pick any of the studies in the meta-analysis I linked and show poor methodology. You even get to pick what you feel the worst study is; it couldn't possibly be easier for you.
 
You pick one and we'll work from there.

Here are the discussions on Quality and Methodology from the Meta-Analysis.

"While studies reporting a broad range of outcomes were included, only outcomes that were reported by at least 3 studies were selected to aggregate. The study results for these outcomes were summarized. A 2-step process was then used to evaluate the quantity and consistency of the evidence strength. First, the strength of the evidence from the aggregated outcomes was assigned a baseline grade of high, moderate, low, or very low. The initial strength of evidence was graded as high if it was supported by at least 2 RCTs or 1 RCT and 2 high-quality observational studies. The initial strength of evidence grade was moderate if supported by either 1 RCT, 1 high-quality observational, and 1 low-quality observational study or by 3 high-quality observational studies. The initial strength-of-evidence grade was low when there were fewer than 3 high-quality observational studies.

Strength of the aggregated evidence was graded a second time using an adapted GRADE Working Group Criteria.31 This process provided a systematic, transparent, and “explicit approach to making judgments about the quality of evidence and the strength of recommendation.”31 The body of evidence for each outcome was graded using the adapted GRADE criteria, which included consideration of the number, design, and quality of the studies; consistency and directness of results (extent to which results directly addressed our question); and likelihood of reporting bias. Using these criteria, the baseline grade was re-examined. The grade for each outcome was decreased by 1 level for each of the following: if the body of evidence was sparse, not of the strongest design to answer the question, had poor overall quality, results were inconsistent, or there was a possibility of reporting bias. The final strength-of-evidence grade was then assigned.

In grading the evidence, the direction of effects was evaluated as to whether it favored NPs, favored the comparison group, or made no significant difference. In many cases, showing equivalence of outcome was considered a good outcome, similar to equivalence trials where the aim is to show the therapeutic equivalence of 2 treatments.32 This was the case when comparing outcomes of care involving NPs with outcomes of care involving only physicians."
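To make that baseline grading rule concrete, here's a minimal sketch (my own reading of the passage above, written in Python; it is not code from the review) of the decision logic for a single aggregated outcome, plus the second-pass downgrading under the adapted GRADE criteria:

```python
# Minimal sketch (my own reading, not the review's code) of the baseline
# strength-of-evidence rule quoted above, applied to one aggregated outcome.
def baseline_grade(n_rct: int, n_high_obs: int, n_low_obs: int) -> str:
    """Return the initial strength-of-evidence grade for an outcome."""
    if n_rct >= 2 or (n_rct >= 1 and n_high_obs >= 2):
        return "high"
    if (n_rct >= 1 and n_high_obs >= 1 and n_low_obs >= 1) or n_high_obs >= 3:
        return "moderate"
    return "low"  # fewer than 3 high-quality observational studies and no qualifying RCT support

# Second pass: the adapted GRADE criteria drop the grade one level for each
# problem found (sparse evidence, weaker designs, poor quality, inconsistency,
# possible reporting bias), bottoming out at "very low".
LEVELS = ["very low", "low", "moderate", "high"]

def apply_grade_downgrades(baseline: str, n_downgrades: int) -> str:
    return LEVELS[max(LEVELS.index(baseline) - n_downgrades, 0)]

print(baseline_grade(n_rct=1, n_high_obs=2, n_low_obs=0))  # "high"
print(apply_grade_downgrades("high", 2))                    # "low"
```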

Methods
"The systematic approach used for this review included identifying and selecting relevant studies, reviewing and rating the individual studies, and then synthesizing findings on patient outcomes and grading the aggregated results. The project team comprised nurses, a physician, health services researchers, and experts on systematic reviews."

Study Selection:

"Studies that met the following criteria were included: randomized controlled trial (RCT) or observational study of at least 2 groups of providers (eg, NP working alone or in a team compared to other individual providers working alone or in teams without an NP), carried out in the US between 1990 and 2009, with patient outcomes for quality, safety, or effectiveness reported.28 and 29 Studies conducted outside the US were excluded because NP education, role implementation, and scope of practice in other countries are different and access, insurance, costs of care, and other characteristics of health care systems in other countries vary significantly from the US.

Studies in which NPs worked autonomously or in collaboration with MDs, as compared to MDs working autonomously or in collaboration with other MDs, were included with the knowledge that the critical difference between these 2 provider groups was the addition of the NP. Because provider practice and health care interventions change over time, studies prior to 1990 were excluded. Studies reporting only processes of care (eg, self report of completion of selected patient assessments or care documentation) were not included as they measure care delivery and practice activities rather than actual health outcomes. Studies were also excluded if they were not published in English or failed to report quantitative data or outcomes that could reasonably be expected to be affected by NPs.

The review proceeded from titles to abstracts and then to the full articles following a sequential multi-step process (Figure 1). The Web-based database software TrialStat® was used to store and organize all citations, develop standardized abstraction forms for the review, and allow reviewers to access the studies. Two independent reviewers examined and determined, according to the criteria listed above, whether to include or exclude each title, abstract, and full article. If articles met inclusion criteria after examination by both reviewers, they were included in the final data abstraction. Differences of opinion regarding article eligibility were resolved through consensus adjudication."

Quality Assessment:

"After applying the criteria described above, a sequential review process was used to abstract data from remaining articles. Data abstraction forms were completed by the primary reviewer and checked for completeness and accuracy by the second reviewer. Personnel with both clinical and methodological expertise were included in reviewer pairs. The reviews were not blinded. Consensus adjudication was used if differences of opinion between the reviewers could not be otherwise resolved.

Quality assessment is used in a systematic review to examine potential threats from individual studies to the validity of the findings. The Jadad scale (designed for RCTs that use double-blinding, etc), which quantifies the presence or absence of certain design characteristics, is commonly used to assess quality.30 A modified quality scale informed by the Jadad scale was developed to better assess the quality of studies (both RCTs and observational studies) represented in this review (eg, similarity of groups and settings, group sample sizes, potential sources of bias).28 and 29

The quality of each study was independently rated by 2 reviewers using the modified Jadad and scale items scored differently by the 2 reviewers were discussed. The modified Jadad scale yielded scores ranging from 0-8. A study quality score of ≥ 5 was considered to be high quality, and a score of ≤ 4 was considered to be low quality. These categories were determined independent of score distribution and based on the judgment that a study scoring ≤ 4 was likely to represent high bias and low attribution. The same criteria and cut points were used for both RCT and observational studies."
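And the modified Jadad cut point described above, expressed the same way (again my own illustration, not the review's code):

```python
# The review's stated cut point: modified Jadad scores run 0-8, and a score
# of 5 or more counts as high quality, 4 or less as low quality.
def quality_category(modified_jadad_score: int) -> str:
    if not 0 <= modified_jadad_score <= 8:
        raise ValueError("modified Jadad scores range from 0 to 8")
    return "high quality" if modified_jadad_score >= 5 else "low quality"

print(quality_category(6))  # high quality
print(quality_category(4))  # low quality
```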


Conclusion

"Multiple policy implications can be drawn from these results.70 The evidence identified in this review supports the premise that outcomes of NP-provided care are equivalent to those of physicians. Thus the question of the comparability of NP/MD quality, safety, and effectiveness of care is answered, to a very considerable degree, by this review."


http://www.sciencedirect.com.ezproxy.lib.uwm.edu/science/article/pii/S1555415513004108?np=y



Please review this RCT taken from the Meta-Analysis:

http://www.ncbi.nlm.nih.gov/pubmed/10632281

Population:

Of 3397 adults originally screened, 1316 patients (mean age, 45.9 years; 76.8% female; 90.3% Hispanic) who had no regular source of care and kept their initial primary care appointment were enrolled and randomized with either a nurse practitioner (n = 806) or physician (n = 510).

Outcomes:

No significant differences were found in patients' health status (nurse practitioners vs physicians) at 6 months (P = .92). Physiologic test results for patients with diabetes (P = .82) or asthma (P = .77) were not different. For patients with hypertension, the diastolic value was statistically significantly lower for nurse practitioner patients (82 vs 85 mm Hg; P = .04). No significant differences were found in health services utilization after either 6 months or 1 year. There were no differences in satisfaction ratings following the initial appointment (P = .88 for overall satisfaction). Satisfaction ratings at 6 months differed for 1 of 4 dimensions measured (provider attributes), with physicians rated higher (4.2 vs 4.1 on a scale where 5 = excellent; P = .05).

Conclusions:

In an ambulatory care situation in which patients were randomly assigned to either nurse practitioners or physicians, and where nurse practitioners had the same authority, responsibilities, productivity and administrative requirements, and patient population as primary care physicians, patients' outcomes were comparable.
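For anyone wondering what a two-arm comparison of this size can actually detect, here's a rough power sketch (my own, with textbook assumptions that are not taken from the study):

```python
# Rough sanity check (my own, not from the study): what standardized difference
# could a two-arm comparison with n = 806 vs n = 510 detect?
# Assumes a simple two-sample normal approximation, alpha = 0.05 (two-sided),
# and 80% power -- textbook assumptions, not the trial's actual analysis plan.
from math import sqrt
from scipy.stats import norm

n_np, n_md = 806, 510                 # patients randomized to NP vs physician care
alpha, power = 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)     # ~1.96
z_power = norm.ppf(power)             # ~0.84

# Minimum detectable difference in standard-deviation units (Cohen's d)
min_detectable_d = (z_alpha + z_power) * sqrt(1 / n_np + 1 / n_md)
print(f"Minimum detectable standardized difference: {min_detectable_d:.2f} SD")

# Example: if diastolic BP had an SD of ~11 mmHg (an assumed value, not one
# reported by the study), the corresponding detectable difference would be:
print(f"~{min_detectable_d * 11:.1f} mmHg for an assumed SD of 11 mmHg")
```

Under those assumptions the minimum detectable difference works out to roughly 0.16 standard deviations.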
 
The study you cite covers two-thirds of a patient panel, hardly enough to craft national policy around. The typical FM MD has 2,000 patients on their panel. That's like saying you can judge every McDonald's in the country by evaluating the performance of one low-volume McDonald's. Given the small number of patients, it is impossible to know how many NPs were involved in the study, but given that only 806 patients were assigned to the NP side, it is likely a small number, possibly as low as one. The same is true on the physician side of things: 510 patients is a quarter of a panel, practically nothing. The number and experience of providers is not addressed in the study.

The study is also of short duration: one year typically amounts to two visits or fewer. Given that these were patients who already had no PCP, they likely weren't heavy utilizers of health care services. The number of visits per patient is not addressed; if each patient rates their experience highly, but they only had one or two visits, how valid is that assessment? One year is also hardly enough time for adverse outcomes (or positive ones, for that matter) to begin showing in a patient population with a mean age in the mid-40s.

Finally, these patients were over 90% Hispanic, a population that culturally tends to defer to their providers and trust their judgment with far fewer questions or resistance than white non-Hispanic or black populations. The study also does not address the language proficiency of the NPs or MD/DOs involved in care, which would be a substantial confounder in a >90% Hispanic population. Basically, it's a study that says nothing but "NPs don't kill their patients within a year, nor do doctors, and both make favorable first impressions in the first couple of visits."

Bravo. If our country is trusting the fate of the public to stunning displays of research such as that, surely we are going to be a health care utopia by the time I retire :rolleyes:
 
What a ridiculous response. Either RCTs are the gold standard or they aren't; how in the world do you keep changing your mind? Oh yeah... Texas Sharpshooter. You read the abstract and nothing more; you put zero work into an actual exploration of the article. This is one RCT among hundreds with the same consistent results.

It's impossible to find a study without limitations; I dare you to try. You definitely need some more classes on evidence-based research if you don't believe that science which disagrees with your ideology is science at all.

I will trust the MDs, PhDs, and biostatisticians who verified this RCT as a high level of evidence over a 10-minute review of the abstract by a medical student with very little actual research education and experience, every day of the week and twice on Sunday.
 
What a ridiculous response. Either RCTs are the gold standard or they aren't; how in the world do you keep changing your mind? Oh yeah... Texas Sharpshooter. You read the abstract and nothing more; you put zero work into an actual exploration of the article. This is one RCT among hundreds with the same consistent results.

It's impossible to find a study without limitations; I dare you to try. You definitely need some more classes on evidence-based research if you don't believe that science which disagrees with your ideology is science at all.

I will trust the MDs, PhDs, and biostatisticians who verified this RCT as a high level of evidence over your 10-minute review of the abstract and very little actual research education and experience, every day of the week and twice on Sunday.
Being able to determine whether a study has limitations is kind of a big deal. The vast majority of scientific studies are ultimately proven false, as explored in this paper: http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

This is the whole reason that physicians are trained in biostatistics, epidemiology, and study methodology. Clearly, analyzing study design is another weak point in NP education, as the study you have linked to has several obvious and notable flaws. Even upon further inspection, all of my complaints but one hold up, and even that one still mostly holds up: of those patients, only around 600 had more than two visits in the year of the study period. Determining whether your doctor is doing a great job isn't going to be something you can figure out in the visit or two the majority of those patients got. Every other one of my complaints about the study design is not addressed. My argument is that NPs are fine enough of the time that we need a large, multicenter RCT of long duration to start finding the areas where they fall short; less than a single physician's panel, with a largely homogeneous population (>70% female, >90% Hispanic), unknown provider numbers, unknown language barriers, and a short study duration cannot possibly set this issue to rest.

I particularly loved your appeal to authority, which would be met with great derision in any public academic setting. You should know better than to use such an argument in a discussion of study design, as studies should stand on their own merit, not that of the authority publishing them. That you're opening with poor studies and logical fallacies is quite poor in both content and form.
 
<Sarcasm font on>RIDICULOUS response Mad Jack...utterly RIDICULOUS!!!! SHOCKING that someone would reply like that about this RCT.

YOU NEED MORE CLASSES to mitigate your VERY LITTLE ACTUAL RESEARCH EDUCATION AND EXPERIENCE (and TWICE ON SUNDAY) before you could possibly understand the brilliance of your opponent in this argument. <Sarcasm font off>

She has obviously been inculcated into the militant belief that NPs are far superior to any other medical provider. Like someone who believes in kinesiology to cure cancer...there is no overcoming such rigidity.

I just post here to try to give others another perspective.
 
She?

Being able to determine whether a study has limitations is kind of a big deal. The vast majority of scientific studies are ultimately proven false, as explored in this paper: http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

This is the whole reason that physicians are trained in biostatistics, epidemiology, and study methodology. Clearly, analyzing study design is another weak point in NP education, as the study you have linked to has several obvious and notable flaws. Even upon further inspection, all of my complaints but one hold up, and even that one still mostly holds up: of those patients, only around 600 had more than two visits in the year of the study period. Determining whether your doctor is doing a great job isn't going to be something you can figure out in the visit or two the majority of those patients got. Every other one of my complaints about the study design is not addressed. My argument is that NPs are fine enough of the time that we need a large, multicenter RCT of long duration to start finding the areas where they fall short; less than a single physician's panel, with a largely homogeneous population (>70% female, >90% Hispanic), unknown provider numbers, unknown language barriers, and a short study duration cannot possibly set this issue to rest.

I particularly loved your appeal to authority, which would be met with great derision in any public academic setting. You should know better than to use such an argument in a discussion of study design, as studies should stand on their own merit, not that of the authority publishing them. That you're opening with poor studies and logical fallacies is quite poor in both content and form.

Thank you for proving my point that meta-analyses are a higher level of evidence, as a large part of answering a research question is finding out how many people have already reached the same conclusion.

You keep constantly changing your mind on what your own points even are. In one message you say RCTs are the gold standard; I show you an RCT and you give a comparison to a McDonald's, telling me a single RCT means nothing.

At this point you are just talking in circles. Pointing out the limitations of an RCT does not invalidate the results. The sample was large enough to use as evidence. Even if the study were done in a perfect Marshfield Epidemiological Study Area, you would say something like "well sure, but we don't know anything about minorities!"

Your dismissal of the overwhelming research showing that supervised NPs deliver safe care is completely nonsensical.
 
Large RCTs are the gold standard for developing CPGs. Specifically large, multicenter RCTs. That was clearly spelled out in the articles I posted. I don't know how you missed it.

The sample was large enough to be some evidence, but hardly anything profound. Better than nothing at all is not even close to definitive.

You could send a person to an absolute quack twice in a year and they probably wouldn't die, nor would they be unhappy if the service provided was "good enough." You'd basically have to miss something huge or actively try to kill someone to cause an adverse outcome in such a short period of time, unless they were highly complicated elderly patients (which this study didn't really cover). Let me just put it this way- I could do the same study with naturopaths and probably get the same result, because the population isn't big enough, the time period isn't long enough, etc. It's a good design to use for an expanded study that covers more people and a longer span of time, but at the moment, it's extremely lacking. It's like judging whether a person is meant to be the love of your life based on two dates in your teens lol, that's the level of evidence it provides.
 
<Sarcasm font on>RIDICULOUS response Mad Jack...utterly RIDICULOUS!!!! SHOCKING that someone would reply like that about this RCT.

YOU NEED MORE CLASSES to mitigate your VERY LITTLE ACTUAL RESEARCH EDUCATION AND EXPERIENCE (and TWICE ON SUNDAY) before you could possibly understand the brilliance of your opponent in this argument. <Sarcasm font off>

She has obviously been inculcated into the militant belief that NPs are far superior to any other medical provider. Like someone who believes in kinesiology to cure cancer...there is no overcoming such rigidity.

I just post here to try to give others another perspective.
Hey, I've got no problem with PAs. You guys are legit. Was super close to going PA myself, took all the prereqs to cover 50 different programs, but ultimately decided on med school instead. It's a great career, and you guys make great clinicians.

The "NP=MD" mentality is ridiculous, and doesn't hold up to scrutiny. I have plenty of college educated friends that, if they ask whether they should see a NP, I convince of this not by words, but just by saying, "look, here's the studies they use to claim equivalence. What are the problems with their methodology?" The studies speak for themselves to anyone that is looking at it objectively, but OP has a heavy confirmation bias.
 
Large RCTs are the gold standard for developing CPGs. Specifically large, multicenter RCTs. That was clearly spelled out in the articles I posted. I don't know how you missed it.

The sample was large enough to be some evidence, but hardly anything profound. Better than nothing at all is not even close to definitive.

You could send a person to an absolute quack twice in a year and they probably wouldn't die, nor would they be unhappy if the service provided was "good enough." You'd basically have to miss something huge or actively try to kill someone to cause an adverse outcome in such a short period of time, unless they were highly complicated elderly patients (which this study didn't really cover). Let me just put it this way- I could do the same study with naturopaths and probably get the same result, because the population isn't big enough, the time period isn't long enough, etc. It's a good design to use for an expanded study that covers more people and a longer span of time, but at the moment, it's extremely lacking. It's like judging whether a person is meant to be the love of your life based on two dates in your teens lol, that's the level of evidence it provides.

Some good points. Your "2 date" analogy was off base for about a thousand reasons. Thank you for confirming that this study does provide some evidence, and keep in mind it is considered strong evidence by the dozens of experts in research, medicine, and statistics who rigorously reviewed all of the studies in the meta-analysis, including this one. I invite you to review the methodology and review systems in place to ensure high-quality evidence was used in the meta-analysis. You may want to keep in mind these reviewers' expertise in this area. Please enjoy reading the couple dozen other RCTs found in the meta-analysis I posted showing that supervised (and some unsupervised) NP outcomes, from neonates to ICU ventilator days to cardiac complications, are comparable to MDs'. There are many hundreds more where these came from.

Have a good rest of the day and enjoy the rest of your summer.
 
I don't have a problem with NP's either... except for the militants who advocate NP=MD, or think they should be able to practice medicine (err... ahem... I mean "advanced nursing") right out of school without any kind of supervision.
 
I don't have a problem with NP's either... except for the militants who advocate NP=MD, or think they should be able to practice medicine (err... ahem... I mean "advanced nursing") right out of school without any kind of supervision.

It would be wonderful if that statement was true about your intentions on this forum.

Couldn't agree more with how absurd it is to practice independently right out of school. There should be a 2 step NP exam, one for supervised and one for independent practice, with the independent practice step only after 3-5 years of physician supervised practice.
 
It would be wonderful if that statement was true about your intentions on this forum.

Couldn't agree more with how absurd it is to practice independently right out of school. There should be a 2 step NP exam, one for supervised and one for independent practice, with the independent practice step only after 3-5 years of physician supervised practice.
I think you guys should have to take the USMLE if you want equivalency.
 
Some good points. Your "2 date" analogy was off base for about a thousand reasons. Thank you for confirming that this study does provide some evidence, and keep in mind it is considered strong evidence by the dozens of experts in research, medicine, and statistics who rigorously reviewed all of the studies in the meta-analysis, including this one. I invite you to review the methodology and review systems in place to ensure high-quality evidence was used in the meta-analysis. You may want to keep in mind these reviewers' expertise in this area. Please enjoy reading the couple dozen other RCTs found in the meta-analysis I posted showing that supervised (and some unsupervised) NP outcomes, from neonates to ICU ventilator days to cardiac complications, are comparable to MDs'. There are many hundreds more where these came from.

Have a good rest of the day and enjoy the rest of your summer.
Evidence does not equal strong evidence.
 
Evidence does not equal strong evidence.

Feel free to review the methodology behind the reviewers' finding that the study is strong evidence, based on a two-tier review system using a quantitative, validated tool. You may enjoy the read; I certainly did.

I'll help:

"While studies reporting a broad range of outcomes were included, only outcomes that were reported by at least 3 studies were selected to aggregate. The study results for these outcomes were summarized. A 2-step process was then used to evaluate the quantity and consistency of the evidence strength. First, the strength of the evidence from the aggregated outcomes was assigned a baseline grade of high, moderate, low, or very low. The initial strength of evidence was graded as high if it was supported by at least 2 RCTs or 1 RCT and 2 high-quality observational studies. The initial strength of evidence grade was moderate if supported by either 1 RCT, 1 high-quality observational, and 1 low-quality observational study or by 3 high-quality observational studies. The initial strength-of-evidence grade was low when there were fewer than 3 high-quality observational studies.

Strength of the aggregated evidence was graded a second time using an adapted GRADE Working Group Criteria.31 This process provided a systematic, transparent, and “explicit approach to making judgments about the quality of evidence and the strength of recommendation.”31 The body of evidence for each outcome was graded using the adapted GRADE criteria, which included consideration of the number, design, and quality of the studies; consistency and directness of results (extent to which results directly addressed our question); and likelihood of reporting bias. Using these criteria, the baseline grade was re-examined. The grade for each outcome was decreased by 1 level for each of the following: if the body of evidence was sparse, not of the strongest design to answer the question, had poor overall quality, results were inconsistent, or there was a possibility of reporting bias. The final strength-of-evidence grade was then assigned."

There's also much more information for you in the actual meta-analysis.
 
They deemed it high-level evidence if two RCTs supported it, regardless of the size, scale, scope, or characteristics of the RCTs involved.

Not all RCTs are created equal. That last one was quite poor. Many of the other ones I've read in the past were as well.
 
They deemed it high-level evidence if two RCTs supported it, regardless of the size, scale, scope, or characteristics of the RCTs involved.

Not all RCTs are created equal. That last one was quite poor. Many of the other ones I've read in the past were as well.

I'm sorry, but you are incorrect. Please read the full article and review the evidence table so you can understand that each article is reviewed on its own merits against two-step criteria before being included in the meta-analysis for an aggregate review.

I've proven you objectively wrong. When you start your residency and your attending cardiologist tells you the patient is in Afib and to start Coumadin, are you going to reply that you think he's wrong and that his expertise is not as important as your individual assessment?

You may just continue arguing all night that you are an expert in research and that the PhDs, biostatisticians, and MDs who created this meta-analysis are wrong.

I know I accept the opinions of experts and admit to being wrong when I am. Looks like you may not.
 
Wasn't that tried already? A watered down version of step 3?



I would love to read the results. I heard the NP's did not do well. However, perhaps the worst NP school in the country was chosen as the cohort. Who knows. I would be very open to that discussion.

I would love to see a NP residency and a much more difficult NP licensing exam, with 2 steps before independent practice can be achieved.
 
It would be wonderful if that statement was true about your intentions on this forum.

So now you call me a liar. I doubt you would do so in person cupcake. You may want to try some civility on these boards...we're not at AllNurses.
 
So now you call me a liar. I doubt you would do so in person cupcake. You may want to try some civility on these boards...we're not at AllNurses.

I would love to call you a liar in person. On the phone. On the internet. By Morse code or smoke signals. Whatever mode you would prefer.

Wish the moderator would just lock this thread.
 
I would love to read the results. I heard the NP's did not do well. However, perhaps the worst NP school in the country was chosen as the cohort. Who knows. I would be very open to that discussion.

I would love to see a NP residency and a much more difficult NP licensing exam, with 2 steps before independent practice can be achieved.
It was Columbia. 50% failed a simplified version of Step 3.
 
I'm sorry, but you are incorrect. Please read the full article and review the evidence table so you can understand that each article is reviewed on its own merits against two-step criteria before being included in the meta-analysis for an aggregate review.

I've proven you objectively wrong. When you start your residency and your attending cardiologist tells you the patient is in Afib and to start Coumadin, are you going to reply that you think he's wrong and that his expertise is not as important as your individual assessment?

You may just continue arguing all night that you are an expert in research and that the PhDs, biostatisticians, and MDs who created this meta-analysis are wrong.

I know I accept the opinions of experts and admit to being wrong when I am. Looks like you may not.
You haven't proven me objectively wrong; the study was still garbage. How you think less than a full panel of patients can be generalized into good evidence is beyond me. I've taken courses in epidemiology and biostatistics and have done research in the past; I know how to craft good studies, and how to craft studies that will give me a desired outcome. That study was crafted poorly, if not deliberately crafted to achieve an easily attainable outcome. Your ridiculous idea that the people behind the study are impartial and infallible is how we end up with providers who blindly prescribe overpriced new medications with negligible benefit because some drug rep points to a study, paid for by the manufacturer, that shows marginal benefit with poor methodology. Regardless, I've grown bored of you, and no longer want your posts to distract me from my clinical rotations. If you are so incapable of interpreting studies, no amount of all of us pointing out methodological flaws will convince you otherwise. NPs are inferior providers in my eyes, and always will be, unless we do a study of sufficient size and density to come to a conclusion that isn't a cobbled-together mess of poor studies that looks like something a third-year medical student would cook up.

To spare us both any further wasted time and annoyance, welcome to my ignore list. This will serve us both in the best manner possible.
 
To spare us both any further wasted time and annoyance, welcome to my ignore list. This will serve us both in the best manner possible.

I am with you. This guy is unteachable.
 
Your opinion of NP's was never going to change regardless of the evidence provided. Enjoy your rotation and the rest of your summer.
 