I’m a midlevel


psychma
Full Member · Joined Oct 3, 2022
That’s it. I am. And I can’t be measured by that simple data point. When I come here, I read people talking in absolutes about masters level providers without considering variability in skills and training.

What you don’t know about me is that I have an MS in molecular biology first. After graduating with my MA in counseling specializing in autism and assessment (also my thesis topic), I trained with a neuropsychologist for 5 years in assessment. I expanded my knowledge by taking additional courses at the graduate level in autism studies and psychometrics.

I’m currently in a doctoral program in public health policy where my focus is mental health, autism, and assessment. I’m not “just” a therapist. I might be an 85 on a standard scale in one area and you might be a 115, but in some areas I might be the 115 and you might be the 85. A single point on a distribution measures variability in one domain. We are all multidimensional.

Online forums tend to amplify extreme experiences, which can unintentionally skew perception. R/therapists and even sdn are not your best sources of information about therapists and therapy training.

I work in private practice doing therapy and assessments. I am a doctoral student. I precept students through a university that hired me after a strict vetting process. I have been developing a new autism assessment tool. I’m great in my own way. So are you.
 
Psychologists are midlevels. Medicare requires our patients to have a physician referral, which is the entire basis for CMS giving us the same status as chiropractors. The DEA licenses us as midlevels.

We just adopted the physician argument, without understanding it.
 
I wish that more counselors and social workers would adopt this attitude and be more honest about what they do not know. Instead, we get confident assertions about the supremacy of sham treatments while evidence-based practices are derided as cold, unfeeling, and ineffective. I know this is technically against the standards of the NASW and the ACA, but those standards are rarely, if ever, enforced.

The problem, at least in counseling programs, is that they are fantastic money makers for colleges and universities, so any modicum of quality control is highly disincentivized. This was as true at the small liberal arts college where I did my master's degree as it was at the R1 where I did my doctorate. And if a prestigious R1 or liberal arts college won't take the applicant, the Fielding/University of Phoenix/Capella Universities of the world will. Couple that with fear of litigation if a student is dismissed on grounds of incompetence, and it's easy to see how a low bar to entry has resulted in a high degree of variability. Not to mention that the post-degree period from graduation to licensure is often informal, on-the-job training that itself varies a good deal state by state. Contrast this with psychiatry and professional psychology, which have structured pathways to independent practice.

There are great counselors and social workers, no doubt, but there are limits to their training. ADHD is an example of this. There are many, many reasons why someone's attention may be impaired--anything from most classifications of psychiatric disorder to just plain individual differences strained by high environmental demands. Yet that does not stop the cottage industry of mostly master's level (and FPPS psychologists) online influencers from touting ADHD as the sole cause of executive dysfunction. Most of the referrals we get for ADHD evaluations come from master's level clinicians who have leveraged their therapeutic alliance to push for an ADHD evaluation. I know you're on the psychiatry board, where you're seeing the same complaints. I chalk this up, partly, to a failure to understand the very basics of cognitive development and how attention actually works, which is not something I got until I went to graduate school (and fleshed out further in postdoc) to become a psychologist. But, again, instead of owning what they do not know, many mid-level therapists continue to double down. I think that's what I, at least, am reacting to.

I could go on. IFS, EMDR, somatic experiencing, and attachment theory-informed therapy are other examples. The argument for master's level training is that therapeutic procedures are simple enough to teach to someone without a doctorate, and you know, access is important. But this argument assumes these technicians will stay in their lane even when given a wide latitude. However, there is ample evidence to suggest otherwise. That being said, I know there are people out there who are willing to ask questions and avoid speaking outside the limits of their competence and training. I just wish there were more of them.
 
Well stated and 100% agree. Could not have said it any better. This, at least to me, is what it means when psychologists talk about mid-levels (which would be defined as masters level education and training) and concerns related to the way some mid-levels present themselves and the "you don't know what you don't know" concept.
 
Honestly, who cares? Medicare pays out the worst PsyD 25% more than the best midlevel. An NP trained at an online school can prescribe independently in many states where I cannot prescribe meds at all. The public calls me "doctor" and that helps get some folks in the door compared to any midlevel. Some folks on this board worked way harder than I did on their PhD and make less than me.

This is a job, not an identity. Beyond getting paid as well as possible for what I do, none of it really matters.

That said, the bottom of the barrel midlevel program probably has at least slightly dumber students than the worst doctoral programs. The most prestigious clinical psych program in the country has folks smart enough that they should know they can do something more lucrative with their time.
 
I personally have not seen this "universal" denigration of master's-level clinicians. People certainly use the term "master's-level" a little loosely here and there when making complaints on this forum, but my experience has been that it's almost always clearly in the context of arguing that certain issues are more prevalent in this group relative to doctoral-level psychologists, not that it's true of all folks in that group or never true of folks in the latter group.
 
The argument for master's level training is that therapeutic procedures are simple enough to teach to someone without a doctorate, and you know, access is important. But this argument assumes these technicians will stay in their lane even when given a wide latitude. However, there is ample evidence to suggest otherwise.
This is the major issue.

I'm a masters-level trained clinician and I know what I am. I defer complex cases to my psychologist and psychiatrist colleagues. Anyone comes in with a whiff of neuropsych? They're getting a referral. I'm not trained to do that, and I don't want to.

I'm active in a lot of masters-level clinician networking groups, both on and off social media, and I'd say most of the problems tend to fall into one of the following buckets:
1. Dunning-Kruger effect. Self explanatory. "I have a masters degree so I'm pretty sure I can figure this out on my own without training, how hard can it be?"
2. Chip on their shoulder/us against them mentality. "I earned a masters degree so I will damn well charge $180/hr as an associate because I worked for it. I'm as good as or better than a psychologist/psychiatrist/whatever because I ~listen to the client~ and ~validate their emotions~"
3. General distrust of EBP and mainstream medicine. I cannot deny that there are definitely some things that mainstream medicine has gotten wrong in the past, but folks will swing in the opposite direction. This is the appeal of things like TBKTS and bottom-up processing and primal scream therapy and all that.
4. Simplification of theory without understanding detail, nuance, or both. Cue the "CBT doesn't effect lasting change," "EMDR is the only thing that works for PTSD," "psychologists care about EBPs too much," "PsyDs are for clinical, PhDs are for research," "we should always validate the client to build rapport no matter what" (no, we should not be validating OCD patients' obsessions, what are you talking about?)

The problems of therapy midlevels are not unlike those of the actual midlevels (NPs and PAs). We are a huge asset for access to care, a lower barrier to entry for people making career changes, less time in school = more quickly into the field = faster ROI, etc. Masters level clinicians do a huge lift in areas like community mental health, hospital case management (shout out to the LCSWs), and facility-based treatment centers that literally just need bodies to monitor patients and run groups, like IOPs and the like. The issue arises when we are unable to stay in our own lanes, where we actually shine and can make the best impact, and try to encroach on duties and knowledge better served by our colleagues.
 
Another thought that I had recently: I know the psychology world is not without its own problems, but the quality of training in masters programs as of late has gone way down. It was already fairly bad when I was going through my masters (2015-2019 - I took a year break in the middle of that), but now that I'm actively supervising masters level practicum/internship students and associates, I am shocked at how little they are learning. Not a single one of them can identify a theoretical orientation (one associate didn't know what that was). When I asked about treatment planning, my internship student this semester told me she had only had one worksheet in her masters program about treatment planning. When I mentioned discharge planning to one of my associates, for a high-acuity patient discharging from her inpatient facility, she shrugged and said, "Can't he just go back to his outpatient therapist?" No, actually, he can't. That's what landed him here in the first place.

Maybe I'm just finally seeing the seedy underbelly of masters level training, but it just feels like such a stark difference to the training I received just a little less than a decade earlier. I don't know if COVID had anything to do with this (it seems like telehealth practica/internships were considered taboo before that, and now a lot of students are doing practica where some of their hours are via telehealth, and certainly a lot of them are doing online programs where their didactics and techniques classes are all via Zoom too). I supervise masters students in an online program and both their practica and internship can be done in their school's low-cost clinic which is entirely online. And a bunch of them don't record anymore! :O

I hate being the "arghhh back in my day" person because by all accounts I am a young sapling myself, but the caliber of training is steadily decreasing and I think that is a very real concern when folks start on their "ughhh midlevels" spiel.
 
Yeah, I think my concern is more research literacy. But that's not exclusive to Masters-level therapists, and it's also not true of all of them. I have some Masters-level coworkers who believe very strongly in evidence-based practice and, if they aren't acquainted with the research, know enough to ask psychologists (often me, lol).
 
I think you bring up a great point about asking. We are a valuable resource and the masters level folks that I know value that and use it. If we had a system that actually facilitated consultation and collaboration, it would be less of a problem.
 
Yeah, I think my concern is more research literacy. But that's not exclusive to Masters-level therapists, and it's also not true of all of them. I have some Masters-level coworkers who believe very strongly in evidence-based practice and, if they aren't acquainted with the research, know enough to ask psychologists (often me, lol).
This goes back to an analogy I've used in the past: Mechanic v. Mechanic Tech (Doctoral v. Masters). The Mechanic needs to diagnose and know what to work on. The Mech Tech can do the tech part, but the Why & When makes a difference. The research training at the doctoral level largely addresses the Why & When.

I work with some great masters-trained folks, whether they are a counselor or an NP or PA. This same dynamic is present in each of those dyads too. The "you don't know what you don't know" is such a huge part of why mid-level folks are targeted and/or drawn to these junk science approaches (e.g. brain spotting et al.).
 
The research literacy difference part is hard to communicate without offending. I'll offer myself up as a sacrificial teaching example.

I distinctly recall coming into my graduate level statistics courses thinking I knew it all already. I scoffed as I read the syllabus, and even commented about my advanced math classes (upwards through Calc III and differential equations) and research courses I had taken in undergrad. Our professor simply showed us a direct example of p-hacking. It literally changed how I consume research. I now critically assess the specifics of what and how statistical analyses are run, whereas before I simply glossed over those parts of the papers I read.

It wasn't that I was uneducated or research illiterate. I simply had never been exposed to something like that until a doctoral program.
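
To make the p-hacking point concrete, here's a minimal Python sketch of the general idea (my own toy example, not the professor's actual demo; the 20-outcomes setup and the |t| > 2.0 cutoff are illustrative assumptions):

```python
import random
import statistics

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(0)
trials, hits = 1000, 0
for _ in range(trials):
    # Pure noise: every "group" is drawn from the same distribution,
    # so any "significant" difference is a false positive.
    control = [random.gauss(0, 1) for _ in range(30)]
    # The hack: test 20 different outcome measures and report the
    # first one that clears the significance bar.
    for _outcome in range(20):
        treatment = [random.gauss(0, 1) for _ in range(30)]
        if abs(t_stat(treatment, control)) > 2.0:  # roughly p < .05 at this sample size
            hits += 1
            break

# With ~5% per test, 20 shots per "study" pushes the family-wise
# false-positive rate toward 1 - 0.95**20, i.e., well over half.
print(f"Studies reporting a 'significant' effect: {hits / trials:.0%}")
```

Checking the methods section for how many outcomes were measured versus how many were reported is exactly the habit that demo built in me.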
 
There are stats, then there are STATS...and then there are the stats you use while *still* hiring an expert stats consultant to double-check everything. I fall into that second category, and the consumption-of-research part is what has been impacted most since training. One thing I didn't fully appreciate at the time: on internship we had a weekly journal club, and our DCT & mentors really made a point to prioritize these skills and the importance of incorporating research into our daily work. Critically evaluating studies, especially the methods and results, really helped crystallize how stats can be misleading, as the devil is in the details. We had journal clubs in grad school, but they varied in quality.

In my Pharma training we'd review white papers and other research, and I learned that certain instruments are specifically preferred by Pharma companies because they can show a larger effect size, since a smaller raw change is needed to achieve it; the HAM-D is the measure I always go back to to showcase this point. The HAM-D better captures behavioral and somatic complaints v. something like the BDI-II, so Pharma studies like to use the HAM-D, while psychotherapy research often will use the BDI-II or similar.

I believe this was one of the articles we reviewed, for those curious. I haven't dug into this lately, and I don't expect every clinician to dig into topics like this, but I bring it up as an example of how even a well meaning clinician can inadvertently push one thing over another without a full picture of the When and Why. Gotta run to my next eval, but I'll see if I can find the articles we reviewed back in the Olden Times. :laugh:
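
To show what I mean about instrument choice driving effect size, here's a small Python simulation (my own illustrative numbers, not data from any actual HAM-D/BDI-II study): the same raw improvement looks much bigger, in standardized terms, on the scale with less extraneous score variance.

```python
import random
import statistics

def cohens_d(treated, control):
    """Standardized mean difference using a pooled SD."""
    pooled_sd = ((statistics.variance(treated) + statistics.variance(control)) / 2) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

random.seed(1)
true_benefit = 2.0  # identical clinical improvement, in raw scale points
n = 200

# Scale A: sensitive to the symptoms that actually changed (little extra noise).
ctrl_a = [random.gauss(20, 3) for _ in range(n)]
trt_a = [random.gauss(20 - true_benefit, 3) for _ in range(n)]

# Scale B: the same true change buried in broader, noisier item content.
ctrl_b = [random.gauss(20, 8) for _ in range(n)]
trt_b = [random.gauss(20 - true_benefit, 8) for _ in range(n)]

# Same treatment, same benefit -- very different standardized effect sizes
# (the true values here are 2/3 vs. 2/8 = 0.25).
print(f"d on the sensitive scale: {abs(cohens_d(trt_a, ctrl_a)):.2f}")
print(f"d on the noisier scale:   {abs(cohens_d(trt_b, ctrl_b)):.2f}")
```

Nothing about the treatment differs between the two arms of this toy example; only the measuring stick changes, which is the whole point.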
 
This goes back to an analogy I've used in the past: Mechanic v. Mechanic Tech (Doctoral v. Masters). The Mechanic needs to diagnose and know what to work on. The Mech Tech can do the tech part, but the Why & When makes a difference. The research training at the doctoral level largely addresses the Why & When.

I work with some great masters-trained folks, whether they are a counselor or an NP or PA. This same dynamic is present in each of those dyads too. The "you don't know what you don't know" is such a huge part of why mid-level folks are targeted and/or drawn to these junk science approaches (e.g. brain spotting et al.).

The surgeons will tell you they can train a monkey to actually do surgery. You need to be a surgeon to know when to operate and when not to operate.
 
The research literacy difference part is hard to communicate without offending. I'll offer myself up as a sacrificial teaching example.

I distinctly recall coming into my graduate level statistics courses thinking I knew it all already. I scoffed as I read the syllabus, and even commented about my advanced math classes (upwards through Calc III and differential equations) and research courses I had taken in undergrad. Our professor simply showed us a direct example of p-hacking. It literally changed how I consume research. I now critically assess the specifics of what and how statistical analyses are run, whereas before I simply glossed over those parts of the papers I read.

It wasn't that I was uneducated or research illiterate. I simply had never been exposed to something like that until a doctoral program.
I had to look up p-hacking because it wasn’t called that when I was in the doctoral program. We did learn about most of the ways that could be done, though. Being able to consume/understand direct research (as opposed to secondhand media reports) and critically analyze and apply it is a skill that we are some of the best at. The smart clinicians of all stripes recognize and utilize that.

Most of the regular posters on here, especially the more traditional PhD folks, have more knowledge of this, and part of the reason I participate here is to steal their knowledge. On a fairly regular basis, I share the info that I glean here with the MA people I supervise. One thing to know is that it is hard to understand some of that without the foundational knowledge. In other words, when I get new information from others here, I can sift through it readily, verify or evaluate the research myself, and then arrive at my own conclusions as to how to apply that knowledge to my clinical practice. The folks I supervise have to pretty much take what I say at face value because they don’t have all of the years of education and immersion in the process of research evaluation.
 
Psychologists are midlevels. Medicare requires our patients to have a physician referral, which is the entire basis for CMS giving us the same status as chiropractors. The DEA licenses us as midlevels.

We just adopted the physician argument, without understanding it.
This. Psychology fails to understand history.
 
3. General distrust of EBP and mainstream medicine. I cannot deny that there are definitely some things that mainstream medicine has gotten wrong in the past, but folks will swing in the opposite direction. This is the appeal of things like TBKTS and bottom-up processing and primal scream therapy and all that.
4. Simplification of theory without understanding detail, nuance, or both. Cue the "CBT doesn't effect lasting change," "EMDR is the only thing that works for PTSD," "psychologists care about EBPs too much," "PsyDs are for clinical, PhDs are for research," "we should always validate the client to build rapport no matter what" (no, we should not be validating OCD patients' obsessions, what are you talking about?)

This is really key. Master's level training often skews very heavily towards the common factors paradigm, so many clinicians believe they can 'choose their theoretical orientation.' We can argue all day on whether that's valid or not, but master's level clinicians often only get exposure to varying paradigms and then are left with the impression that they have sufficient training to make that choice with agency rather than being adequately informed. Using T4C's mechanic analogy, this is like showing a mechanic videos on how cars are built and then saying: "ok, go design and build a car." Maybe some people will get it right because they are marshaling resources external to their training, but many will struggle. One way to offset this would be having entry requirements similar to those of psychology programs, but the same incentives toward keeping the entry point into the profession low would still apply.
 
To be fair, organizations develop clinical practice guidelines for this reason. The information is out there, but these clinicians either are unaware of it or think that they know better.
 
To be fair, organizations develop clinical practice guidelines for this reason. The information is out there, but these clinicians either are unaware of it or think that they know better.

Yeah, I think here is where the lack of research literacy really becomes key. Otherwise, why should I listen to the APA drone on about evidence-based practice when IFS speaks to my soul?
 
Not sure, it's a three-way tie between my Hogwarts house, my Enneagram, and my Myers-Briggs type indicator.
My classmate in private practice, who actively draws astrological birth charts for her clients, would be enamored with your soul.
 