EPPP Step 2: $$$

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.
And Fielding, Alliant, Albizu, etc. Also, just work in the clinical realm for a few years and you'll see that there is indeed a quality control problem. But, as stated before, this move is likely to bring psychology into alignment with how many other advanced practice disciplines set their licensing process up. Long overdue, IMO.
Yeah, this is part of why I think the EPPP2 is silly at this point. I'm not opposed to what it professes to do, but there are much easier, cheaper, and more straightforward ways to improve our outcomes that don't require all the stuff this does. I don't disapprove of testing clinical skills, but I think we can (and should) take other steps first to clean up our basic knowledge testing problem. Why try to tackle the hardest part of quality control first when we are unwilling to do the simple stuff?

 
Yeah, this is part of why I think the EPPP2 is silly at this point. I'm not opposed to what it professes to do, but there are much easier, cheaper, and more straightforward ways to improve our outcomes that don't require all the stuff this does. I don't disapprove of testing clinical skills, but I think we can (and should) take other steps first to clean up our basic knowledge testing problem. Why try to tackle the hardest part of quality control first when we are unwilling to do the simple stuff?

What's the simple stuff? And, why is bringing our licensure process into sync with other advanced healthcare providers silly?
 
I am thinking I agree with the points that Wisneuro is making about this. I still think it would be better to have this done earlier in the process, but the boards are the ones who are ultimately responsible for ensuring adequate training and competency. When this is implemented, it would seem that there should be no reason why schools couldn't utilize it as part of their clinical competency exams. Of course, calling it step 2 kind of gets in the way of that concept.
 
Since we are the experts at test making (and at assessing the value of said tests), it still seems odd that the decision to have a step 2 is not based on current outcome data associated with step 1. Unless there truly is a gatekeeping aspect to step 2, and that gatekeeping can be demonstrated to be based on clinical outcomes, I fail to see the value in it. I agree that our profession is (and should be) moving toward higher accountability and boarding, but without outcome data it serves the same purpose as a vanity board: just taking money with no benefit to the field or the public. Since the ASPPB has a monopoly and stranglehold on the entire field of practicing psychology, I do think some healthy skepticism is warranted. We are psychologists. Where is the data saying that we need this and that step 2 is the appropriate solution?
 
Since we are the experts at test making (and at assessing the value of said tests), it still seems odd that the decision to have a step 2 is not based on current outcome data associated with step 1. Unless there truly is a gatekeeping aspect to step 2, and that gatekeeping can be demonstrated to be based on clinical outcomes, I fail to see the value in it. I agree that our profession is (and should be) moving toward higher accountability and boarding, but without outcome data it serves the same purpose as a vanity board: just taking money with no benefit to the field or the public. Since the ASPPB has a monopoly and stranglehold on the entire field of practicing psychology, I do think some healthy skepticism is warranted. We are psychologists. Where is the data saying that we need this and that step 2 is the appropriate solution?
Data Shmata :p!
 
Since we are the experts at test making (and at assessing the value of said tests), it still seems odd that the decision to have a step 2 is not based on current outcome data associated with step 1. Unless there truly is a gatekeeping aspect to step 2, and that gatekeeping can be demonstrated to be based on clinical outcomes, I fail to see the value in it. I agree that our profession is (and should be) moving toward higher accountability and boarding, but without outcome data it serves the same purpose as a vanity board: just taking money with no benefit to the field or the public. Since the ASPPB has a monopoly and stranglehold on the entire field of practicing psychology, I do think some healthy skepticism is warranted. We are psychologists. Where is the data saying that we need this and that step 2 is the appropriate solution?

As I intimated earlier, if this is your argument, you must apply it across all of healthcare. Thus, the argument is that every advanced practice healthcare provider should be able to practice with no licensure exam. Is this what you believe? Are you willing to undergo surgery from a surgeon who passed no exams, or undergo general anesthesia from an anesthesiologist who never had to pass a knowledge exam post med school?
 
As I intimated earlier, if this is your argument, you must apply it across all of healthcare. Thus, the argument is that every advanced practice healthcare provider should be able to practice with no licensure exam. Is this what you believe? Are you willing to undergo surgery from a surgeon who passed no exams, or undergo general anesthesia from an anesthesiologist who never had to pass a knowledge exam post med school?

That's a pretty ridiculous interpretation, and not at all what I am saying. I specifically said I think our field should be moving toward boarding. By that logic, why not have 1000 tests, since 1000>1?

With regard to licensing and boarding, the practice of psychology is not perfectly analogous with medicine for a few key reasons: we receive far fewer complaints than medical professionals, and we have far fewer patient deaths, dismemberments, or other objectively adverse results from inadequate/incompetent work (hence our FAR cheaper liability insurance rates). Because of this, we have to do some form of data collection to figure out how we should measure professional performance, and how to discriminate between the professionals who will or will not perform to an appropriate standard of competence.

As far as I know, our field does not enjoy large, pre-existing, ongoing, government-funded data collection efforts regarding patient outcomes (unlike medicine). I do not think there is consensus within our field about how we should even measure this (except for performance that falls at the extreme ends of the spectrum / is clearly unethical or illegal). How do we measure, objectively, good clinical practice? We can test for legal and ethical knowledge (the EPPP is supposed to do this, as well as state-level exams), we can examine complaints (hardly any, compared to medicine), and what else shall we do? This is what needs to be established: what are we looking for, and how do we know when we see it (i.e., does the EPPP phase 2 have the right sensitivity/specificity in discriminating between examinees)?

If we come to the conclusion that the EPPP is inadequate, it behooves us to make sure the EPPP2 isn't just more of the same.
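The sensitivity/specificity question above can be made concrete with a minimal toy sketch. All scores, labels, and the cutoff below are invented purely for illustration; nothing here reflects real EPPP data or any actual passing standard:

```python
# Toy sketch: how a licensing exam cutoff trades off sensitivity
# (catching the truly competent) against specificity (screening out
# the not-competent), given some later outcome judgment.

def sensitivity_specificity(scores, labels, cutoff):
    """labels: True = later judged competent, False = not competent."""
    tp = sum(1 for s, ok in zip(scores, labels) if s >= cutoff and ok)
    fn = sum(1 for s, ok in zip(scores, labels) if s < cutoff and ok)
    tn = sum(1 for s, ok in zip(scores, labels) if s < cutoff and not ok)
    fp = sum(1 for s, ok in zip(scores, labels) if s >= cutoff and not ok)
    sensitivity = tp / (tp + fn)  # fraction of competent examinees who pass
    specificity = tn / (tn + fp)  # fraction of not-competent who are screened out
    return sensitivity, specificity

# Invented exam scores paired with invented later-competence judgments.
scores = [400, 450, 500, 520, 560, 600, 640, 680]
labels = [False, False, True, False, True, True, True, True]
print(sensitivity_specificity(scores, labels, cutoff=500))  # → (1.0, 0.6666666666666666)
```

The point of the sketch is only that a cutoff can look fine on one axis (here, it passes every competent examinee) while doing a mediocre job on the other; establishing which trade-off a real exam makes requires exactly the outcome data the post is asking for.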
 
That's a pretty ridiculous interpretation, and not at all what I am saying. I specifically said I think our field should be moving toward boarding. By that logic, why not have 1000 tests, since 1000>1?

If we come to the conclusion that the EPPP is inadequate, it behooves us to make sure the EPPP2 isn't just more of the same.

Not ridiculous in the least, just the fact of the situation taking into account logistics and how things actually work. As I said, I'm all about doing the research, as soon as we find the tens of millions of dollars that are necessary to do it the right way. Perhaps we jack up the exam cost 10x to make it happen. In the meantime, we do what other healthcare disciplines do: set benchmarks that a panel of experts who currently serve in the field have agreed are necessary. It's the ideal solution in a non-ideal, real-world setting.
 
Not ridiculous in the least, just the fact of the situation taking into account logistics and how things actually work.
I was not saying that your points or ideas were ridiculous - I was saying that your interpretation of my earlier post to mean that I didn't think we (or any other medical profession) needed a licensure exam was ridiculous. It seemed to me to be an odd philosophical departure from my original point, which was that to justify or develop a new exam, we needed to do some basic data collection as a first step. Medicine already does this. It's a little embarrassing that, as scientists who specialize in creating tests, we don't do the necessary work to make sure the tests we give ourselves are appropriate.
 
I was not saying that your points or ideas were ridiculous - I was saying that your interpretation of my earlier post to mean that I didn't think we (or any other medical profession) needed a licensure exam was ridiculous. It seemed to me to be an odd philosophical departure from my original point, which was that to justify or develop a new exam, we needed to do some basic data collection as a first step. Medicine already does this. It's a little embarrassing that, as scientists who specialize in creating tests, we don't do the necessary work to make sure the tests we give ourselves are appropriate.

Do they, though? I am not aware of appropriate outcomes data that is tied to things like step exams.
 
Medicine may not be tying outcomes to licensing/boarding exams like they should (though some studies can easily be found on Google Scholar), but as I noted earlier, medicine has health database organizations, and psychology just doesn't have that kind of data lying around.

So we can create all kinds of new licensing/boarding tests if we want to, but we haven't even established that they add value. Crappy training programs abound that have APA accreditation, and all of us know horrible practitioners that are licensed. Without data, the next test could just become more of the same. If every new psychologist is going to be required to shell out hundreds of dollars more to get licensed, there should be some kind of demonstration of need and purpose. Boarding is obviously a better solution, but boarding is also voluntary. The phase 2 exam would be a requirement, which creates a moral hazard: The ASPPB has the ability to charge whatever they want and any new psychologist will have to pay it just to get their career started. Because of that moral hazard, there should be a clear demonstration of the need and value of the second exam PRIOR to deciding to implement it. It doesn't seem that this is the case.
 
Medicine may not be tying outcomes to licensing/boarding exams like they should (though some studies can easily be found on Google Scholar), but as I noted earlier, medicine has health database organizations, and psychology just doesn't have that kind of data lying around.

So we can create all kinds of new licensing/boarding tests if we want to, but we haven't even established that they add value. Crappy training programs abound that have APA accreditation, and all of us know horrible practitioners that are licensed. Without data, the next test could just become more of the same. If every new psychologist is going to be required to shell out hundreds of dollars more to get licensed, there should be some kind of demonstration of need and purpose. Boarding is obviously a better solution, but boarding is also voluntary. The phase 2 exam would be a requirement, which creates a moral hazard: The ASPPB has the ability to charge whatever they want and any new psychologist will have to pay it just to get their career started. Because of that moral hazard, there should be a clear demonstration of the need and value of the second exam PRIOR to deciding to implement it. It doesn't seem that this is the case.

So, the answer is no, they do not as far as you know either. As I said, it is the ideal solution when pragmatics and the nature of the issue are taken into account in these non-ideal, real world contexts. It brings us into alignment with our peers in the advanced practice healthcare marketplace. I still have not seen a cogent argument for why this is not a good first step in implementation.
 
So, the answer is no, they do not as far as you know either. As I said, it is the ideal solution when pragmatics and the nature of the issue are taken into account in these non-ideal, real world contexts. It brings us into alignment with our peers in the advanced practice healthcare marketplace. I still have not seen a cogent argument for why this is not a good first step in implementation.

They actually do:

- Physician Scores on a National Clinical Skills Examination as Predictors of Complaints to Medical Regulatory Authorities
- Association Between Licensing Examination Scores and Resource Use and Quality of Care in Primary Care Practice
- The Relationship Between Licensing Examination Performance... : Academic Medicine
- Exploring the Relationships Between USMLE Performance and... : Academic Medicine
- Correlation of United States Medical Licensing Examination and Internal Medicine In-Training Examination performance

I struggled to find any articles related to the predictive power of the EPPP, but please do share if you find something.
 
These articles actually bolster my argument of enacting the test and tracking outcomes. All of these studies are retrospective designs conducted after the test was put into practice, which is exactly what we should do as well. The EPPP step 1 is merely a general knowledge test, kind of like a miniature Step 1 equivalent. EPPP 2 would be more equivalent to something like Step 2 CK, I believe. If anything, I'm only more convinced that the EPPP 2 is necessary after seeing these.
 
Mark my words, licensing boards exist to sustain themselves and will continue to create new exams as revenue streams as quickly as they can justify doing so. The USMLE and MOCA should stand as warnings of what awaits you should you not stand your ground early on.
 
Mark my words, licensing boards exist to sustain themselves and will continue to create new exams as revenue streams as quickly as they can justify doing so. The USMLE and MOCA should stand as warnings of what awaits you should you not stand your ground early on.

The MoCA is a screening instrument for dementia.
 
These articles actually bolster my argument of enacting the test and tracking outcomes. All of these studies are retrospective designs conducted after the test was put into practice, which is exactly what we should do as well. The EPPP step 1 is merely a general knowledge test, kind of like a miniature Step 1 equivalent. EPPP 2 would be more equivalent to something like Step 2 CK, I believe. If anything, I'm only more convinced that the EPPP 2 is necessary after seeing these.
Just because these studies came out after implementation of the licensing exam for physicians does not mean that's how it was first implemented. The older tests doctors used to take had no predictive validity. How do we know what the EPPP is even doing? It's out now; where are all the studies on its validity?

https://pdfs.semanticscholar.org/ac5d/6af771c6b4179f02cd1c0c6b9822d2cf0852.pdf
 
What's the simple stuff? And, why is bringing our licensure process into sync with other advanced healthcare providers silly?
The simple stuff is the front-end qualification for program accreditation. The EPPP2 may fix some of the quality control problem (let's assume it does; we don't have much data on the incremental validity of the EPPP, much less of the EPPP2), but it won't fix most of it (e.g., how many threads do we see about multiple failures of the EPPP? It costs money but doesn't stop folks in the long run most of the time). The truth is that we know where a lot of the quality control issue stems from (not all, but a lot comes from poor initial training), so why not address that first? I don't oppose coming into line with what others are doing with respect to having clinical quals, but let's start with the basic steps that we know work to improve quality control outcomes.

Oh right, APA doesn't want to....
 
Just because these studies came out after implementation of the licensing exam for physicians does not mean that's how it was first implemented. The older tests doctors used to take had no predictive validity. How do we know what the EPPP is even doing? It's out now; where are all the studies on its validity?

https://pdfs.semanticscholar.org/ac5d/6af771c6b4179f02cd1c0c6b9822d2cf0852.pdf

Good question; we should look at some aspects of that for EPPP 1. I'm all for it. It is a test of knowledge, so in theory it will have lower predictive validity than a clinically based test, a la EPPP 2. But, as you have pointed out, moving to an additional clinical knowledge application based test, such as proposed in EPPP 2, has a pre-existing model that has shown validity: the Step 2 CK. The EPPP 2 should incorporate that. It's a great argument for their proposal.
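For concreteness, "predictive validity" in this back-and-forth boils down to something as simple as the correlation between an exam score and a later outcome measure. A minimal sketch, with entirely invented scores and outcome ratings (no real EPPP or quality-of-care data):

```python
# Toy sketch: predictive validity as a Pearson correlation between
# exam scores and a later outcome measure. All numbers are invented.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented exam scores and invented later quality-of-practice ratings.
exam = [500, 540, 580, 620, 660]
outcome = [2.1, 2.4, 3.0, 2.9, 3.6]
print(round(pearson_r(exam, outcome), 3))  # → 0.956
```

The hard part is not this arithmetic but obtaining a defensible `outcome` column in the first place, which is exactly the data-collection gap the thread keeps circling back to.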
 
The simple stuff is the front-end qualification for program accreditation. The EPPP2 may fix some of the quality control problem (let's assume it does; we don't have much data on the incremental validity of the EPPP, much less of the EPPP2), but it won't fix most of it (e.g., how many threads do we see about multiple failures of the EPPP? It costs money but doesn't stop folks in the long run most of the time). The truth is that we know where a lot of the quality control issue stems from (not all, but a lot comes from poor initial training), so why not address that first? I don't oppose coming into line with what others are doing with respect to having clinical quals, but let's start with the basic steps that we know work to improve quality control outcomes.

Oh right, APA doesn't want to....

Exactly, they don't want to/have a vested interest in doing nothing. Which is another reason why it's important for states to assume quality control in ways that they see fit. If the supply side refuses to do anything about it on the front and middle end, they have every right to regulate the back end as they see fit for public interest.
 
Anyone know how EPPP pass rate is obtained when programs ask about that information? I assume this differs by program in the wording used (Have you passed the EPPP vs. Did you pass the EPPP the first time, etc.). Imagine two programs with equal 100% pass rates: in one, everyone passes after the first exam, and in the second, everyone passes after the tenth exam. My point here is that if Program B has a 100% pass rate but everyone has to take the exam 10 times, there is zero gatekeeping, and that program appears equal to Program A. You might call it gate delaying, but I'm not sure all this is worth a 6-12 month delay if the practice of those psychologists is not reflective of the quality level anyway for the remainder of their careers. If we are going to argue we need more gatekeeping, I don't want to see that the EPPP2 is agreed on by experts; I want to see that it effectively discriminates between two groups (good psychologists/crappy psychologists). If it isn't doing that, why do we need it?

This reminds me a bit of the oral exams they used in Texas to evaluate clinical skills. They are ditching it. Why? Because it doesn't discriminate quality of clinician or predict ethical issues with the board, so there isn't any incremental utility. Until we see that from the EPPP2, they are putting the cart before the horse. Thus, I don't see adding a test to keep up with the Joneses as fixing the numerous issues with this process. We can blame the supply side and say we need to develop better backends, but we don't even know how well those backend solutions are resolving the problem they are intended to fix. The problems, instead, remain consistent (crappy training models). If the 'backend' (the ASPPB isn't really backend in my opinion) can create an entirely new system/set of requirements for licensure... maybe the backend isn't as powerless as imagined...
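The Program A vs. Program B point above can be sketched numerically. The attempt counts are invented, purely to show that an "eventual pass rate" statistic can hide the distinction entirely:

```python
# Toy sketch: two programs with identical "eventual" EPPP pass rates
# can look very different on first-attempt pass rates. Attempt counts
# are invented for illustration.

attempts_a = [1, 1, 1, 1, 1]    # Program A: everyone passes on the first try
attempts_b = [1, 3, 5, 8, 10]   # Program B: everyone passes eventually

def pass_rates(attempts):
    """Return (eventual pass rate, first-attempt pass rate)."""
    eventual = sum(1 for a in attempts if a >= 1) / len(attempts)
    first_try = sum(1 for a in attempts if a == 1) / len(attempts)
    return eventual, first_try

print(pass_rates(attempts_a))  # → (1.0, 1.0)
print(pass_rates(attempts_b))  # → (1.0, 0.2)
```

If a program only reports the first number, the "gate delaying" described above is invisible; any claim that the exam gatekeeps would need the second number, or better, an outcome measure downstream of licensure.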
 
Mark my words, licensing boards exist to sustain themselves and will continue to create new exams as revenue streams as quickly as they can justify doing so. The USMLE and MOCA should stand as warnings of what awaits you should you not stand your ground early on.
I wasn't disagreeing. I was honestly confused. Actually glad that a physician chimed in, since I know that there is a lot of dissatisfaction with some of the testing requirements. Maybe we could actually learn something from that? Naw, we'll just reinvent the same old broken wheel, or more likely worse. :mad:
 
Why not standardize comps within grad programs before internship to contain a practice component like the questions to be asked on the EPPP 2? Some folks can tackle that if they want to advocate for it via APA. That way you're tested far before licensure time and the gatekeeping isn't after the fact.

Our complaint rates in this field are low; that isn't to say we can't improve our practice, but I'm with the others in thinking we need more data AND there are things we can do prior to the point of taking licensure exams.

Also, ASPPB makes a nice profit from the many folks taking the EPPP1 every year. That wasn't supposed to be the goal of the organization, but it's certainly a nice little bonus outcome... why not use this profit to fund the EPPP2 instead of creating another means (with no data support) to make twice the profit? Where exactly is that money going? I think these are fair questions to ask the body that controls our licensure exams.
 
I think there's a gatekeeping issue in the field, and adding to the EPPP (which I think is a pretty low standard) would be good. The issue that constantly comes up is where to do the gatekeeping. It's unfortunate that this isn't done more at the level of the training programs and even at internship. It has been my experience that people just get passed on through every aspect of training even if they are not ready to enter the field. I'll use internship as an example: being part of training programs, I've seen students passed on internship who are not ready to enter the field and practice independently. The mentality always seems to be that no one wants to take the responsibility for holding this individual back, and the attitude of "they got this far, so let's just pass them on" seems quite common. This also becomes a problem for programs that have captive internships where there really is no check at all.

I'm not as read up on the EPPP2 as maybe I should be, but I am definitely in support of creating higher standards for quality control.
 
Since we are the experts at test making (and assessing the value of said tests), it still seems odd that the decision to have a step 2 is not based on current outcome data associated with step 1.

It’s a political and economic (revenue generating) decision...not one of science and supportive data.
 
Also, ASPPB makes a nice profit from the many folks taking the EPPP1 every year. That wasn't supposed to be the goal of the organization, but it's certainly a nice little bonus outcome... why not use this profit to fund the EPPP2 instead of creating another means (with no data support) to make twice the profit? Where exactly is that money going? I think these are fair questions to ask the body that controls our licensure exams.

What percentage of the test fees go to "profit" vs administration and updating of the test? Is this backed up by anything, or baseless conjecture? I'm all about transparency, but we should probably stay away from conspiracy mongering, or just go over and join modernpsychologist in the tin foil hat brigade.
 
What percentage of the test fees go to "profit" vs administration and updating of the test? Is this backed up by anything, or baseless conjecture? I'm all about transparency, but we should probably stay away from conspiracy mongering, or just go over and join modernpsychologist in the tin foil hat brigade.
I only have access to the 2016 IRS forms, but with 12 directors/executives, $200k was spent on conferences and workshops alone, and almost $900k out of $5 million in revenue went to travel expenses in 2016, in addition to the $1.7 million expense for salaries and compensation. They break it down further if you'd like to dig some more.
 
I only have access to the 2016 IRS forms, but with 12 directors/executives, $200k was spent on conferences and workshops alone, and almost $900k out of $5 million in revenue went to travel expenses in 2016, in addition to the $1.7 million expense for salaries and compensation. They break it down further if you'd like to dig some more.

Considering the size of the organization, none of this is surprising, or alarming in the least.
 
The thought of EPPP step 2 makes me sooooo anxious, and I've passed the EPPP and will be licensed before 2019.
Congratulations. Hopefully your licensing board doesn't implement it as a requirement before you get licensed. I know that if something like this had been added in the midst of getting licensed, I would have been a tad frustrated, to say the least.
 
Congratulations. Hopefully your licensing board doesn't implement it as a requirement before you get licensed. I know that if something like this had been added in the midst of getting licensed, I would have been a tad frustrated, to say the least.

2020 is the earliest roll out date. And, it's no real surprise, it's been talked about for years at this point.
 
2020 is the earliest roll out date. And, it's no real surprise, it's been talked about for years at this point.
I still wouldn’t have liked it and especially not to be the first year rollout. I remember how relieved I was that California got rid of the oral exam during my doctoral program which ended up moot since I didn’t get licensed there. The irony is that I still ended up having to do an oral exam in another state. It turned out to not be too bad since by that time I had already been licensed and the questions were fairly basic although I still didn’t like it. :yuck:

Edited to add: just to be clear, not liking something is not really an argument for whether or not something is a good idea. I didn’t like some of my vegetables when I was a kid either.
 
Congratulations. Hopefully your licensing board doesn't implement it as a requirement before you get licensed. I know that if something like this was added happened in the midst of getting licensed, I would have been a tad frustrated to say the least.
I'm skeptical that they will make those who have taken it previously retake it to get licensed after it is implemented. That would be a nightmare and a lot of frustration for anyone who was transferring between states or getting licensed in a second state. Pretty sure it'll have a grandfather clause.
 
Per ASPPB EPPP 2 FAQs:
Is the EPPP Part 2 going to be used for already licensed psychologists when they renew their licenses?
The EPPP Part 2 (skills) is being developed for entry-level licensure. It has not been conceptualized, nor is it being developed for use in assessing maintenance of competence for already licensed psychologists. Just as the EPPP is not used to assess maintenance of competence, the enhanced EPPP, including the EPPP Part 2 (skills), will assess entry-level competence to practice at the independent level. ASPPB recommends that the EPPP Part 2 not be administered to any psychologist licensed prior to January 1, 2020.
 
So is there any benefit to rushing to get licensed and take Step 1 as soon as possible to avoid Step 2? Part of that idea feels 'icky', trying to avoid another test, but I am not particularly worried about my training or competency, and it really does come down to an additional financial burden and hurdle. If it's the state of the field, I am open to it, but if I can be licensed before the rollout, that seems beneficial as well.
 