Confused


TXneuronerd15
Full Member
Joined: Jan 20, 2020 · Messages: 25 · Reaction score: 1
When the EPPP is scored, do they shave off literally the last 50 consecutive questions, or is it random? Say I got 151 correct out of 225 overall, but not within the first 175 consecutive questions; when they shave off the 50, do I now have a score of 151/175? This is a make-or-break interpretation for me. Please and thank you.

 
Are you talking about the pretest/experimental items? I believe that those are randomly scattered throughout the test. If it's like the old days, they are potential future test questions that they want to run IRT stats on to see if they should be included on future exams.
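
For anyone curious what "running IRT stats" on pretest items looks like in practice, here is a minimal sketch in Python of the classical item statistics (difficulty and discrimination) that typically accompany a full IRT calibration. The response data is simulated purely for illustration; it is not real EPPP data, and the flagging thresholds are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Simulate 1,000 examinees answering 50 pretest items (1 = correct);
# real programs would use actual response matrices.
ability = rng.normal(size=(1000, 1))
difficulty = rng.normal(size=(1, 50))
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
responses = (rng.random((1000, 50)) < p_correct).astype(int)

total = responses.sum(axis=1)  # each examinee's total pretest score

# Item difficulty: proportion of examinees answering correctly.
p_values = responses.mean(axis=0)

# Item discrimination: point-biserial correlation of each item with the
# total score (item included in the total here, a common simplification).
pt_biserial = np.array([np.corrcoef(responses[:, j], total)[0, 1]
                        for j in range(responses.shape[1])])

# Items that are too easy, too hard, or non-discriminating get flagged
# for review rather than promoted to the operational item bank.
flagged = (p_values < 0.2) | (p_values > 0.9) | (pt_biserial < 0.15)
print(f"{flagged.sum()} of 50 simulated items flagged for review")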
 
  • Like
Reactions: 1 user
Hi, and thanks for the prompt response, and yes. These are practice tests, but I want to make sure I know how I actually did, so that I know I'm ready for the real one.
 
When I took the EPPP, I looked at the percentage I got right on practice tests. Once I was getting upper 60s, and I think one 70%, I sat for the test and passed with a much higher percentage. Most people I talked to had the same experience of practice tests in that range being fine for passing the real thing.
 
  • Like
Reactions: 1 user
I am a bit concerned about this. Adding experimental questions to the test for research purposes may affect test takers in unknown ways, and as far as I know that possibility hasn't been examined. Secondly, is informed consent obtained from test takers before they take a test that contains experimental questions? Lastly, are people who take the test even competent to give or refuse consent for the experimental questions, given the obvious power differential between the Board and the test takers? I would argue that they are not.
 
  • Like
Reactions: 1 user
Most standardized tests I’ve ever taken have had some kind of experimental questions on them (remember the extra experimental section on the GRE?). It’s pretty normal.
 
  • Like
Reactions: 2 users
It's been a while since I looked at the EPPP scoring, but results are essentially adjusted for every cohort taking the test, aren't they? I thought something like that occurred. If so, I would imagine that helps control for any effects on test takers of including experimental questions. Also, as was mentioned above, multiple CAT measures include experimental items, as do non-CAT tests such as the ABPP written exams.
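
For concreteness, one classical way testing programs put different forms and cohorts on a common scale is linear equating. The sketch below is purely illustrative, assuming a random-groups design; it is not ASPPB's published procedure, and modern programs typically use IRT-based equating instead.

import numpy as np

def linear_equate(x, mu_ref, sd_ref):
    """Linear equating under a random-groups design: rescale raw scores
    from a new form so their mean and SD match the reference form's scale.
    Illustrative only -- not ASPPB's actual scaling procedure."""
    x = np.asarray(x, dtype=float)
    return mu_ref + sd_ref * (x - x.mean()) / x.std()

# Hypothetical raw scores on a slightly harder form: after equating, the
# same relative standing maps onto the reference form's scale.
new_form = np.array([98.0, 104.0, 110.0, 121.0, 130.0, 142.0])
print(linear_equate(new_form, mu_ref=120.0, sd_ref=15.0).round(1))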

Regarding informed consent, if a power differential were enough in and of itself to invalidate consent, wouldn't that imply that pretty much every medical and mental health procedure could not be consented to, particularly when a procedure (e.g., organ transplant) or a status (e.g., decisional capacity) rests on the outcome of the evaluation or intervention? I'm nearly positive the EPPP materials do inform test takers that experimental items are included on the test. And examinees do ultimately volunteer to take the test.
 
  • Like
Reactions: 1 user
To clarify, everyone, here is my confusion. Since we don’t know which questions are experimental and which are not, you could pass or fail simply based on how many you got right or wrong in that entire section (i.e., I got 45/50 right in the experimental section but only 100 right out of the scored 175, so I fail; if those 45 had all been scored, I would have passed with a comfortable margin). As I study now, I’ve been randomly crossing out “experimental” questions and then scoring myself out of 175. That mirrors the real testing situation, but it makes it harder to set a benchmark of my readiness for the real thing. Thoughts? P.S. I took the test in January, and no, there wasn’t informed consent about the experimental section that I can recall. I read an article yesterday from someone claiming that the section is arguably against the ethics code.
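
A quick simulation makes the scoring mechanics concrete. Because the 50 unscored items are scattered at random, the expected percentage on the 175 scored items equals the overall percentage; only the spread changes. A minimal sketch in Python, using the hypothetical 151-of-225 figure from the opening post:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical answer sheet: 151 of 225 correct, in random positions.
answers = np.zeros(225, dtype=int)
answers[rng.choice(225, size=151, replace=False)] = 1

# Repeatedly treat a random 50 as unscored pretest items and
# score only the remaining 175.
pcts = [answers[rng.choice(225, size=175, replace=False)].mean()
        for _ in range(10_000)]

print(f"overall rate:         {151 / 225:.1%}")      # ~67.1%
print(f"mean over scored 175: {np.mean(pcts):.1%}")  # matches the overall rate
print(f"middle 95% of runs:   {np.percentile(pcts, 2.5):.1%}"
      f" to {np.percentile(pcts, 97.5):.1%}")

The upshot: randomly crossing out 50 practice questions is statistically fair, but it only adds noise; scoring out of all 225 gives the same expected benchmark with less variance.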
 
  • Like
Reactions: 1 user
I like the thought process here as a researcher, but as a person who just needs to pass a test, I think you are thinking too hard about this.

The experimental questions are scattered throughout the test. It will not matter how you answer them.

Just do your best.

Most standardized tests do this (e.g., the GRE). And anecdotally, with EPPP practice tests, I've seen people with consistent practice scores anywhere from 60% to 75+% correct take and pass the EPPP. The exact practice percentage shouldn't be a make-or-break marker of readiness.
 
  • Like
Reactions: 2 users
Are you arguing that answering the uncounted experimental items is impacting your performance on the actual test items to the degree that it is causing you to fail the exam?
 
I’m saying that’s possible. I’m not saying the section is unfair, but rather that it makes it harder to make an educated guess as to how ready you are. In the best case, I’d have gotten a 93 on a practice test, but only if the entire experimental section consisted of my incorrect answers; of the ones counted, I got a 75. My mistake was that when adjusting to be scored out of 175, I was shaving off 50 incorrect answers, and this made me think I was more ready than I actually was. Instead of a bunch of 80s and one 90, I actually had a bunch of mid-60s to mid-70s. Lesson learned.
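
In round numbers, the inflation from that mistake is easy to quantify. A hypothetical sketch (145 of 225 practice items answered correctly, the same figures used later in the thread):

# Quantifying the "shave off 50 incorrect answers" mistake, with
# hypothetical round numbers: 145 of 225 practice items answered correctly.
correct, total, scored = 145, 225, 175

# Pretest items are placed at random, so the expected rate on the 175
# scored items is simply the overall rate:
print(f"valid benchmark:   {correct / total:.1%}")   # ~64.4%

# Dropping 50 answers that were all *incorrect* keeps every correct answer
# while shrinking the denominator to 175 -- the inflated picture described:
print(f"inflated estimate: {correct / scored:.1%}")  # ~82.9%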
 
This is an easy test, don't overthink it.
Although I’ve been told that it is, I did better on the practice tests than on the real one, which I failed, so for me it’s not as easy as it was for previous test takers. I was amazed that seemingly everyone else did better on the one that matters than on the practice ones; I had the exact opposite experience.
 
I wonder if testing anxiety may be playing a role here.
 
  • Like
Reactions: 1 user
Test anxiety and switching answers both contribute to negative outcomes, generally speaking. Peruse the other EPPP threads here to see some of the strategies people have used to overcome these things.
 
  • Like
Reactions: 1 user
Agree with the others...trying to cross out experimental questions on practice tests (let alone doing it incorrectly) is already overthinking things. Never heard of anyone doing things like that before.

I think it's a very big stretch to raise ethical issues about things like this. Unless they have changed something, they disclose that there are experimental questions when you sign up for the test. It is plastered all over their website that there are experimental questions. It is in every description of the test I have ever read. If you talk to any psychologist about the exam, there is a good chance the experimental questions will come up in conversation. Every other large-scale test or licensing exam I am aware of does something similar. It is questionable whether this even meets the definition of "research," and testers are under no obligation to score every item when administering a test.

All that aside...I think virtually everyone would MUCH prefer this system to one where they are rolling out unvetted questions (or questions vetted only by a small subset of volunteers or professional test-takers).
 
  • Like
Reactions: 3 users
I agree, but I also wonder whether these "ethics" issues are being raised by people who are passing the exam without much difficulty.
 
  • Like
Reactions: 1 user
I actually think there is an argument to be made that there is no real choice to opt out of the experimental questions. It's either deal with it or not take the test at all, the latter of which is not a real option for anyone who wants to actually be licensed.

I hated the experimental questions on the GRE more than anything.
 
The spirit of "informed consent" should provide a reasonable opt-out/alternative to participating in said research. There is no reasonable opt-out or alternative for those seeking licensure in this situation. Seems...gamy to me?
 
  • Like
Reactions: 1 user
I wonder if testing anxiety may be playing a role here.
In response to everyone (thank you, by the way, for your prompt responses; you’re more prompt than my friends, lol): I did get a crappy night’s rest the day before and felt a little tired during the exam. I also started to feel anxious at about question 75, because it seemed like none of the questions were anything I had studied (which is why I grabbed old tests to learn the phrasing). The way they were phrased made me realize that knowing the concepts inside and out is what will save you from being confused by the questions; it took a bit just to understand what they’re looking for. I also studied for 59 days, 226 hours total (6-8 hours/day), so perhaps cramming wasn’t my thing; it never was in school. Lastly, I came up with this crossing-out method because it matches the real test, where the experimental questions are placed at random. If I base my progress on all 225, my percent correct will more than likely be lower than if I score out of 175. I’m trying to create a valid benchmark for progress as I take more practice tests.
 
The spirit of "informed consent" should provide a reasonable opt-out/alternative to participating in said research. There is no reasonable opt-out or alternative for those seeking licensure in this situation. Seems...gamy to me?

Again though...this all takes for granted that this constitutes "human subjects research" as we typically think of it. I'm extremely doubtful this would meet the "generalizable knowledge" criterion many IRBs use to determine whether something qualifies for review. It is more analogous to how marketing research is done, how software is often tested, or many QI initiatives in organizations...none of which (for better or worse) are held to the same ethical standards as what we think of as research.

To me, this whole discussion just seems like nit-picking because people don't like the fact that they have to take the EPPP. So they tack on some extra questions they don't score because they want to see item performance. Who cares? What alternatives might there be? Even if <in theory> one gets a version of the exam where the experimental questions are so wildly difficult everyone freaks out...pretty sure scores are being normed by exam (unless that has changed), so it shouldn't have a profound impact on the score anyway. Any potential ill effects of this seem pretty far-fetched....

TXneuronerd....perhaps I misunderstood, but it sounded like when doing the cross-out method you were throwing out X questions you got wrong. Of course that would inflate your score, since there is no guarantee you would get all the experimental questions wrong. I'm also not even sure how you knew which ones were experimental questions, since the practice exams I had never identified them, but maybe some sets do. The exam really shouldn't be a big deal. You studied more than enough (vastly more than I did), so I would refocus efforts on how you are studying. Whatever has worked for you in the past...do that. For me, that was reading. So I just read and re-read some old books a few times, then did some practice exams. Other folks prefer audio, or JUST doing practice exams coupled with self-study, or whatever else.
 
  • Like
Reactions: 2 users
To me, this whole discussion just seems like nit-picking because people don't like the fact that they have to take the EPPP. So they tack on some extra questions they don't score because they want to see item performance. Who cares? What alternatives might there be? Even if <in theory> one gets a version of the exam where the experimental questions are so wildly difficult everyone freaks out...pretty sure scores are being normed by exam (unless that has changed), so it shouldn't have a profound impact on the score anyway. Any potential ill effects of this seem pretty far-fetched....
And you'd think that people who ostensibly completed coursework in psychometrics and assessment and then used the knowledge in actual clinical work would understand this and not be looking for an excuse for not passing the exam....
 
So they tack on some extra questions they don't score because they want to see item performance. Who cares? What alternatives might there be?

Because that's how it starts...right?

If you can't say "no" without a reasonable alternative choice.....it's not "informed consent."

Come on guys, we can't "tack on some extra questions they don't score because they want to see item performance" in a Psych 101 class, can we? Or do we? Either way...
 
  • Like
Reactions: 1 user
Because that's how it starts.

If you can't say no without a reasonable alternative choice.....it's not "informed consent."
How long have they included experimental questions?

If that's where it started, where have those questions led in the years they have been including them?
 
For what it’s worth, I only have a master’s degree and have been out of school for four years, and from what I’ve read that can be a strong factor, as can how well your program’s curriculum lines up with the test material. I tried various methods: audio, note-taking, reading, and 11 practice tests. In the end, I think studying fewer hours per day, and therefore over a longer period of time, will give me my desired result. Thanks, everyone, for the input; this was just to get your “two cents” on this matter. I think this is as insightful as it’s gonna get.
 
@TXneuronerd15
Not to complicate it, but are you also aware that there are multiple EPPP versions floating around, some with “easier” or “harder” questions (scored so as to be fair, however)? I have brought up in the past the issue of test anxiety, and whether the combination of much higher test anxiety and a more difficult form of the test might be throwing a small percentage of test-takers off balance. While people’s vastly different takes on the difficulty of the test could be due to individual factors, they could also be due to different forms.

Also, I think it’s fair to question whether the lack of choice to opt out of experimental questions in order to be licensed in your own field is ethical, and just because it’s accepted across different fields/exams doesn’t make it ethical. But this also depends on whether we are holding ASPPB to the same ethical standards as researchers in our field doing human subjects research, as mentioned earlier.

Either way, this broader debate, while stimulating, will not be advantageous to your own preparation. Ultimately, it sounds like it may not be the content; perhaps focusing on the test-taking approach (don’t overthink, don’t change too many answers, go with what you generally know about the domain, flag a few tough ones and keep going) and on self-regulation strategies could be most helpful? There’s a subset of us who go pretty deeply into our heads under stress/strain and analyze everything (we’ve all done this at some point), so grounding in the present moment can be helpful, along with being practical about the process. Some positive/compassionate thoughts & calming breaths during the test can’t hurt...especially if you can recreate the feel of the EPPP via a practice-test environment as much as possible to rehearse those skills and strategies.
 
  • Like
Reactions: 1 user
Per HHS, QI initiatives (e.g., using IRT to examine the difficulty/utility of novel items on the EPPP) do not constitute human subjects research.

On the topic of informed consent, I've definitely had instructors use IRT to determine which items to drop from an exam, and I find the argument that EPPP informed consent is lacking to be weak. From my perspective, the likelihood that the stimulus properties of one or more "experimental" items would meaningfully impact respondents' overall performance on the EPPP seems low.

It seems even less likely that the properties of these "experimental" items would impact respondents' overall performance in a systematically consistent and different way from the "non-experimental" items included on the exam.
 
  • Like
Reactions: 2 users
I'm gonna agree with @acclivity here; y'all are a touch too sensitive/ridiculous on this issue. Considering that the overall pass rate is very high, despite the number of diploma millers also taking the exam, there is no real evidence that the inclusion of possible future test questions has any real impact here.
 
  • Like
Reactions: 3 users
I'm confused about this point as well.

So, the status quo is that you have answered 100 scored questions correctly out of 175 total scored questions (i.e., ~57% correct). Hypothetically, if all 225 questions were scored, then you would have answered 145 questions correctly out of 225 questions (i.e., ~64% correct).

Neither of these scores seems to be passing with a "comfortable margin." Are you implying that if you answered 145 out of 175 questions correctly (i.e., 50 questions are still dropped, but they happen to be only questions that you answered incorrectly) you would have passed comfortably? I'm admittedly not very familiar with the EPPP, but I've typically seen ~70% correct used as a proxy for what constitutes a passing score. Are you using a different threshold?

It also seems improbable that you would answer 90% of the experimental questions correctly (i.e., 45/50) and fewer than 70% of the non-experimental questions correctly (i.e., 100/175).
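
That improbability can be made concrete. If the 50 pretest items really are a uniformly random subset of the 225, the number of correct answers landing among them follows a hypergeometric distribution. A minimal sketch using the same hypothetical 145-of-225 figures:

from scipy.stats import hypergeom

# Population of 225 items, 145 answered correctly, 50 drawn at random as
# pretest items: correct answers among the pretest items are distributed
# hypergeometric(M=225, n=145, N=50).
pretest_correct = hypergeom(M=225, n=145, N=50)

print(f"expected correct among the 50 pretest items: "
      f"{pretest_correct.mean():.1f}")            # ~32.2
print(f"P(45 or more of the 50 correct): "
      f"{pretest_correct.sf(44):.1e}")            # vanishingly small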
 
  • Like
Reactions: 1 user
Per HHS, QI initiatives (e.g., using IRT to examine the difficulty/utility of novel items on the EPPP) do not constitute human subjects research.

On the topic of informed consent, I've definitely had instructors use IRT to determine which items to drop from an exam, and I find the argument regarding lack of informed consent regarding the EPPP to be weak. From my perspective, the likelihood that the stimulus properties of one or more "experimental" items would meaningfully impact respondents' overall performance on the EPPP seems low.

It seems even less likely that the properties of these "experimental" items would impact respondents' overall performance in a systematically consistent and different way from the "non-experimental" items included on the exam.

Thank you. Had made a note to dig up some HHS links when I had a minute and you beat me to it.
 
  • Like
Reactions: 1 user
I passed the EPPP easily on the first try, and I also found it obnoxious that I had to pay for the privilege of helping to test out experimental items.

Let's speculate that experimental items are more likely to be confusing, poorly worded, unreasonable, etc., by virtue of the fact that they haven't yet been included as regular items. If that's the case, we could also speculate that some test takers may experience an increase in anxiety during the test as a result of encountering subpar items. We also know that test anxiety can affect performance for some people. It's not a total stretch to guess that a certain number of test takers will be negatively impacted by items that do not factor into their actual score. Test takers have no way of knowing that the particular item they're stressing over has no impact on their score. There is no way to opt out of those items. If you want to take the EPPP, you have no choice but to assist the testing company in developing their items. There can be no informed consent if you can't decline to participate.

With that being said, I do think that our field tends to take an altruistic view on things like this (for better and for worse), and I suppose you could argue that you're helping future test takers by assisting the testing company in determining which items to use. And without the option to test out items using the current system, maybe the testing company would just decide to throw those items into the mix of scored items and hope for the best. I don't know.
 
  • Like
Reactions: 1 user
Even if it's not human subjects research, we can still condemn the practice and find it unethical as a whole. I'm pretty sure that I bombed the GRE the first time because of an experimental question section that wasn't labelled as such.
 
  • Like
Reactions: 1 user
I'm skeptical that the experimental questions influence much of anything either way. Seems to be a lot of hand wringing/pearl clutching to justify certain outcomes.

You really don't see how extra test questions, which haven't been vetted and may be overly difficult or confusing, could throw off someone's entire test performance? Especially if they have anxiety about test taking?
 
  • Like
Reactions: 3 users
They have been vetted; they are there now for additional verification. And the difficulty is likely fairly close, or balanced out between easier and harder questions. In the vast majority of circumstances, I think this is just faux outrage and trying to justify performance due to non-contributory factors.
 
  • Like
Reactions: 2 users
You really don't see how extra test questions, which haven't been vetted and may be overly difficult or confusing, could throw off someone's entire test performance? Especially if they have anxiety about test taking?
If someone who has ostensibly received both didactics and clinical training in assessment and psychometrics/test development gets so "thrown off" by experimental questions that they fail the exam, maybe they need to work on their own anxiety and issues before becoming a licensed clinician?

Also, this is part of the "vetting" process, but it surely isn't the beginning of it. It's not like someone was sitting in a room just writing down whatever questions popped into their head. These are items that have already gone through development, but more data is needed from typical EPPP test takers.
 
  • Like
Reactions: 1 user
CONSTRUCTION OF THE EXAMINATION

The examination development process is intended to maximize the content validity of the EPPP.

The ASPPB Item Development Committee (IDC) is appointed by the ASPPB Board of Directors and charged to oversee the item writing process. Members of the IDC are chosen for their expertise and credentials in the specific domains that comprise the content areas of the EPPP.

The ASPPB Examination Committee (ExC), along with ASPPB’s test vendor, is responsible for the construction of the EPPP. ExC members are appointed by the ASPPB Board of Directors and are chosen for their outstanding credentials and exceptional achievements in their respective specialties. Members of both committees are listed in the "EPPP Exam Information" section of the ASPPB website at The Association of State and Provincial Psychology Boards.

A brief outline of the item development process follows: Individuals with expertise in specific domains of the EPPP write questions that are submitted for consideration. Members of the IDC train item writers on how to write questions for the EPPP and how to submit questions to be considered for the EPPP item bank.

1. Once an item is submitted for review, a process of validation occurs between the item writer and a subject-matter expert on the IDC. Items are evaluated for style, format, subject matter accuracy, relevance to practice, professional level of mastery, contribution to public protection, and freedom from bias.

2. Once judged by the IDC subject-matter expert to be of sufficient quality, items receive an additional level of editorial and psychometric review by ASPPB’s test vendor staff to ensure conformity to established psychometric principles and the EPPP Style Guidelines.

3. Items that are approved by IDC subject-matter experts are then entered into the EPPP Pretest Item Bank.

4. A draft Examination is constructed on the basis of a content outline derived from a job analysis and role delineation study of the profession of psychology (see below). At a meeting of the ExC, the preliminary draft is reviewed item-by-item. Items are reviewed, validated, and/or replaced with bank questions in accordance with the test specifications and the ExC’s expert judgment. This draft is taken from the Operational Item Bank and so is made up only of items with known psychometric properties.

5. ASPPB’s test vendor staff constructs a second draft of the EPPP in accordance with the ExC review of and comments on the first draft, and at the next meeting of the ExC, this second draft of the Examination is reviewed item-by-item. Committee members use their content expertise and the item statistics to draft a final form of the Examination.

A final form of the Examination is constructed on the basis of the ExC’s second review and comments, and is then uploaded into Pearson VUE’s system. The finalized form of the EPPP is supplemented with 50 items for pre-testing. These pre-test items are randomly distributed throughout the test and are not counted as part of a candidate’s score. The total number of items on the EPPP is 225, 175 of which are operational (and will be scored) and 50 of which are pretest items.
 
  • Like
Reactions: 1 user
In the vast majority of circumstances, I think this is just faux outrage and trying to justify performance due to non-contributory factors.

As I said earlier in this thread, I passed by quite a large margin on my first try, as did everyone in my grad school cohort. Within that small group of us who passed the test easily (and therefore have no need to justify our performances), we all felt annoyed by the concept of being required to help out the testing company in order to get licensed.

One of my biggest challenges as an early career psychologist has been setting boundaries and limits around my work and my time. I already work more hours than I am paid to work each week because I like the work that I do, but I've started to set hard limits around taking on additional responsibilities that are not fulfilling and do not build toward my career goals. My time has value, at least to me.

I have no idea how much time I spent working through the experimental items, but on an individual level I'm sure it wasn't significant. If we sum the total number of minutes that every test taker has spent working through experimental items on the EPPP, however, that adds up to a large investment of time. That time has value, and it's also a lot of unpaid work done by people who could not choose to opt out. I'm not going to stage a protest about it, but it's pretty annoying.
 
  • Like
Reactions: 1 user
Of all the things to complain about in the training and licensing process for becoming a psychologist, the minutes spent on EPPP experimental items seem pretty low on the priority list.
 
  • Like
Reactions: 1 user
Okay.

Like I said, I'm not going to stage a protest about it. That doesn't mean that it isn't annoying.
 
An "annoyance" is far different from insinuating unethical conduct or claiming that one is failing the EPPP because they have to answer experimental items.
 
  • Like
Reactions: 1 user
Out of curiosity, can anyone think of a viable alternative to having these experimental questions on the exam? I have been thinking about this over the course of this thread and am coming up with zilch. You can't reuse questions indefinitely and at some point new questions have to get on the exam. There is a reason most (all?) standardized exams have experimental questions.

It is based on content knowledge, so you can't really norm on volunteers who haven't prepared for the exam in a lower-stakes environment (e.g., paying current grad students to take it) the way you can with a typical assessment. You also can't just ask for volunteers at the EPPP, since the pool would unquestionably differ from actual test-takers, and then you'd have items vetted solely by people with extra time who don't mind tests. You can't have items vetted by current practitioners, because there is no guarantee their performance would generalize to less-experienced students learning from a book. All the solutions I can think of involve having actual SCORED but not-fully-vetted questions on the exam; they could then drop poorly performing items after enough data accumulates. Yet the questions would still be on the exam and, worse, we would not get an official score for 6 months (or worse...having your score flip from passing to failing or vice versa after 6 months)...it just sounds horrific.

If someone can present a scientifically-sound alternative I'm certainly open to it. I'm arguing in favor of the current method because I genuinely can't think of another one that works.
 
  • Like
Reactions: 6 users
What about the practice exam that some folks pay for, which replicates the testing environment and requires studying... or is that also prone to the same kind of sample bias?
 
Cohort effects. Most EPPP test takers do not take the computerized practice exam, and some people take it multiple times. It would be terrible for the psychometric analyses.
 
  • Like
Reactions: 1 user
I would think heavily prone to bias. I do not know anyone who took it and would guess it is made up of an overall weaker pool of test-takers who feel they need more practice.
 
  • Like
Reactions: 1 user