Diversity of Experiences

RxPsych, Clinical Psychology PhD Candidate (joined Mar 11, 2021)
How does everyone view the importance of diverse training experiences when it comes at the cost of varied quality? I'm beginning a new neuropsych practicum at an AMC where none of the supervisors are board-certified and they don't adhere to AACN consensus standards: no performance validity measures (none!), no standardized descriptive ranges, etc. The current postdoc did not attend an APA-accredited internship, and the site in question does not have an APA-accredited postdoc. My previous practicum sites had training much more aligned with AACN standards and expectations.

I'm trying to keep an open mind, but I have concerns about the quality of training. If anyone here has had a similar experience, how did you deal with it? Just suck it up, learn what not to do, and move on when you got the chance? The advocate in me would love to be a force for change, yet I don't want to ruffle any feathers.

Realistically, you're not going to change a lot of opinions of established folks as a practicum student, though approaching conversations with curiosity about why they don't use PVTs could be interesting. I think most folks agree that certain standalone PVTs are inappropriate in some rare cases (I won't comment on why since this is a public forum, but there were talks at INS this year about this, for example), but not having them in your possible toolbox at all seems bizarre to me.

Also, what do you mean by descriptive ranges? The AACN consensus paper? I know a lot of people haven’t adopted the exact terminology for a variety of reasons, but I agree it’s a good starting point. Or are you saying they aren’t describing test performances at all?

If you’re really concerned the experience will be poor quality, I would talk to your faculty mentor and/or DCT.
 

Curious, as I don't think most would agree here. I think it may be more accurate to say that most folks agree that PVTs have to be considered and interpreted within the context of the patient and the potential disease process at work, but they should almost always be part of the evaluation.
 
I agree that it's unlikely you're going to change any minds, and trying to do so may not work out well for you. Besides that, I'm of the opinion that there's usually something you can learn from everyone. Sure, you can learn aspects of what not to do at this site, but just because the providers don't practice in a way consistent with all published guidelines doesn't necessarily mean there isn't anything you can learn from them. Maybe they have aspects to their interview style that are unique and helpful. Maybe they're great at establishing rapport with patients or collaborating with other providers. Maybe they have insights into interpretation (validity notwithstanding) and brain function you haven't heard before.

RE: the AACN descriptors, I myself am guilty of not always adhering to those. I actually don't provide any qualitative descriptions of results in my results tables, and when I'm describing findings in my summary section, I'm just as likely to say there's evidence of mild decline as I am to say that the results in X or Y domain are below average.

RE: PVTs, I can't think of many, if any, situations in which administering them would be wholly inappropriate. But I've had occasional supervisors who didn't use them, and I still learned a lot from those supervisors.
 
In regard to PVTs, there are embedded measures in some pretty standard tests, so it's possible that a) some just utilize those, b) they don't know or understand them and claim they don't use any, or c) they utilize other data or insight to try to assess performance validity, among other things.
 
I agree, and that’s a more accurate way of phrasing what I mean. However, I’ve seen more than my fair share of reports where a patient with a well-documented neurological disorder (e.g., basically confirmed with labs and imaging) has been told they have poor effort or are even malingering because of a PVT score. Most places I have trained don’t use the memory standalones in these cases for fear that an outside provider would misinterpret the score, even if the score is interpreted within the patient’s context in our report.

I do think, at a minimum, 1-2 embedded measures should be a part of all evaluations, and that's pretty easy to achieve with common batteries.
 

Having worked in the VA and similar settings, I can definitively tell you that people with bona fide neurological diseases can, and do, still malinger; I've had people admit to this. I've had more than one person say that they felt they had to "prove" their issues on testing and intentionally sandbagged things. So PVT/SVT testing is still very important, even in these cases, to determine the validity of the data. Of course, we should be informed about how these tests work and consider alternative cutoffs depending on the context of the evaluation.
 
Agreed, I've also had folks with verified neurological conditions perform abysmally on PVTs (and other testing) in ways that could not be explained by said neurological condition. I don't think that's a reason to eschew well-established PVTs, especially if those PVTs are demonstrably robust to the neurological conditions in question. Outside providers can misinterpret things all the time, unfortunately. If I dropped tests based on fear of that, my evals would probably just consist of an interview.
 
I am definitely going to keep an open mind; it's a little odd to transition from board-certified supervision to this, but this is helpful.
 
When someone in their 40s (with subjective cognitive complaints attributed to "long COVID"; they weren't even hospitalized) fails the same PVTs administered to someone in their 80s with moderate dementia (that passed PVTs), it gives me hope they do what they're intended to.
 
The worst part is they do have tests with embedded validity indicators. However, they either a) don't calculate or even consider them, or b) don't administer the part of the test that serves as the embedded validity indicator. The only mention of overall validity is basically, "did it seem like they put forth good effort?", which is a dangerous way to report validity on its own.
 
In grad school I had the opportunity to help contribute to the research literature on embedded PVTs. If it's any reassurance, there is at least one embedded measure in a very common test that can't be skipped if you're administering even the core battery. I learned a lot about embedded measures, and having firsthand experience with them made me appreciate their importance.

While it's not wholly unreasonable to utilize clinical interviews, history, and observations to help get an idea of effort and performance, it should be done alongside the data from the tests.

Your attention to detail and commitment to quality assessments is a great step toward becoming a solid practitioner.

That said, I agree with the other comments here about being careful not to ruffle feathers, as frustrating as that might be. If this were internship or postdoc, I would be more concerned. If you bring it up to your school, they may become concerned the practicum site isn't good enough and pull you. If you bring it up with your supervisors at the practicum, I would take the one-down approach and express curiosity about learning more about PVTs. They may say they just use history, interview, and observations and provide some knowledge on this, or they may surprise you and share more insight into the PVTs they use or how they use them.

At the end of the day, as a practicum student, many would say just get your hours and supervision done, learn what you can, and move onward. In other words, "you must use PVTs at this site" is probably not a hill I'd take a stand on as a practicum student. Learn what you can, and apply the higher standard you described to your training and future career as you progress.
 
I'm well aware of the embedded PVT you are describing, and unfortunately they don't calculate or consider it.

Thank you for the insight! I'll be as positive and open as I can and strive to learn as much as possible without forgoing the core neuropsych competencies I've learned at my other practicum placements. I have already discussed PVTs with the supervisors, and they appear adamant about not using them, primarily justified by "we don't see why anyone would purposely fail the tests," even after I explained why my other supervisors included them and the myriad reasons we saw. I think it's important to remember we don't always know, or really need to know, why people fail; just knowing that someone failed is enough information. But oh well.

Some people just stick to their guns, and since it's not my license or reputation on the line, I'll just put my head down and do my work.
 

These people make me a lot of money :)
 
Same. I just got another case to review that has some... interesting... data and conclusions. I love being able to work from wherever and get paid well to do it, all made possible by hacks and poorly trained "psychologists" who "dabble" in legal work.
 
Yep. Although unfortunately, it has also contributed to an at-times significantly tainted research literature, particularly from decades past. Luckily, more recent work has remedied much of that.

The vociferous resistance to validity testing (by clinicians, not shills) can sometimes be head-scratching, and it's often very frustrating when you're the next neuropsychologist the patient sees and it falls on you to tell this person that they don't have dementia, but such is life. IME, it's often the folks who've never done forensic work or worked in a setting in which secondary gain is prominent (e.g., VA) who seem the most dead set against the need for validity testing in clinical cases.
 

Yeah, I'm somewhat amazed by how some of my colleagues change as they get into systems that financially benefit from not considering validity. I'm curious whether they slowly succumb to pressure to keep patients within the system seeing 4+ different specialists for years, or whether they get some kind of financial benefit from it as well. Either way, it's shameful and harmful to the more somatically flavored patients.
 
1) Keep in mind, at the end of the day, neuropsychological diagnosis is made by your interpretation of tests. You can see severely impaired patients whose impairments render them untestable; that doesn't mean you can’t say they have major neurocognitive disorder.

2) Uhhhh, you're saying that you're getting insight into the day-to-day work of your opposing side? And that's a bad thing because...? "Oh no! Coach, I found the other team's playbook!"
 

Good for us in the forensic realm. We look good dismantling the work of our hack colleagues and calling out malingerers. I do feel bad about the smaller proportion of patients who I feel are legitimately somatization-disordered. They're the ones getting screwed in the process.
 
Regular use of PVTs/SVTs is good practice, imo. Depending on which measure and study you look at, the PVT fail rate within clinical populations can be as high as ~10%. I've also experienced too many "by the way, can you send this to my lawyer?" moments midway through or after a clinical eval :cautious:
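The base-rate point above, and the earlier comments about considering alternative cutoffs by context, can be made concrete with a quick Bayes' rule sketch. The sensitivity, specificity, and base-rate numbers below are hypothetical placeholders for illustration, not figures from any validated measure:

```python
# Illustrative only: how the base rate of invalid performance changes
# what a single PVT "failure" means. All numbers here are hypothetical.

def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Positive predictive value of a failed PVT via Bayes' rule."""
    true_pos = sensitivity * base_rate            # P(fail & invalid)
    false_pos = (1 - specificity) * (1 - base_rate)  # P(fail & valid)
    return true_pos / (true_pos + false_pos)

# Hypothetical PVT with 70% sensitivity and 90% specificity,
# applied in settings with different invalid-performance base rates.
for base in (0.10, 0.30, 0.50):
    print(f"base rate {base:.0%}: PPV of a failure = {ppv(0.70, 0.90, base):.0%}")
# → base rate 10%: PPV of a failure = 44%
# → base rate 30%: PPV of a failure = 75%
# → base rate 50%: PPV of a failure = 88%
```

With these made-up numbers, the same failed score is wrong more often than not in a low-base-rate clinical setting but quite informative in a high-secondary-gain one, which is exactly why context and cutoff choice matter.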
 
Update: the postdoc mentioned they plan to get involved in forensic work soon and went on a tangent describing the importance of projective testing
 
More work for the rest of us.
I don't know if it's a sign that I'm getting prematurely jaded, but this is how I feel about my specialty area when I do intakes or chart reviews and see how bad things are for patients with other providers in the community. I'm not worried about job security at all.
 