How to think about validity scale information...


BuckeyeLove
Forensic Psychologist
A recent thread on a forensic listserv has had me pontificating all night and day on this question: Do we have any data to suggest that validity scale data from one test reliably predict self-presentation in other situations, such as interviews? If, say, we administered two or more tests with strong validity scale data, and response style is consistent across each of those tests (all faking good, for example), how do/should those data affect our view of interview data from the litigant/defendant/person being evaluated?
I'd be interested to hear some of your thoughts.

 

Are you asking whether, if they're deceptive on the practitioner's tests, they're also likely being deceptive when answering the practitioner's interview questions? I would have to think yes.

I think I'm a little confused though, as I thought the whole point of using testing data was that it helps predict/represent the patient's behavior better than interview data alone? I thought this was already fairly well established?
 
Research has consistently found that invalid validity scale scores (I'm most familiar with over-reporting and the MMPI specifically) predict inflated scores on other criterion measures at future time points, in addition to concurrently administered PVTs/SVTs. How well does that predict an interview? More poorly, I would assume, but largely because of the variability inherent in interview performance (unstandardized interviews, at least) rather than because of testing error.

My recommendation would be to assess response style with multiple instruments and use the performance-based validity indicators validated for those instruments to reach a consensus determination about validity. From there, my decision about trusting the interview becomes substantially easier, both in practice and in court defense, if need be.
 
If you read enough testimony, you'll see attorneys asking something like, "So they were lying there, but not here? And you have no way of proving that they are not lying here?".

Edit: in hindsight, perjury might be a relevant analogy
 
This approach is generally used for comp and pen exams in the VA. If they invalidated the MMPI or PAI, all of the information obtained during that appointment (including during the interview) is suspect.
 
That presents an interesting question. Could performance on testing be affected by the prior interview itself and lead to one being valid, but not the other?

For example, could a patient or client have been more or less honest during interview, but then, based on the reactions of the interviewer (e.g., questioning how disabled the patient/client really is by their problems, not reacting strongly enough when the patient behaviorally conveys honest distress about their problems), malinger during testing to emphasize how impaired they feel, obtain whatever their motivating secondary gain was, etc.?

Conversely, could they be malingering or exaggerating during the interview, and then put forth max effort during testing after detecting the skepticism of the interviewer, because they don't want the assessor to convey that they were malingering to the referring provider or the court?
 
To me, it is just another piece of data to be integrated into the whole. I do not think an elevated validity measure automatically undermines all interview content, though an evaluator needs to think about and be able to articulate what they consider reliable and why. It really depends on the specific type of validity concern (e.g. faking good vs. faking bad), the hypothesis about the purpose of the individual's dishonest response style, the available collateral (including behavioral observations), the overall clinical profile, and the specific validity measures used.

My population frequently skews quite low on intellectual functioning, so for example, I would give consideration to the fact that it is far easier to fake bad on a test of cognitive malingering than to lie verbally during the course of a long interview. Further, depending on your population, they may not realize the extent of your observations of them until the testing is underway. Asking follow-up interview questions after a testing session can also help provide insight into their overall presentation.
 
That presents an interesting question. Could performance on testing be affected by the prior interview itself and lead to one being valid, but not the other?

For example, could a patient or client have been more or less honest during interview, but then, based on the reactions of the interviewer (e.g., questioning how disabled the patient/client really is by their problems, not reacting strongly enough when the patient behaviorally conveys honest distress about their problems), malinger during testing to emphasize how impaired they feel, obtain whatever their motivating secondary gain was, etc.?

Conversely, could they be malingering or exaggerating during the interview, and then put forth max effort during testing after detecting the skepticism of the interviewer, because they don't want the assessor to convey that they were malingering to the referring provider or the court?
Yes.
 
Well, if you hadn't answered your own questions already, I might/could've said more ;)
 
Research has consistently found that invalid validity scale scores (I'm most familiar with over-reporting and the MMPI specifically) predict inflated scores on other criterion measures at future time points, in addition to concurrently administered PVTs/SVTs. How well does that predict an interview? More poorly, I would assume, but largely because of the variability inherent in interview performance (unstandardized interviews, at least) rather than because of testing error.

My recommendation would be to assess response style with multiple instruments and use the performance-based validity indicators validated for those instruments to reach a consensus determination about validity. From there, my decision about trusting the interview becomes substantially easier, both in practice and in court defense, if need be.
This makes sense, and is perhaps illustrative of why the clinician has to use judgment in how they interpret a case. A pattern of over-reporting on personality tests is probably more likely to reflect characterological issues (and therefore be slow to change), whereas other forms of invalid responding are motivated by highly dynamic issues such as malingering mental illness in a forensic setting due to external stressors/perceived gains (and therefore change rapidly). I would certainly not feel comfortable testifying to an assumption that someone who has malingered in the past is necessarily (or even likely) malingering in the present.
 
If you read enough testimony, you'll see attorneys asking something like, "So they were lying there, but not here? And you have no way of proving that they are not lying here?".

Edit: in hindsight, perjury might be a relevant analogy
That's a bit of a trap to answer. Inferring motivation, rather than relative performance and the patterns that emerge using multiple highly validated instruments, opens up all sorts of things I don't want to sit on the other side of defending.
 
That's a bit of a trap to answer. Inferring motivation, rather than relative performance and the patterns that emerge using multiple highly validated instruments, opens up all sorts of things I don't want to sit on the other side of defending.

My assessment professor would threaten to automatically fail us if we ever said that someone was malingering based on test results alone.
 
My assessment professor would threaten to automatically fail us if we ever said that someone was malingering based on test results alone.
I'm guessing the professor meant inferring that the person is completely faking. As in, failing the b Test very strongly indicates malingering, but malingering doesn't mean the presenting concern is being faked (e.g., failing an SVT could be a cry for help).
 
I'm guessing the professor meant inferring that the person is completely faking. As in, failing the b Test very strongly indicates malingering, but malingering doesn't mean the presenting concern is being faked (e.g., failing an SVT could be a cry for help).

A little different than what you are saying here, but the "cry for help" interpretations that people will argue for on, say, the MMPI have no real empirical support.
 
That's a bit of a trap to answer. Inferring motivation, rather than relative performance and the patterns that emerge using multiple highly validated instruments, opens up all sorts of things I don't want to sit on the other side of defending.

Of course it's a trap. I said it's testimony.
 
There are circumstances on tests alone in which you can say that with a high degree of certainty.

He always said that "malingering" specifically implies a motive. Testing alone cannot inform you about someone's motive for over-reporting.

What you said about the "cry for help" idea not having any real data is very interesting, though.
 

If someone is below statistical chance on a measure, I can definitively say that they are intentionally choosing the wrong answer. We can't say the motive for sure, but we can definitely say that they are malingering for some reason. This is something we can prove statistically in certain situations. At times, I'll even lay out the numbers for incredulous referral sources. It usually hits home when I show them that their patient had a better chance of winning the Powerball and Mega Millions jackpots on the same day than of getting that outcome by randomly guessing.
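To make the numbers concrete, here is a minimal sketch of that below-chance argument (my own illustration, not taken from any particular report), assuming a 50-item, two-alternative forced-choice format like a TOMM trial, where pure guessing follows a Binomial(n = 50, p = .5) distribution:

from math import comb

def prob_at_or_below(score, n_items=50, p_chance=0.5):
    # Cumulative binomial probability of getting `score` or fewer items
    # correct purely by random guessing on a forced-choice measure.
    return sum(comb(n_items, k) * p_chance**k * (1 - p_chance)**(n_items - k)
               for k in range(score + 1))

print(prob_at_or_below(10))  # ~1.2e-05 for a hypothetical raw score of 10/50
print(prob_at_or_below(5))   # ~2.1e-09
print(prob_at_or_below(1))   # ~4.5e-14 (rarer than back-to-back jackpot wins)

Conventionally, a cumulative probability below .05 is treated as significantly below chance, which is why raw scores like these are so hard to explain without intentional wrong answers.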
 

I agree about using the power of statistics here. Prosecutors "infer" motive all the time with less evidence.

If someone is really that rigid about malingering, then they would probably never reasonably make the malingering call. And it needs to be made if it's reasonable. People pay you for your opinions. Being overly rigid, obtuse, or academic about the concept, so much so that you hem and haw around the obvious, is an abrogation of your duty.
 
He always said that "malingering" specifically implies a motive.

What you said about the "cry for help" idea not having any real data is very interesting, though.
I mean, I wouldn't infer malingering with a test alone... mainly because I wouldn't infer anything based entirely on testing results. He has a point about malingering that largely lines up with the general suggestions for diagnostic practice (e.g., the Slick criteria). That said, I would certainly interpret testing results in a way that shows whether they are consistent with the comparative performance of groups identified as malingering.
 
Would we say that malingering is the motive or the behavior itself? Wouldn't the motive be the reason why they are malingering, e.g., whatever the primary or secondary gain is?
 
Malingering is a motive-based determination. Failure/invalidity doesn't equate to malingering for a number of reasons, including that many indicators (particularly embedded validity scales such as those on personality inventories) are sensitive to a variety of factors outside of intentional efforts to elevate scores (e.g., minority status, education, psychopathology, gender), and those things matter when interpreting the results.
 
Yeah, that's kind of what I was getting at. Malingering is the action, and the motive for why they chose to malinger needs to be inferred from the data (if possible), much like the overall determination that malingering occurred.
 
This is true in the sense that you need to consider the context and purpose of the testing when determining the potential motive. In correctional settings, sometimes an invalid response style is a person's way of saying "f___ you" to the psychologist testing them (i.e. no external gain). Evidence of invalidity is not the same as evidence of malingering. Caution is also warranted when using measures designed to look at exaggerating/feigning of psychotic symptoms, particularly with people who do have some potential psychosis and/or lower cognitive functioning (and therefore higher suggestibility).

That said, I would feel pretty confident testifying that a plaintiff in a personal injury lawsuit was malingering if they claimed cognitive deficits from the injury and invalidated the corresponding validity measures.
 
We can get into the deep academic word play...but my point was more about academics training students in a very academic way, as opposed to the realities of practice.

The reality is that sometimes we need to make calls without being 100% confident, rather than endlessly hemming and hawing about all the philosophical issues of the construct, the specific motivation, etc. Again, you are paid for an opinion. More common sense and less academic word-play about the construct is what I would advise when actually practicing.
 
It always seems like psychiatrists have a much easier time making an unequivocal call on this than we do, or at least appear far less distressed about making the call. At least the ones I know. Whereas the forensically trained psychologists (at least some of them I know and have been trained by) perseverate and hem and haw all day about saying someone is malingering and then get into academic debates over semantics. Hell, I have colleagues/friends in certain states [cough cough North Carolina cough cough] where the evaluators are actively discouraged from even addressing feigning. All I know is, I had a dude last week who went 5, 4, and 1 on the TOMM. I didn't need any other data points. He was malingering.
 
Couldn't agree more that psychiatrists don't hesitate. But for psychologists on the criminal side of forensics, we do tend to be cautious because being trigger-happy on the "malingering" label can have very real and very negative consequences for the person you're applying it to.
 

This goes waaaay back. Physician notes generally suck, but at least they don't write endlessly to justify their diagnostic opinion. Atkins death penalty cases, ok, different story. But I'm sure you guys know what I am getting at.

I remember when I was a student on a neuropsych rotation and the report template had an introductory paragraph for each test describing what it measured and how. I remember being like: What... what the **** is this? What am I doing, an interview for ****ing NPR?! WTF? Who does this?!

 
My neuro rotation had the same template, we were required to use it, and it was so horrendous. Patients have no idea what to do with a 30-page report, especially when so much of it is unnecessary.
 
Couldn't agree more that psychiatrists don't hesitate. But for psychologists on the criminal side of forensics, we do tend to be cautious because being trigger-happy on the "malingering" label can have very real and very negative consequences for the person you're applying it to.

Again with the word play, right?

"Suboptimal effort resulting in an invalid profile.... Which means/suggests....X Y, Z." Whatever. Any number of ways to express a solid opinion without using the proverbial F word. We are also paid for subtle opinions..
 
Hmmm... I am not saying that I wouldn't give a firm opinion. Just that I wouldn't rely on a single data point to do so, considering the stakes and the number of variables involved in the types of cases I see. This was in response to other posters talking about whether testing alone is sufficient. The gold standard in assessing malingering in the types of cases I deal with is behavioral observations. Testing is great, but typically insufficient to assess response style in psychotic and/or extremely impaired populations.
 
Just that I wouldn't rely on a single data point to do so, considering the stakes and the number of variables involved in the types of cases I see.

No (reasonable) psychologist does/would do this. I assumed that goes without saying?

Being overly obtuse or academic about the issue in practice is a "thing" (clearly)...and I think we need to get over it.
 
I agree that no reasonable psychologist would do this. But I also see it happen all the time, especially among the ilk of psychologists who apparently got into forensics late in their careers, easily identified by their awful reports and the litany of meaningless vanity board initials padding their CVs. I'm less concerned about the overly obtuse academic types, who aren't likely to get hired again after some mealy-mouthed testimony, than I am about the small-town cowboy hired-gun types, who literally think a psychoanalytic measure of the permeability of interpersonal boundaries is a sufficient measure of someone's competence to stand trial. Or who think that digit span can tell you about a person's sense of self. Or that a somewhat elevated validity scale automatically means an acutely psychotic person is malingering. These are real examples.
 

Ok. I concede defeat, good Sir/Ma'am.

I have examples that are ridiculous, but these take the cake. Sad trombone...
 
I will hear sad trombones in my mind the next time I read one of these horrifying reports. :headphone:
 