ADHD testing?

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.
First off, the defendant wasn't found liable for anything; the document notes the case was settled. The document is just the report of the plaintiff's expert witness. I'm seeing more of an emphasis not only on #2 but also on the fact that there's no indication meaningful questions were even asked about the possibility of a mood disorder. It is interesting that medmalreviewer thinks this expert witness was going way too far in his assertions about negligence.
Good points. You're right. There were no findings. I'll need to be more careful about that.

I read this as more emphasis on the lack of collateral records. Interesting that we read this differently.

 
Good Lord, can you imagine the backlash if VA clinicians started subjecting PTSD related complaints to validity testing?
Wouldn't want to see those headlines!!
Some of us do a version of this on a select number of cases. You have to be very careful how you conceptualize it, communicate it, or write it up. But with certain clinical presentations that are extremely inconsistent, extreme, and/or unusual bordering on the absurd (especially with respect to the symptomatic traumatic impact being severely out of proportion to the reported stressor(s)), I find it indispensable (if a bit politically risky). It is certainly best to avoid assignment of motive or use of 'the M word.' However, on certain objective tests (e.g., the MMPI-RF), scores that are beyond a certain extreme threshold indicate overreporting (for example, of psychopathological symptoms).
 
Some of us do a version of this on a select number of cases. You have to be very careful how you conceptualize it, communicate it, or write it up. But with certain clinical presentations that are extremely inconsistent, extreme, and/or unusual bordering on the absurd (especially with respect to the symptomatic traumatic impact being severely out of proportion to the reported stressor(s)), I find it indispensable (if a bit politically risky). It is certainly best to avoid assignment of motive or use of 'the M word.' However, on certain objective tests (e.g., the MMPI-RF), scores that are beyond a certain extreme threshold indicate overreporting (for example, of psychopathological symptoms).

My clinical assessment professor in grad school said that we should never, ever say that someone is malingering based on the results of a test alone.
 
My clinical assessment professor in grad school said that we should never, ever say that someone is malingering based on the results of a test alone.

Depends on the test, and the performance. If someone achieves a score so low that, were they purely guessing/randomly pointing at things, their chances of getting it would be about 1 in a billion, I'm pretty confident that's an intentional act of malingering.
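To put a rough number on that intuition: on a two-alternative forced-choice measure, the probability of scoring at or below a given raw score by pure guessing is just a binomial tail. A minimal sketch of that arithmetic (the 50-trial test here is hypothetical, not any specific instrument):

```python
from math import comb

def p_at_or_below_chance(correct: int, trials: int, p_guess: float = 0.5) -> float:
    """Probability of getting `correct` or fewer items right by pure
    guessing on a forced-choice test (binomial CDF)."""
    return sum(
        comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
        for k in range(correct + 1)
    )

# On a hypothetical 50-trial, two-alternative forced-choice test,
# chance performance is about 25 correct. A raw score of 5/50 is
# essentially impossible by guessing alone, which is why markedly
# below-chance performance is read as deliberate wrong answering.
print(f"{p_at_or_below_chance(5, 50):.2e}")  # → 2.10e-09 (about 1 in half a billion)
```

The point of the calculation is only that the guessing distribution is fully known, so the "how unlikely is this by accident" question has an exact answer.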
 
My clinical assessment professor in grad school said that we should never, ever say that someone is malingering based on the results of a test alone.
Makes sense. Can't determine that the overreporting is due to that motivation and--in outpatient psychotherapy practice--I don't have to. What I can say, however, is that along the spectrum of trauma- and stressor-related disorders, there is a gradient of complexity and diagnostic specificity (from low to high) running through the following diagnostic categories/options: no diagnosis --> adjustment disorder --> other specified trauma- and stressor-related disorder --> posttraumatic stress disorder, and that--given the present level of response bias (and associated distortion of the clinical picture)--I can opine that I am unable to render a diagnosis of PTSD at the present time.

This is, of course, based on ALL pertinent info including chart review, interview/observation, self-report measures, objective testing, etc.

On a slightly-related topic, there's been some recent empirical work on patterns of overreporting on the PCL-5 (in terms of extremely high total scores and/or high levels of endorsement of certain combinations of (relatively) infrequently-endorsed items) and their association with PVT failure.

I've had a handful of clients circle all '4's' for a perfect 80/80, and even one client who displayed the bold initiative to write in an extrapolated '5' column/option on the instrument and circle several of them, achieving an unprecedented PCL-5 score of >80.
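For what it's worth, response patterns like those are trivially easy to screen for at scoring time. A crude sketch, assuming a standard 20-item PCL-5 with items scored 0-4 (max total 80); the "extreme total" threshold below is illustrative only, not a validated cutoff, and none of these flags establishes a motive:

```python
def pcl5_protocol_flags(responses: list[int]) -> dict:
    """Screen a 20-item PCL-5 protocol (items scored 0-4) for response
    patterns worth a second look. Purely descriptive flags."""
    if len(responses) != 20 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("expected 20 item scores in the 0-4 range")
    total = sum(responses)
    return {
        "total": total,                              # 0-80
        "all_max": all(r == 4 for r in responses),   # the 'perfect 80/80'
        "invariant": len(set(responses)) == 1,       # same answer circled throughout
        "extreme_total": total >= 70,                # illustrative threshold only
    }

flags = pcl5_protocol_flags([4] * 20)
print(flags["total"], flags["all_max"])  # 80 True
```

A written-in '5' simply fails the range check, which is its own kind of flag.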

I think his name was PFC James T. Kirk.

I heard he went on to attain the rank of captain in the reserves despite being 100% service-connected for PTSD.
 
My clinical assessment professor in grad school said that we should never, ever say that someone is malingering based on the results of a test alone.

The argument I have heard is that malingering is suggestive of intent and should not be used. However, you can say that results may not be valid due to low effort, a pattern of responding that is inconsistent, etc.
 
The argument I have heard is that malingering is suggestive of intent and should not be used. However, you can say that results may not be valid due to low effort, a pattern of responding that is inconsistent, etc.

Perfectly fine to use malingering, if you can justify it.

 
Good Lord, can you imagine the backlash if VA clinicians started subjecting PTSD related complaints to validity testing?
Wouldn't want to see those headlines!!
It was standard practice for my C&Ps to always include an MMPI, plus an additional standalone brief symptom validity measure (e.g., M-FAST) if the MMPI was invalid and the Veteran was reporting unusual symptoms or endorsing a lot of severity without much detail or many examples. I'm also digging the PCL-5 embedded validity measure developed by Shura et al. (2023). When it's standard practice and not treated as unusual, Veterans rarely complain. Man, though, those law firm videos about MH or PTSD C&P exams are bonkers; they'll talk about the MMPI, the MENT, all sorts of other coaching.
 
Hey everyone, has anyone used the Vienna Test System (VTS): Neuropsychological Test Battery for the Assessment of Cognitive Functions in Adult ADHD (CFADHD)? I've just been reading about it as a battery of neuropsych tests that is claimed to be sensitive in discriminating those with ADHD from normal controls and simulators. Much of the research appears to be on how this battery generates 17 scores, all of which can be used as embedded performance measures. Thoughts? Experience with the CFADHD? I've listed two examples below for reference.

Becks et al., 2023
Dong et al., 2023
 
Hey everyone, has anyone used the Vienna Test System (VTS): Neuropsychological Test Battery for the Assessment of Cognitive Functions in Adult ADHD (CFADHD)? I've just been reading about it as a battery of neuropsych tests that is claimed to be sensitive in discriminating those with ADHD from normal controls and simulators. Much of the research appears to be on how this battery generates 17 scores, all of which can be used as embedded performance measures. Thoughts? Experience with the CFADHD? I've listed two examples below for reference.

Becks et al., 2023
Dong et al., 2023

Considering that the TOMM was the only validity test administered in the first study, and assuming they used the manual's norms, these samples likely included a lot of PVT failures in the ADHD groups.
 
This is such an interesting topic to me. I do ADHD testing in adults, and every person I test endorses enough symptoms on interview and questionnaire to meet criteria. Diagnosing clinically would mean diagnosing 95% of my presenting clients with ADHD. They all also report significant childhood symptoms on the CAT-A.

Same with ASD. I just tested a young woman who gave me a 15-page letter linking her symptoms to the DSM. She attached 22 files to her portal from all of her multiple psych hospitalizations and IOPs. I stayed up all night and read every word. Her current clinical interview, collateral interview, statements made during autism testing, and PAI scream 7 out of 9 criteria for BPD. Obvious BPD. Yes, beating your head until it's bruised is self-harm. So she didn't get her ASD dx even though she reported all of the symptoms from the DSM and had an SRS-2 in the severe range. She got a BPD diagnosis. I'll need therapy after that feedback session.
 
Hey everyone, has anyone used the Vienna Test System (VTS): Neuropsychological Test Battery for the Assessment of Cognitive Functions in Adult ADHD (CFADHD)? I've just been reading about it as a battery of neuropsych tests that is claimed to be sensitive in discriminating those with ADHD from normal controls and simulators. Much of the research appears to be on how this battery generates 17 scores, all of which can be used as embedded performance measures. Thoughts? Experience with the CFADHD? I've listed two examples below for reference.

Becks et al., 2023
Dong et al., 2023

It's difficult for me to understand what the results of the second mean in the face of low/no association with symptom reports, and their explanation about 'help-seeking behavior' is really unsatisfying, since it's equally possible that these indicators have little relevance to symptom reports, which is one reason neuropsych testing for ADHD is highly criticized. Also, the CIs for some of the EVI estimates are enormous, which tells me the effects may be too unreliable for clinical practice.
 
It's difficult for me to understand what the results of the second mean in the face of low/no association with symptom reports, and their explanation about 'help-seeking behavior' is really unsatisfying, since it's equally possible that these indicators have little relevance to symptom reports, which is one reason neuropsych testing for ADHD is highly criticized. Also, the CIs for some of the EVI estimates are enormous, which tells me the effects may be too unreliable for clinical practice.
Ok
 
Is testing ever going to work for these non-unitary constructs that are based on symptom clusters and not common underlying factors or etiology? Mix in some secondary gain and corporate interests and you have a mess when it comes to diagnosis and treatment. I swear, the longer I work in this business, the more I realize that most of what we are basing our reasoning on is useless information. It goes way beyond diagnosis and testing. One aspect is the limited attention on the impact of social and physical environment and stressors on all DSM illnesses. Did we forget that monkeys without cozy moms, or rats in a crowded cage, or dogs in an inescapable situation, all have significant problems? We act like it's the rat's fault for not being "normal" and as if, were we just to give them a medication, they would resume functioning appropriately without any external intervention. I doubt it.
 
Is testing ever going to work for these non-unitary constructs that are based on symptom clusters and not common underlying factors or etiology? Mix in some secondary gain and corporate interests and you have a mess when it comes to diagnosis and treatment. I swear, the longer I work in this business, the more I realize that most of what we are basing our reasoning on is useless information. It goes way beyond diagnosis and testing. One aspect is the limited attention on the impact of social and physical environment and stressors on all DSM illnesses. Did we forget that monkeys without cozy moms, or rats in a crowded cage, or dogs in an inescapable situation, all have significant problems? We act like it's the rat's fault for not being "normal" and as if, were we just to give them a medication, they would resume functioning appropriately without any external intervention. I doubt it.
I've seen the following trends over the course of a 30 year career in professional psychology:

(1) a de-emphasis on 'theory' (and individualized clinical case formulation) in favor of a preoccupation with 'the correct (categorical) diagnosis' which is then mechanically matched up with 'the correct manualized 'EBP' protocol treatment'
(2) a de-emphasis on critical evaluation of clinical data (whether patient self-report in interview/session or patient self-report on questionnaires); the reliance on self-report questionnaires is a particular problem where you have overly concrete clinicians equating scores on the PHQ-9 or the PCL-5 with the clinical diagnoses of Major Depressive Disorder and PTSD, respectively; I have had clients who were not at all suicidal and only mildly depressed circle 'all 3's' on the PHQ-9 for a 'perfect score' of 27 out of 27...as you say, pretty meaningless
(3) an increasing emphasis (which is a good thing) on more sophisticated dimensional/hierarchical-taxonomic models of psychopathology (e.g., HiTOP) in the academic/research literature which has, alas, failed to make it into the implementation sphere all that much
(4) we forgot how to doubt our clients' self-report (appropriately); symptom overreporting is very real (and not necessarily indicative of 'malingering'); there's also the poor practice of 'connecting dots' that aren't there in order to make one's life simpler/easier as a clinician; I can't tell you how many times I've had to 'deconstruct' a military/trauma history simply by asking the veteran to describe, in detail, his/her direct experience of what happened before, during, and after the supposed traumatic/stressful incident, only to find, in the end, that what they describe in no way/shape/form matches the two-word or one-phrase descriptor of their 'traumatic experiences' listed in the consultation request by the referring provider
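On point (2), part of the problem is that the score-to-label step is purely mechanical, which is exactly why a total can't carry a diagnosis by itself. A sketch using the conventional PHQ-9 severity bands (0-4 minimal, 5-9 mild, 10-14 moderate, 15-19 moderately severe, 20-27 severe):

```python
def phq9_severity(total: int) -> str:
    """Map a PHQ-9 total (0-27) to the conventional severity band.
    The band describes self-reported symptom burden over two weeks;
    it is not, by itself, a diagnosis of Major Depressive Disorder."""
    if not 0 <= total <= 27:
        raise ValueError("PHQ-9 totals range from 0 to 27")
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return label

print(phq9_severity(27))  # → severe
```

Everything clinically interesting (why the patient circled what they circled, whether the 3s are 'clinical 3s') happens outside this lookup table.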
 
Wow. Those last two posts really resonate with me. First of all, we do pathologize the rat and the monkey, sometimes in very pejorative ways that can be more harmful than helpful. But we are quick to justify ourselves. I am very cautious about giving certain diagnoses and will do a thorough psychological assessment using clinical interview, collateral information, and assessment tools, particularly for things like BPD, because it is so stigmatizing. I know therapists who give this diagnosis to people because they are difficult. A lot of therapists I know are poorly trained in making this diagnosis. They hear someone say they think their husband is having an affair and is going to leave them, and they check the box "fear of abandonment." I just had a big discussion about that in a collaborative group I am in. I asked if the client was making frantic attempts to avoid abandonment. I mean, I only check that box if people are so fearful of being left, abandoned, or rejected that they cry hysterically, have meltdowns, hurt themselves, threaten suicide, etc., in order to try to manipulate the person or prevent them from leaving. I'm sure my take on that criterion is more on the extreme end. But like I said, I'm very cautious.

I see people interpret the PHQ-9 without considering the client's presentation and gathering more information. Someone might give all 3's for answers, but when you talk with them about it, you find out that their 3 isn't really a clinical 3. It's kind of like the pain scale. Some people rate their pain as 10 out of 10 while sitting comfortably eating a sandwich, contemplating going to Walmart after the session. That's not a 10 out of 10. But also, I have had people fill out the PHQ-9 before a session completely within the normal range; they look like hell, and when I start talking with them, I realize that they are in significant distress due to depressive symptoms. Also, the PHQ-9 measures some things that can be related to depression but can also fit other things. For example, attention and concentration difficulties can be caused by a lot of things, not just depression. And finally, on my monologue about diagnosing depression: almost none of my peers also evaluate for symptoms of hypomania or evidence of a previous manic episode. If they do, they ask yes/no questions but don't ask for examples. So sometimes I get patients with bipolar diagnoses, and when I interview the patient and ask detailed questions about their hypomanic episodes, they tell me things like, "I went into Walmart and was in the checkout line. I was so excited and so happy that I started jumping up and down. Then I got to my car and I felt depressed." Actually, ummm, no. Tell me about a hypomanic episode. I very unapologetically wrote that report, did a review of all symptoms, and in my summary detailed why they do not qualify for a bipolar diagnosis. Might be bad style to do that, but I sent that report to the diagnosing and prescribing physician, and I hope that patient isn't still taking lithium, Risperdal, and lamotrigine. But I bet they are.

Also, the PCL-5 is like a screener for PTSD. There are much more appropriate and accepted instruments to evaluate for the presence of PTSD. But I see patients regularly who come for an assessment with diagnoses of PTSD and cPTSD, and when I start inquiring about their trauma, it's something like "I had to move and change schools in the ninth grade" or "my fiancé left me at the altar," and I'm sitting there like, I'm sure that was very difficult for you, but I'm looking for a Criterion A trauma. When I read therapist notes or assessments that give out PTSD diagnoses like stubbing your toe is a trauma, I want to get back to them and tell them to do some continuing education.

Last week, I had a client present for a clinical interview for an autism evaluation. They have a diagnosis of ADHD. I had her upload all psychological assessments to my platform. She had two ADHD evaluations done within a 6-month period. On her first assessment, she was not diagnosed with ADHD because her CAT-A childhood symptoms index was normal, she reported not having symptoms until college, and her mother reported no childhood symptoms of ADHD. So she just made a new assessment appointment with someone else, did not use collateral, and reported childhood symptoms in the very significant clinical range. So she got her ADHD diagnosis. I just wondered whether the second clinician had asked to look at her previous assessment or simply hadn't been told about it. It was a referral from her psychiatrist, and I just couldn't help myself. When I wrote the report, I reviewed previous assessments in detail like I always do, but this time I made comparisons between her two assessments and pointed out the discrepancies between self- and collateral reports across the sessions, intimating that basically she was diagnosis shopping and that the diagnosis should be scrutinized.

I sound like I think I know everything right now, but I know I don't. I make mistakes, and I have to learn too. I've just had these few frustrating evaluations and reports I had to write, and I've been on bedrest for 3 days, so I'm here talking to myself. If someone actually read this, thanks for listening to the very self-indulgent ramblings of a very bored person!
 
(4) we forgot how to doubt our clients' self-report (appropriately); symptom overreporting is very real (and not necessarily indicative of 'malingering');

On this point, it seems obvious to me as a clinician that momentary patient distress would inflate scores on severity measures, but there is so little literature on it that we can't even begin to talk about it in clinical practice. Any doubts about the PHQ-9, for instance, are met with quotes about sensitivity and specificity, or scale reliability, or poorly understood claims of construct validity.
 