Neuropsychological Assessment Predicts "Strengths" and "Weaknesses" - True?


PETRAN (Full Member, joined Dec 28, 2010)
Hello,


So, an academic-clinical thread instead of a career-oriented one.

I do a lot of neuropsych assessments for the purpose of diagnostic contribution (mostly early dementia/MCI, etc.), but a substantial amount is for identifying so-called cognitive "strengths and weaknesses". Most of the work with TBI and ABI in general, stroke, MS, and children (neuro and learning disabilities) is done for the purpose of recognising the client's weak and strong points and developing a rehabilitation plan (also using on-task functional analysis).


My question: how valid is neuropsychological testing in actually identifying cognitive strengths and weaknesses in terms of everyday errands, occupation, academics, socialization, etc.? Can a battery comprised of TMT-A/B, AVLT, the Rey-Osterrieth Complex Figure, and some WAIS subtests, among others, accurately predict the everyday functioning and performance of individuals with those problems in everyday settings?

I personally don't think it can anymore. Neuropsychology has all this "hard-science-y" air around it, but in practice I feel it has huge real-life applicability problems due to vast ecological-validity issues. If one considers the time and money needed for all these examination processes, one thinks twice about whether they are worth doing at all. The majority of those tests were designed for the purpose of detecting "organicity" or exploring psychological constructs such as "word recall" rather than predicting, e.g., job or educational success. It seems that good old functional analysis and on-task observation can do a fine job without the need to run a battery of time- and money-consuming tests. It sounds all doom and gloom, but this is the feeling I get from my practice lately.

I have to say that from the academic side of things, neuropsychology is still extremely valuable.

Recently I got interested in human factors and I/O psychology, and I can see that studies exploring specific cognitive abilities (rather than "g") in relation to specific jobs and tasks are just beginning to emerge, and the correlations are still no higher than .40 (at most .50). It seems there is a lot of work to be done in connecting cognition to everyday performance.
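To put those correlation sizes in perspective (an illustrative sketch of standard psychometrics, not a figure from any particular study): the proportion of outcome variance a predictor accounts for is the squared correlation, so even the upper end of that range leaves most of the variance in everyday performance unexplained.

```python
# Illustrative only: variance explained (r squared) for the
# correlation sizes mentioned above between specific cognitive
# abilities and job/task performance.
for r in (0.40, 0.50):
    r_squared = r ** 2
    print(f"r = {r:.2f} -> r^2 = {r_squared:.2f} "
          f"({r_squared:.0%} of outcome variance explained)")
# A correlation of .40 explains only about 16% of the variance;
# .50 explains about 25%.
```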

Just a thread to think about; write your thoughts and, why not... flame :p

http://www.ncbi.nlm.nih.gov/pubmed/11790904

Part of the problem is coming up with good ways of measuring functioning in the areas you've mentioned (e.g., everyday errands, occupational functioning). Neuropsych testing does predict outcome in TBI, for example, above and beyond other factors such as pre-injury demographic data and injury characteristics, but the outcome variables are often measured via self- and collateral-report. If someone had the time and money to develop and use a psychometrically-valid functional analysis-type measure for, say, job performance, I'd be surprised if neuropsych testing didn't predict/correlate with the results. And it's also useful for the ubiquitous, "hey, does this person actually have a memory problem? If not, what's going on?"

The direct one-to-one ecological validity of neuropsych testing has always been an area of some contention, though, in part due to the reasons I mentioned above.
I don't particularly buy into the "strengths" and "weaknesses" argument at its core, basically because it assumes that one is at a pure average across cognitive domains. We know that is not necessarily the case. IQ and memory on co-normed instruments correlate at roughly .5.

On the clinical side we look more for patterns in functioning. Deficits that we know exist within a certain clinical population. As a neuropsychologist, my tests are the weakest part of my assessment. My best instrument is my clinical interview and knowledge of neuroanatomy and neuropathology.

As for the benefits of the neuropsych eval, there is a paper that looked at cost utilization following an assessment and found evaluations to be quite cost effective. I'll try to track down the cite.
 
Part of the problem is coming up with good ways of measuring functioning in the areas you've mentioned (e.g., everyday errands, occupational functioning). Neuropsych testing does predict outcome in TBI, for example, above and beyond other factors such as pre-injury demographic data and injury characteristics, but the outcome variables are often measured via self- and collateral-report. If someone had the time and money to develop and use a psychometrically-valid functional analysis-type measure for, say, job performance, I'd be surprised if neuropsych testing didn't predict/correlate with the results. And it's also useful for the ubiquitous, "hey, does this person actually have a memory problem? If not, what's going on?"

The direct one-to-one ecological validity of neuropsych testing has always been an area of some contention, though, in part due to the reasons I mentioned above.



Do you remember the study about TBI? I think that part of the problem stems from the loose connection between those specific neuropsych constructs and the cognitive processes on which everyday tasks are based. Especially the construct of "executive functions", which is a loose construct in itself (although an extremely interesting one). I have always had a problem connecting poor performance on, say, the WCST with everyday tasks. The WCST seems to be complicated as well: it partly taps rule induction (about which we know very little in terms of brain areas and cognitive processes) and rule shifting based on feedback (we know somewhat more about that, but what does it actually predict in terms of everyday tasks?). Even some memory measures could have problems, mostly because everyday behaviours are more routine and largely based on more complicated, overlearned procedural/implicit processes/"scripts" (and neuropsych tests are not known for tapping those). I would suggest that more ecologically valid, dynamic tasks are needed rather than the traditional tests, which are based on tradition and convention rather than empirical evidence. What do you think?


I don't particularly buy into the "strengths" and "weaknesses" argument at its core, basically because it assumes that one is at a pure average across cognitive domains. We know that is not necessarily the case. IQ and memory on co-normed instruments correlate at roughly .5.

On the clinical side we look more for patterns in functioning. Deficits that we know exist within a certain clinical population. As a neuropsychologist, my tests are the weakest part of my assessment. My best instrument is my clinical interview and knowledge of neuroanatomy and neuropathology.

As for the benefits of the neuropsych eval, there is a paper that looked at cost utilization following an assessment and found evaluations to be quite cost effective. I'll try to track down the cite.


Yes, but how does knowledge of neuroanatomy and neuropathology contribute to assessment and rehabilitation? I love that stuff and have done research on it, but I can't see how it adds to my practice. Take, say, the knowledge that delayed recall probably taps hippocampal processes: I'm not going to write a neuropathological report based on the tests (e.g., on the brain areas damaged), and even if I do make suggestions about possible areas, I can't see the gain (maybe rarely so, in the early stages of some neurodegenerative process, but still...).

This is the problem: neuropsychology sits in the grey area between academia and clinical practice, and it is not sure where it wants to head. It still has an air of academic elitism, which doesn't contribute to clients' everyday problems as it should, IMO. I think that if it wants to be more "clinical", it should become more "ecological"/real-life applicable. My 2 cents. Maybe I'm exaggerating a bit, but that's my impression lately.
Do you remember the study about TBI? I think that part of the problem stems from the loose connection between those specific neuropsych constructs and the cognitive processes on which everyday tasks are based. Especially the construct of "executive functions", which is a loose construct in itself (although an extremely interesting one). I have always had a problem connecting poor performance on, say, the WCST with everyday tasks. The WCST seems to be complicated as well: it partly taps rule induction (about which we know very little in terms of brain areas and cognitive processes) and rule shifting based on feedback (we know somewhat more about that, but what does it actually predict in terms of everyday tasks?). Even some memory measures could have problems, mostly because everyday behaviours are more routine and largely based on more complicated, overlearned procedural/implicit processes/"scripts" (and neuropsych tests are not known for tapping those). I would suggest that more ecologically valid, dynamic tasks are needed rather than the traditional tests, which are based on tradition and convention rather than empirical evidence. What do you think?

Yes, but how does knowledge of neuroanatomy and neuropathology contribute to assessment and rehabilitation? I love that stuff and have done research on it, but I can't see how it adds to my practice. Take, say, the knowledge that delayed recall probably taps hippocampal processes: I'm not going to write a neuropathological report based on the tests (e.g., on the brain areas damaged), and even if I do make suggestions about possible areas, I can't see the gain (maybe rarely so, in the early stages of some neurodegenerative process, but still...).

This is the problem: neuropsychology sits in the grey area between academia and clinical practice, and it is not sure where it wants to head. It still has an air of academic elitism, which doesn't contribute to clients' everyday problems as it should, IMO. I think that if it wants to be more "clinical", it should become more "ecological"/real-life applicable. My 2 cents. Maybe I'm exaggerating a bit, but that's my impression lately.

I certainly haven't had the experience that neuropsych has an air of academic elitism, but that might just be me. It's been working fairly diligently, particularly of late, to focus on the effects of neuropsych assessment on clinical treatment and outcomes, diagnostic contributions (e.g., positive and negative predictive values and post-test probabilities), etc. Our training and background in research is what allows us to apply these principles to our daily practice setting, and to develop these metrics for the specific clinics in which we work (given that they can vary substantially by setting for the same tests).

As for neuroanatomy/neuropathology, both are very important to day-to-day practice in neuropsych. I've found, particularly over the past 1.5-ish years on fellowship, that the tests themselves are only a small portion of what I do. The knowledge of underlying neuropathology and of psychometrics allows me to determine if, for example, the deficits I'm seeing on testing are consistent with what I would expect based on suspected and/or known pathology. And the aforementioned knowledge is what allows me to tailor my interview and test battery to explore my hypotheses, and which allows me to develop said hypotheses based on the referral question and records review. And finally, all that stuff gets factored into my final integration of the various data points when arriving at my conclusions.

As for the link between neuropsych constructs and cognitive processes, those are perpetually in flux. Daily clinical work and subsequent research allow us to continually test and refine our definitions in these areas. Executive functioning, for example, is hardly ever viewed as a unitary process anymore; it's very often referred to as an umbrella term, much like "memory" or "attention." It might be that different/more assessment measures are needed, it might be that our interpretation of current measures will continue to evolve, or (perhaps most likely) some combination of both.

As for the links to the studies looking at outcomes in TBI, there are too many to post here, but searching for work by M. Sherer, C. Boake, A. Sander, T. Novack, Malec, and Nakase-Richardson (who, I believe, are all involved to varying degrees in Model Systems work), among others, could be a place to start. David Cifu (a physiatrist) has also done quite a bit of outcomes-related research.
 
Civil Capacities in Clinical Neuropsychology by Demakis may also be a helpful resource related to this topic. Agree that knowledge of neuroanatomy and neuropathology to guide the evaluation is the key here.
 
Ecological validity is a problem for any area of assessment, and in fact for all of psychology. Neuropsych has less of a problem here than other areas do, but that doesn't mean it isn't still a problem. It's a relative-strength-versus-relative-weakness thing, which is kind of ironic.

A good example of how it is a problem in other areas would be to look at sports medicine. Various exams can tell that a knee injury has healed to a certain degree and the doc will clear the athlete to perform, but until they get in the game and put it to the real-world test, we just won't know.
I think the better question would be: how do we increase the ecological validity of our assessments?

A final point to add: I think this is an even bigger problem in treatment outcome research, which tends to focus on symptom reduction as opposed to improved real-world function. I work with a lot of school-aged kids, and to me the real measure of successful treatment is improved academic performance. That is where the rubber meets the road.
 
A final point to add: I think this is an even bigger problem in treatment outcome research, which tends to focus on symptom reduction as opposed to improved real-world function. I work with a lot of school-aged kids, and to me the real measure of successful treatment is improved academic performance. That is where the rubber meets the road.

Indeed, although that unfortunately also can then add additional error variance into the equation. Using a therapy example, if a psychotherapy technique/protocol reduces the number/severity of reported symptoms but doesn't improve various areas of "real world" functioning (e.g., occupational functioning, social re-engagement), is that because the therapy didn't work per se, or is there some other confounding factor (either pre-existing or that arose during or after therapy)?
 
Indeed, although that unfortunately also can then add additional error variance into the equation. Using a therapy example, if a psychotherapy technique/protocol reduces the number/severity of reported symptoms but doesn't improve various areas of "real world" functioning (e.g., occupational functioning, social re-engagement), is that because the therapy didn't work per se, or is there some other confounding factor (either pre-existing or that arose during or after therapy)?
Don't forget directionality too. Even if we do see an improvement in functioning correlated with a decrease in symptoms, which causes which? Am I getting better grades because I am less depressed, or am I less depressed because I am getting better grades?

It all boils down to psychology being one of the more difficult sciences which is one reason why I love it. As I would often answer a variety of questions from students in my Intro to Psych classes, "That is a great question and the answer is that we don't really know. The human mind is incredibly complex and we need more research! If you are really interested register for my Research Methods class and I'll teach you how to help answer those questions." A few students actually took me up on that challenge.
 