DKEFS-RBANS-CTVL-Boston Naming


docma · Full Member · 15+ Year Member · Joined Oct 27, 2007
I am interested in opinions of these tests in terms of which would be most valuable to gain experience with on internship--general opinions or ranking are welcome. I know they have overlapping functions but don't have recent experience working with them myself.
I don't want, and can't have, all of them, but I want whichever is most versatile for a community mental health site and most valuable for the training experience.

 
You probably mean CVLT?

I like the RBANS and the DKEFS because they are batteries and thus somewhat more flexible/comprehensive... BUT you also want something that makes sense for the interns to use in your setting. If you get a lot of referrals for memory screening, then the CVLT would be the most useful.

In my experience at a general geriatric clinic, of these options the most commonly used tests in order would be:

CVLT
DKEFS
RBANS
Boston Naming

I hope that's at least a little helpful.
 
Jeez, hard to rank because they are all used a lot, but in different ways. I think they are all great to know, honestly, but I will try to rank them.

In my grad student opinion:


RBANS - the most comprehensive of all the tests you listed, so it has that advantage. At my site it is often used to re-test a patient who already got a more comprehensive battery, or when a brief assessment is needed.
DKEFS - One of the best and most comprehensive executive functioning tests, used it with almost every neuropsych eval I did on my practicum. I saw the verbal and design fluency used the most, in addition to the tower test.
CVLT - used a lot with TBI and dementia evals, also used this with almost every neuropsych eval I did on my practicum
Boston Naming - Used a lot for strokes, aphasia, dementia evals

I would definitely put the Boston Naming last, but as far as the others, I think the RBANS has the advantage of being the most comprehensive and giving you a more global view of different areas of functioning compared to the others. They are all really good to know, though.
 

The DKEFS has no norms for clinical populations...um, not even frontal TBI patients. That makes it kind of a leap, psychometrically...IMHO
 
RBANS: good brief screener for broad cognitive functioning. Well researched and validated. Better to interpret at the subtest level, as the Language index is garbage and the Attention index merges two distinct constructs (Coding and Digit Span do not go together). A solid measure to include in flexible batteries where you might tailor additional measures based on performance, and also useful for inpatient or demented groups. Not as useful for higher-functioning individuals with a known neurological etiology (e.g., seizure, stroke), as more in-depth measures might better address presenting questions.

D-KEFS: conceptually well-designed, good variety of executive processes. Lack of research on clinical validity makes it questionable but there are certain subtests that are helpful. Design Fluency is unique as a measure of nonverbal fluency. Tower and Sorting can address basic executive questions of set-switching, problem solving, planning, and impulsivity. Process scores are an interesting addition but research on them hasn't really been too exciting. In my opinion, simpler substitutes (COWAT, TMT) can suffice and so you don't really need it to answer your question, but it's a nice addition.

CVLT-2: Big fan of this one. Learning curves, proactive and retroactive interference, semantic/serial clustering, memory trace, semantic cueing vs. free recall, etc. A wealth of information and a strong literature base supporting this rather simple and elegant task. A nonverbal equivalent would be welcome (the Biber exists but isn't widely used for some reason; maybe the norms or reliability aren't good?). I would include it in any comprehensive neuropsychology battery.

Boston Naming: Skewed normative data make this kind of a psychometrically clunky test - personally, I don't think the percentiles are even worth reporting. But it is well researched and useful for identifying aphasias. The semantic and phonemic cueing items are informative. It's brief, too, so not a bad test to include - I almost always would.
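The percentile problem with skewed norms can be made concrete with a toy example. This is an illustrative sketch only; the scores below are invented to mimic a ceiling-heavy normative sample (most healthy adults name nearly all of a 60-item list), not real BNT norms.

```python
# Illustrative only: with most of a normative sample piled at the ceiling,
# each raw-score point near the top swings the percentile rank wildly.
# These 100 "normative" scores are invented for demonstration.

def percentile_rank(sample, score):
    """Percent of the normative sample scoring strictly below `score`."""
    below = sum(1 for s in sample if s < score)
    return 100.0 * below / len(sample)

norms = [60] * 40 + [59] * 25 + [58] * 15 + [57] * 8 + [55] * 6 + [50] * 4 + [45] * 2

print(percentile_rank(norms, 60))  # 60.0 -> even a perfect score is only ~60th %ile
print(percentile_rank(norms, 58))  # 20.0
print(percentile_rank(norms, 57))  # 12.0 -> two raw points below ceiling, huge drop
```

This is why, with a distribution like this, a clinician may prefer to report the raw score and a qualitative description rather than a percentile.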
 
It really depends on what the question of interest is. If you want a quick screening battery for older adults, do MMSE, Clock, FAS/Animals, Trails, and HVLT.
 
From an application standpoint, I would think experience with any of them would be fine.
 
You should get experience with all of them if you can. I have given batteries that included all of those tests before. CVLT and BNT give you more ways to test limits and error types than the RBANS, but the RBANS is a good basic screen. DKEFS is a different animal and a good one to know.
 
Pretty much this. In part, it's going to depend on what you'll want to do with the experience later...if you're planning on going to a formal neuropsych postdoc, then having experience with the CVLT is going to be a must, as would likely the BNT. RBANS, as has been said, has its strengths and weaknesses. Given that it taps multiple domains, if you could only choose one of the listed tests, that'd probably afford you the broadest exposure; it can be particularly handy with inpatient evals where the ability to tolerate a standard-length battery is likely compromised.

DKEFS is the trickiest on the list to administer. If you have the opportunity, though, going through it at least once or twice even if you never use it again could be very informative.
 

If you are more of a generalist, it would be best to learn the RBANS, as it is the highest-level screening measure and allows for some differential diagnosis depending on your experience level/knowledge base. CVLT would be second. Boston Naming would be unnecessary for you unless you are neuropsych/rehab psych. For the DKEFS, just pretend it does not exist, which is my recommendation for everyone. The best tasks on it exist in better forms with better normative data, quicker administration, and volumes of research (e.g., trails, COWAT, Stroop, design fluency). Several of them are public domain.
 
No one ever seems to want to engage me in a discussion of its psychometric shortcomings. If I ever mention it around here (my university department, not SDN), it's like I have dragged poor Edith's name through the mud or something.
 
For the DKEFS, just pretend it does not exist, which is my recommendation for everyone. The best tasks on it exist in better forms with better normative data, quicker administration, and volumes of research (e.g., trails, COWAT, Stroop, design fluency). Several of them are public domain.

I am unaware of an equivalent for design fluency...I always thought that was a novel aspect of the D-KEFS. I like the Sorting test too, though it's a pain to administer. I do think the battery as a whole is pretty superfluous because, as mentioned, there are substitutes for most of its subtests that are better validated. I can't imagine a scenario where the entire D-KEFS would be worth administering, given its length and the rather uncertain nature of the "executive functions" construct. That said, if the research catches up, I don't think it's a complete waste of time.

No one ever seems to want to engage me in a discussion of its psychometric shortcomings. If I ever mention it around here (my university department, not SDN), it's like I have dragged poor Edith's name through the mud or something.

I think it has its problems (largely, the lack of clinical validation out there which may reflect practitioners' reluctance to use it), but at least there is a single and large normative sample. One could argue that piecemealing together an EF battery with the standalone measures is more problematic because many of those subtests are derived from different norms.

Overall, I'm not a complete advocate of the D-KEFS but neither would I throw out the baby with the bathwater. Of course, I respect that there are differing opinions here.
 
I am unaware of an equivalent for design fluency...I always thought that was a novel aspect for the D-KEFS.

Ruff Figural Fluency Test. Been around for years.
 
My point was that it is/was marketed as a test to ferret out EF problems. If I don't have any EF clinical patients in my norms (showing that they perform differentially worse on said tasks), then how do I know its sensitivity for this purpose?
 
I am unaware of an equivalent for design fluency...

Completely out of my area, but wouldn't the RFFT qualify?

EDIT: Apparently while I had this box pulled up ERG decided to beat me to the punch.
 

Yup. And there are other versions of design fluency tests that are public domain and require only a blank piece of paper. Schretlen also has one as part of his CNNS normative battery sold by PAR.

As far as the DKEFS norms, they are not particularly good. No education correction? We won't even get into the psychometric problems. I have never seen a reputable forensic neuropsychologist use it. Why? No one wants to have to defend it in court.
 

Huh, haven't seen the Ruff in my travels. Odd but not impossible.

If we're criticizing education correction, we might as well go after IQ and memory batteries too. The Heaton norms are great, but the regression estimations for missing cells have their own interpretive problems.
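For readers unfamiliar with how regression-based demographic corrections work (the general approach behind Heaton-style norms), here is a minimal sketch. The coefficients and residual SD below are invented for illustration; real norms estimate them from a large normative sample, and actual systems use more predictors and score transformations.

```python
# A minimal sketch of regression-based demographic norming: predict the
# expected raw score from age and education, then express the observed
# score as a z-score against the residual SD. All numbers are invented.

def adjusted_z(raw, age, edu,
               b0=30.0, b_age=-0.15, b_edu=0.75, sd_resid=5.0):
    """z-score of an observed raw score relative to the score predicted
    from age and years of education in a (hypothetical) normative sample."""
    predicted = b0 + b_age * age + b_edu * edu
    return (raw - predicted) / sd_resid

# The same raw score can be above or below expectation depending on demographics:
print(round(adjusted_z(32, age=70, edu=8), 2))   # 1.3  -> strong for a low-education elder
print(round(adjusted_z(32, age=40, edu=18), 2))  # -1.1 -> weak for a young, educated adult
```

The interpretive problem raised above is that when a demographic cell (e.g., very old, very low education) is sparse in the normative sample, the regression extrapolates, and the predicted score and residual SD there are less trustworthy.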
 
I don't use the DKEFS, mostly because it is clunky to administer, but I like some of the subtests. Norms aren't a huge concern to me. Obviously it is important to keep track of how they are derived and where the holes are, but I think functional data is more important. For example, I'll include tasks like inhibition of saccadic eye movements or testing for Myerson's sign. I use clock drawing, but I don't score it; I just look for disorganization or neglect and describe it qualitatively, and I take a similar approach with the Rey complex figure. I generally strive to have a psychometrically solid battery, but I supplement it with functional exploration.

You'll find people who make psychometric arguments that comparing normative score differences on, say, the BVMT (visual memory) versus the HVLT (verbal) is advisable because of the sampling issues, but the tests aren't really functionally equivalent. I think we have to be careful with more subtle interpretations of functional differences, and it must be done in the context of the patient's history. It's all about pattern clusters to me: how do things hold together? E.g., if I have someone making source memory errors, doing terribly on List B (CVLT-2), perseverating on the WCST, and with reported abulia/personality changes, I'm thinking strongly about fronto-subcortical dysfunction (provided they do okay on other tests). An isolated statistically impaired performance isn't going to get me too worked up unless it is really loud, and then I am going to try to triangulate and replicate it somehow with a different instrument.

Well said. It's been said before (maybe by you) that neuropsychology should move away from an overfocus on specific scores and rely more on patterns/clusters that have functional and behavioral correlates.
 
I am all for assessing function too, but with such ecological validity problems in our instruments, what do I do with Tower, fluency, and Twenty Questions scores of 5 or 6? Does that tell me anything about how they are functioning out in the world? Maybe; I don't really know. I don't know what patients with documented/validated EF problems typically score on these tests...
 

Agreed. I have never had a patient with frontal dysfunction (e.g., frontotemporal dementia, ACoA aneurysm, etc.) where I thought, "gee, a co-normed battery of tests would help me figure this out." The behavioral presentation typically speaks for itself, and there are less clunky instruments, as well as patterns on others, as you mention, that give you the needed information in more subtle cases, assuming you have the appropriate background knowledge. Not to mention collateral information in cases of dementing disorders. My gripe with the norms is that this is always offered as a "major" strength of the DKEFS.
 
As for this topic...all of those assessments pop up pretty regularly in charts, but for different reasons. I think the RBANS is fine strictly as a way to collect some objective data to recommend a full assessment, but the norms are "meh." The DKEFS is interesting to learn, but again...the norms...it may or may not be useful depending on the population being assessed. For qualitative reasons I like its versions of Trails and the Stroop, but some of the other measures are far less useful. Boston Naming...eh, it is part of any full neuropsych battery, though it doesn't provide a ton of additional data that you probably didn't already know. Lastly, the CVLT...which I love. It has some limitations, but there is a great deal of support for it in the literature, it fits well with other measures, and it provides a ton of good data...if used correctly. I think it offers some of the most interesting opportunities for learning, as people have used it in all sorts of patient populations.
 
Could some of you more experienced with neuro speak to the BNT vs the NAB?
 
Could some of you more experienced with neuro speak to the BNT vs the NAB?

-It's always nice not to show a racist symbol to an African-American patient.

-No one under 30 knows what the very last item is.
 

This is the Boston?

I know very few people get "Shoe" on the NAB.

Our office joke is that we always test for Axis II when they get shoe.
 
Could some of you more experienced with neuro speak to the BNT vs the NAB?

Big, colorful pictures that are easier to see. The NAB has 31 items and may be quicker to administer, unless the person does not meet criteria for the reverse-administration rules on the BNT. Also, the BNT has a time limit for each item.
 
The BNT has been hijacked by SLPs...while the NAB is firmly with psychology. I know the BNT better, but the NAB version is fine...and shorter.

Have you seen a lot of scope creep from SLPs?
 