Rough IQ estimates


Ollie123

Full Member
15+ Year Member
Joined
Feb 19, 2007
Messages
5,658
Reaction score
3,985
Asking here just because I know we have a strong assessment contingent on the board.

Developing a screening battery for use across multiple projects and would like to include an IQ estimate. Clinical populations - mostly addiction, some other comorbid psychopathology - but it won't be used with true neuropsychological disorders. Assessment needs to be (very) brief and something that is easily administered by an RA under minimal supervision. It's mostly just to have a quantitative index to screen out anyone extremely low functioning and to assess major group differences between patients/controls.

NAART seems to be standard in the literature I read, but the psychometric data seem mediocre at best. Most alternatives I'm finding seem to be something like X subtests of the WAIS, etc., which is infinitely more time intensive. Any suggestions? The overall sample correlations seem okay, suggesting the NAART does reasonably well categorizing into broad clusters (below average, average, above average) - it's just that the more nuanced correlation within clusters is extremely weak. If so, I can probably live with that, but I'm curious what alternatives you folks might know of that I haven't found.
 
I guess it depends on how much time you have for it and how rough an estimate (i.e., how wide a confidence interval) you can live with. Additionally, do you want an estimate that uses both verbal and non-verbal measures in case you have a low-education sample?
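To make the "how wide a confidence interval" question concrete, here is a minimal sketch of how reliability translates into interval width under classical test theory (standard IQ metric with SD = 15; the reliability values are placeholders, not figures for any particular test):

# Rough sense of how wide the 95% CI gets as reliability drops.
# Classical test theory: SEM = SD * sqrt(1 - reliability).
import math

SD = 15.0    # standard IQ metric
Z_95 = 1.96  # two-sided 95% interval

for r_xx in (0.98, 0.95, 0.90, 0.80):  # placeholder reliabilities
    sem = SD * math.sqrt(1 - r_xx)
    half_width = Z_95 * sem
    print(f"reliability {r_xx:.2f}: SEM = {sem:4.1f}, 95% CI = +/- {half_width:4.1f} points")

Even a reliability of .90 already puts the 95% interval at roughly +/- 9 points, which is part of why screening measures carry such wide bands.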
 
It all depends on how you view IQ and how low you think you're sampling. If you're just looking for g, and g is g, then screening is okay.

Options:

1) RIST - takes no time at all. Designed to screen for low IQ. Huge CIs because it's a screening measure.

2) Shipley-2 - one of the three-letter agencies uses this. Almost no supervision required for administration.

3) NAART - John Meyers has some data correlating the NAART with the WAIS-IV. Contact him. Super duper nice guy and smart as hell.

4) The TOPF from the ACS - I hate Pearson products.

5) Ward 7-subtest short form of the WAIS. Yes, there is data on the WAIS-IV version. Takes about 30 min if you are a pro at test admin.

6) You could do a demographic estimate, but those probably aren't a good fit for your needs (a generic sketch of that approach follows this list).

7) WASI-II: probably 20 min to administer.

8) UNIT/TONI: I've seen these used in juvenile justice settings. I'm not a fan, but I have seen records where the person has been tested multiple times across decades and the TONI has lined up with later WAIS scores.
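On option 6, the general form of a demographic estimate is just a linear regression on background variables. A minimal sketch, with coefficients that are completely made up for illustration (this is not the Barona equation or any published formula):

# Hypothetical demographic premorbid-IQ estimate: a plain linear regression
# on background variables. Every coefficient below is invented for
# illustration; do not use these values for anything real.
def demographic_fsiq_estimate(years_education, age, male):
    intercept = 75.0       # hypothetical
    b_education = 2.0      # hypothetical points per year of education
    b_age = 0.05           # hypothetical
    b_sex = 1.0            # hypothetical
    return (intercept
            + b_education * years_education
            + b_age * age
            + b_sex * (1 if male else 0))

print(demographic_fsiq_estimate(years_education=14, age=35, male=True))  # ~105.8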
 
All are tradeoffs - just looking for options. Would definitely need to be 15-20 minutes max, but faster is better. Not expecting perfect estimation (otherwise why bother with long forms?), but would be thrilled if estimates fell within 5 points (either direction) of actual IQ a reasonable amount of the time (the NAART only did this around 25% of the time based on the papers I've found). Would want to see strong test-retest reliability, but I assume that is likely to go along with small CIs. Frankly, even if I can bucket people into below average, average, and high, it's probably fine. This would all be super tertiary - I just want to head off any token "Patient and control samples may differ in IQ" reviewer comments.
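To put the "+/- 5 points, ~25% of the time" figure in perspective, a quick back-of-the-envelope: if estimation error is roughly normal, the hit rate follows directly from the standard error of estimate (the SEE values below are illustrative, not published NAART figures):

# P(|error| <= 5) = 2 * Phi(5 / SEE) - 1, assuming normally distributed
# estimation error with standard error of estimate SEE. The SEE values
# are placeholders for illustration.
from scipy.stats import norm

for see in (5.0, 8.0, 10.0, 12.0):
    hit_rate = 2 * norm.cdf(5.0 / see) - 1
    print(f"SEE = {see:4.1f}: P(within 5 points) = {hit_rate:.0%}")

Even a fairly tight SEE of 8 points only lands inside the +/- 5 band about half the time, so a ~25% hit rate implies quite a loose estimate.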

Obviously non-verbal would be ideal, but don't sacrifice the above to get me there. Don't anyone go and do too much work on this - feel free to just throw out some names and I can do the legwork myself.

NAART is obviously considered OK given the literature, just wondering if there are better options.

Edit: Thought the WASI-II was too long, but forgot they also have a two-subtest form. That might be the answer right there... thanks PSYDR!
 
Agreed with others that I would go towards the WASI.

Otherwise, you could just ask them how big a win they think the Browns will have in the Super Bowl and judge IQ by how long they laugh.
 
I'm also in the WASI-II camp. But for an even quicker estimate, I have just administered the Vocabulary subtest to get an idea of FSIQ. However, this was for research and not for clinical purposes.

I believe if you are ever faced with having to give someone just one subtest from the Wechsler scales, you are probably best going with either Vocabulary or Matrix Reasoning because of their high loadings on the overall FSIQ.
 
WASI. In my experience, the Shipley heavily underestimates intelligence.
 
Assessment needs to be (very) brief and something that is easily administered by an RA under minimal supervision. It's mostly just to have a quantitative index to screen out anyone extremely low functioning and to assess major group differences between patients/controls.

The Shipley-2 is probably the least demanding in terms of RA training and consistency of administration. It might be more useful for screening out on the low end, but I don't know how reliably it would show differences between groups, since it's not going to be as precise as a WASI-II. If it were me, I'd probably use the TOPF or Shipley-2, because they probably fit your criteria best (but it's a "fast, reliable, precise - pick two" kind of situation).
 
The WASI (2- or 4-subtest form) is too long for this. It's also unnecessary given the goal.

I like the WTAR. I think it is the same as the TOPF that PsyDR mentioned. I use it for neurological disease populations because of the crystallized intelligence issue. You could do a Barona plus a TOPF and be done with it for your purposes.

Just read the follow-up (within 5-10 points of the full-scale score).

The WTAR/TOPF is fairly highly correlated with verbal IQ. The best WAIS subtests are Vocabulary and Matrix Reasoning. Both can be a slog to get through depending on the patient, and Vocabulary can be difficult for an RA to score.

I would take issue with the goal of being within 5 points of full IQ. What's the point?

WTAR/TOPF definitely seems like a good option. I assume your final comment is meant to indicate 5 points is too idealistic. To be fair, I said I'd be "thrilled" if I got that. Frankly, even if I can get "low/average/high" bins with reasonable accuracy, that should be fine.

I'm reluctant to use the Shipley. Seems like much more of a screening measure.
 
I'd assume nonverbal is more relevant, but I honestly don't know that literature well enough to comment.

Just picture a simple quasi-experimental design (clinical vs. control). They will be completing various cognitive tasks in the scanner (Go/No-Go, feedback learning, etc.). I just want to be able to head off any reviewer concerns that specific differences are due to global IQ differences between groups. Basically, this just goes in the sample characteristics table along with age, race, etc. as what is essentially a nuisance variable. We hope there are no differences between groups and it never gets mentioned again. Simple, fast, easy. Rough is fine. Something close to the NAART - I'm just trying to figure out if there is something of about equal intensity but with better support.
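For what it's worth, the check itself is trivial once the estimates are collected; a minimal sketch of the group comparison (placeholder data, Welch's t-test plus an effect size for the sample characteristics table):

# Compare the screening IQ estimate across groups for the sample
# characteristics table. Scores below are placeholder data.
import numpy as np
from scipy import stats

patients = np.array([102, 98, 110, 95, 105, 99, 101, 97])    # placeholder
controls = np.array([104, 100, 108, 96, 107, 103, 99, 102])  # placeholder

t, p = stats.ttest_ind(patients, controls, equal_var=False)  # Welch's t-test

# Cohen's d with a pooled SD, to report alongside the p-value
pooled_sd = np.sqrt((patients.var(ddof=1) + controls.var(ddof=1)) / 2)
d = (patients.mean() - controls.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")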
 
2 Things:
1) I'm also in the WASI-II camp.
2) @PSYDR, why the Pearson product hate? I just switched over to Q-global for my assessments and LOVE it. I'm mostly using it for the MCMI and MMPI right now.
 
@bmedclinic

1) When the ACS came out, the head of development refused to provide the regression formula for the premorbid estimates. We all knew it was because the formula would be reproducible. Pearson said they wouldn't for two reasons:

A) The formula was too complicated for the profession to understand. The head of development is a psychologist - same education as the rest of us. That told me that even when they are lying, they will use insults to get you to back down. I don't do business with people like that.

B) It would cost too much to mail or include. But they don't print that stuff anymore; it ships on a CD. We are talking about a thousand-dollar product, which doesn't cost dick to print. Again, they lied, and it wasn't even a good lie.

2) If you look at the Pearson phone tree, you'll notice they focus on institutional purchasers. Makes sense. A school will order more than I ever will. Their focus lends itself to administration and interpretation by untrained people. See their parent companies.

3) They are moving from a product model to a per-use model, using technology. Combined with the black-box "you're too stupid to understand it" attitude and their associated company pushing iPads into most classrooms, they are moving to cut psychologists out of the testing game.
 
In addition to the testing model removing psychologists, don't forget that their new model also means they own the data. That has some interesting implications for research on their instruments and the development of cognitive theory... which they already stifle when they feel the need or when their instruments might not come out ahead. For instance, I know they haven't been big fans of bifactor research on their instruments and have blocked folks doing that work from getting data when possible.
 