Feedback on Q Interactive administration of WISC-V

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

aly cat (Medical Science Liaison, 10+ Year Member; joined Feb 3, 2009)
Has anyone used the electronic WISC-V (Q-Interactive) administration? I've just started a small side practice and need to decide today whether to order the traditional test kit or make the jump. I still need to review the psychometric stuff, but wanted to a) generally hear about experiences/impressions and b) figure out what is included in the "starter kit." I want to do the free trial to see how I like it, but they've indicated that the free trial does not include use of the starter kit... and two different salespeople have been unable to tell me what is included in it. I've deduced the blocks for BD (which I have from my WAIS kit), but am unsure what else I would need. Thanks!

Are you talking about the xbass excel system?

ETA: no, I don't think you are talking about the xbass. Oops.

I am vehemently against on-screen admin. I believe:

1) They'll make seeing the norms harder and harder. Eventually this will be a near impossibility. (Congratulations, you're no longer a professional, you're a technician)

2) They'll start selling administration credits in bulk.

3) They'll publish some bad research that indicates bulk administration by teachers is valid.

4) Students will then be given IQ tests, and a printout will be provided with some bad CYA measure like "These results need to be interpreted by a licensed professional."

5) Now the bulk of peds referrals are gone.

6) Because you can't see the norms anymore, you're stuck figuring out how to re-test when parents want a second opinion.

7) Forensics is over. You can't testify in any legal cases, because you can't explain what happens between "patient selects X" and "here are the scores".

8) You'll start seeing data mining from real clinical data, which is suspect unless you know all the details of the cases. An older test had a similar problem: they recruited a bunch of people with specific problems and included them in the normal controls.
 
I'd be more OK with having to subscribe for norms than with on-screen administration (for the reasons PSYDR mentioned), especially if the funds were actually used to continually update the norms. Also, I don't know if anyone's actually done research supporting the equivalence of some of the on-screen subtest administrations, which is potentially problematic.

I'd be ok with just recording performances via electronic means rather than on a written protocol (e.g., to save paper and possibly cut down on simple scoring mistakes), but only if I remained in control of the data and scoring at all times. But as of now, I still do everything by paper.
 
I'm very hesitant about fully computerized scoring, even though people use it all the time. Remember that software glitch with the CVLT-II in the VA system? A systematic error that took a while to find. How many blindly scored, incorrect protocols slipped through?
 
Was it a CVLT-II glitch, or MMPI-2-RF? I think there might've also been one with the SLUMS.

Either way, I agree. At the very least, get used to scoring them by hand so you know how to do it and how the scores are derived, and review all computer scoring printouts to be sure they make sense. Although that's a lot harder to do with the longer self-report inventories. I'd sure hate to score all my RFs by hand.
 
Also, I don't know if anyone's actually done research to support the equivalence of some of the on-screen subtest administrations, which is potentially problematic.

It's been done, but (surprise!) mostly by Pearson, at least in what I found in a quick lit review - http://www.helloq.com.au/userfiles/830351450329565.pdf

I know there's research more broadly suggesting differences in paper vs. computer testing outcomes for similar constructs (e.g., in articles on SAGE Journals).
 
I think the system is still too glitchy and sensitive. It's so easy to make mistakes and so difficult to correct them.
 
It's worth adding that while it's a separate program, Q-Global is a glitchy, unreliable mess. I've often ended up scoring things by hand rather than using it because it was constantly crashing. Not sure how much Q-Interactive needs to be able to "interact" with it, but if Q-Global is down and you can't administer and/or have to wait to get results, I could see that being a major problem.
 
Heck no. There is not sufficient proof that it is equivalent to the existing measures. I also have significant concerns about how the data are handled. Oh, and the continued release of "new/updated" assessment measures with minimal support; it's just bad form.

PSYDR also laid out some other concerns that I wholly support.
 
I have not seen them myself, but my understanding is that there are some potentially HUGE differences in the tasks that, even if the data somehow work out in support of consistency, are worrisome. Take WAIS Coding: I've heard that it's multiple choice (select which symbol should be paired) rather than writing. That alone completely changes the nature of the test. For other measures it might be less of an issue, but I'm very wary of claims of equivalence.

Take this comment with a grain of salt, as I've only heard this info second-hand. Still, it's frustrating to hear Pearson and other players claim equivalence without solid supporting evidence.
 
You all are confirming my hunches... ordering a traditional kit. Thanks for the feedback.
 