Thanks for interpreting what he was trying to say; I've noticed a few people here have had to do that with his replies. As others have noted, it seems opinions differ.
So if I'm understanding correctly, you're implying he's trying to say they're all "not good." However, isn't it a bit reckless to make comments on a professional forum implying "eh, you could just go code your own," as if we don't already have enough people winging it with actual snake-oil nonsense? Let's not get those mixed up. Sure, maybe @MCParent has more experience with, and a deeper understanding of, the underpinnings of these measures, and so thinks that since he can code his own, why can't others? That's the message he's presenting, which is why I laughed: it's an absurd thing to suggest, let alone to recommend that others consider doing the same. He's of course welcome to do that himself.
They are, and you don't need to repeat yourself. It's not being commercial that makes a measure valid; it's that it's been designed around and based on established tests. It sounds like you assume everyone knows, or should know, how to take the underpinnings of established tests and build their own? Sure, I could build my own sports car in my backyard, or a doctor could build an MRI machine in their garage, but are you going to take that car on public roads or put real patients in that machine? There's a big difference between playing around with software to see what you come up with and taking it from your laptop into clinical use. You're right that a commercial product doesn't always equal the best product, but there's a range of quality and utility; it's not black and white. I'd rather use an established product and keep an eye out for something better down the line.
There's a difference between academia/research and real-world use. We all know this. Why would a day-to-day practitioner email some lab at a research school to get this information, code their own program, and then turn around and use it in the wild? That's different from lab research and test development in controlled settings. Most clinicians aren't doing this.
On my end, I know of most of these measures because I trained under and worked alongside neuropsychologists and psychologists who use these CPTs. They're board certified, just like each of you. So they're all wrong and relying on "not diagnostically useful" materials? Yes, I asked for thoughts, and I certainly got thoughts and replies, which is great. When @MCParent designs his own measure that's more diagnostically useful, with far fewer false positives than existing measures, I'll be in line to purchase it.
I'm sure they would. It's a business. So who are the arbiters of valid clinical measures? Last I checked, Pearson, among other companies, has neuropsychologists and other practitioners and researchers both on staff and on consult when designing and developing tests. The Columbia is a suicide rating scale, and the Diamond is what, an anxiety rating scale? Companies like money, but I'm guessing monetizing a suicide rating scale, in particular, would not be a great look. Good attempt, bad analogy.
My takeaway from @WisNeuro's and @MCParent's views here is that they find most CPTs have a false positive rate that's uncomfortably high. Fair enough. However, many of us use these measures alongside other assessment tools without issue, day in and day out. If there's a better one out there, let's hear it.