ChatGPT

AbnormalPsych
Board Certified Psychologist
What do you all think? Apparently, at my college, everyone from the President on down is trying to figure out how this impacts education. Informal student polling shows that a majority of students have used it at this point. In addition to writing papers, it can even correctly answer multiple-choice questions.

In a recent meeting, a STEM colleague also made a comment about AI taking away therapists' jobs in the coming years. I chuckled.
 
The [competent] psychotherapist will be the last profession 'replaced' by AI.

Edit: But if society collapses due to AI taking over everyone else's jobs...we're plenty boned.
 
I think some elements of psychotherapy (mainly cognitive-behavioral approaches), like psychoeducation and tracking/monitoring thoughts and behaviors, are already being done via printouts or mobile apps that could easily be replaced by AI, and potentially made more effective or more engaging as a result. I think other approaches to psychotherapy (like psychodynamic) are significantly less likely to be impacted.
So I think the work we do might change. We might spend less time teaching new concepts and more time checking comprehension and correcting misunderstandings. A little less time delivering interventions directly, but still constantly assessing to ensure good intervention fit, and adjusting treatment approaches as needed. I'm curious to hear other people's thoughts on the ways our work might change if augmented (not replaced) by AI.
 
I hear what you're saying about the algorithmic/linear application of specific cognitive-behavioral worksheets/techniques, but...

In my experience (granted mostly in a VA setting these past years), protocols hardly apply themselves to people with any level of significant psychopathology let alone co-morbidity.

There are engagement/motivational issues to address with most patients...these will be heavily resistant to automation (I suspect). Anyone who has tried to incorporate 'self-directed' learning/therapy involving workbooks, worksheets, algorithms, websites, or phone apps with this population (or, likely, any other) has likely had, at best, 'mixed success' with them.

There are important case formulation tasks that involve integration of a) test data, b) self-report, c) direct observational data, d) years of clinical experience + experience being a human being trying to navigate human life tasks, motivational slumps, tragedies, interpersonal betrayal, etc., and e) the basic and applied empirical literature---all of which have to be compared, weighed, cross-checked, and synthesized into a coherent treatment plan and then negotiated and explained to patients. On an ongoing basis. Often in real time.

Ever since 'Wanted: A Good Cookbook' we've been seeking a 'holy grail' of {input: patient, diagnosis, testing data} ---> {output: optimal treatment plan}, but beyond broad strokes and certain modular sub-protocols (e.g., the sleep restriction algorithm in the CBT-i protocols/phone apps), I haven't seen anything too impressive in clinical practice that doesn't need a clinician 'pilot' to bring it in for a successful landing. Even Meehl was outspoken about the need to include in our actuarial models 'inputs' that come straight from the clinician (e.g., a rating or determination of whether bona fide psychotic symptoms are present [clinical judgment]), which is then fed into the algorithm.
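
To make that Meehl point concrete: a toy sketch (every feature, weight, and number below is invented for illustration; this is nothing like a real clinical tool) of a mechanical combination rule where one of the inputs is itself a clinician's rating:

```python
# Toy Meehl-style actuarial combination. All features, weights, and
# cutoffs are invented for illustration; this is not a clinical tool.

def actuarial_risk_score(phq9_total: float,
                         prior_episodes: int,
                         clinician_rated_psychosis: bool) -> float:
    """Mechanically combine inputs, one of which is a clinician rating."""
    score = 0.02 * phq9_total + 0.10 * min(prior_episodes, 5)
    if clinician_rated_psychosis:  # the 'clinical judgment' input
        score += 0.30
    return min(score, 1.0)

# The hard part (detecting bona fide psychotic symptoms) still comes
# from the clinician; the algorithm only combines the inputs.
print(actuarial_risk_score(phq9_total=18, prior_episodes=2,
                           clinician_rated_psychosis=True))
```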

Ultimately, it's an empirical question. We should have a decent approximation of an answer in 10-30 years. Meantime, the patients continue to queue up for effective psychotherapy.
 

People who think that CBT can be automated much more easily than other therapeutic methods have never been trained in actual CBT. These are the people who think that you can read a manual and get right into therapy without much else in terms of training. That's simply not CBT. It's what poorly trained people think CBT is.
 

I don't think that psychotherapy is in principle any more immune to replication by AI than any other complex human activity.

The problem is acquiring a high quality training dataset. The vast majority of psychotherapy on offer out there is very poor quality. I can't imagine how one would go about ensuring that only examples of good psychotherapy made it into one's training set.

And if you train your robot therapist on bad data, you are going to get a bad result. GIGO, as the computer people used to say.
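
If you wanted to even attempt it, you'd need something like fidelity-gated curation. A toy sketch below, which assumes you somehow already have independent fidelity ratings for every session (producing those ratings is exactly the expensive part):

```python
# Toy sketch of fidelity-gated curation. Session records and the 4.0
# cutoff are invented; the ratings themselves would require trained
# human raters, which is the expensive part.

sessions = [
    {"id": "s1", "transcript": "...", "fidelity_rating": 4.5},
    {"id": "s2", "transcript": "...", "fidelity_rating": 2.0},
    {"id": "s3", "transcript": "...", "fidelity_rating": 4.8},
]

FIDELITY_CUTOFF = 4.0  # keep only sessions rated as competent therapy

training_set = [s for s in sessions if s["fidelity_rating"] >= FIDELITY_CUTOFF]
print([s["id"] for s in training_set])  # -> ['s1', 's3']
```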
 

Very expensively. This is why very high-quality RCTs are so expensive to run: multiple treatment arms, blinded independent evaluators, trained raters reviewing sessions to rate fidelity, etc., etc. I've been on a couple that used gold-standard RCT methodology, and these were high-six- to seven-figure grants just to get group sizes large enough to meet power requirements.

From working out in the real world, exactly as you said, I wouldn't trust any data that uses general chart data. The variability in treatment is huge. It is exceedingly clear that some people are trained at the very basic surface level of some treatments and really have no idea what they are doing.
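
Just to put rough numbers on the power piece, here's a quick, purely hypothetical calculation (the effect size and thresholds are assumptions, not figures from any actual trial):

```python
# Hypothetical power calculation: participants per arm needed to detect
# a modest between-group effect (Cohen's d = 0.3 is an assumption, not
# a figure from any particular trial).
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(round(n_per_arm))  # ~175 per arm, before accounting for attrition
```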
 
I’m most curious about how this is going to impact higher ed. I teach undergrad stats, and I'm pretty sure one student's entire project was generated by ChatGPT last semester, though it was impossible to tell (that's often the case with cheating anyway). What could I even do about that without asking every single student to come in and explain their work one on one? I mean, maybe that's one answer to this, but it's not that economical. We are all going to have to adapt in some way to this. I can't imagine we will go back to a time without it, just like we're probably not going back to a time where people don't have access to all the information in the world on their phones. Not sure what this will look like a decade down the line.
 
And I'd say that even a golden robot algorithm trained merely on RCT data would have a hard time in the real world with the average VA patient presenting for care. Anyone working on the 'front lines' knows the truth. Probably <15-20% (and this is likely being VERY generous) of VA mental health patients show up to intake ready, able, and willing to just be immediately 'plugged in' to a CPT, PE, CBT-i, or even CBT-D (or other) manualized protocol treatment. RCT data are great (for what they are) but, at the end of the day, it's obvious that their findings only apply/generalize to the veteran population who are ready and willing to at least begin/try to meaningfully participate in a protocol therapy.
 

Yeah, it is night and day doing PTSD treatment inside and outside of the VA system. Luckily, I got a lot of experience with PTSD treatment prior to my VA days. Overwhelmingly, people presented ready for treatment, were generally compliant, and made steady and expected progress, even the handful of combat vets I saw outside of the VA. Once I got to the VA, it was a completely different story. I think if I had started with the VA population, I would have quickly become jaded and left the PTSD realm entirely early on.
 
1) If we are just talking about manualized CBT: the UK's CBT app has not changed demand for in-person CBT.
2) Truly skilled therapists reinforce behaviors through nonverbal techniques, which cannot be performed via text.
3) Text is not HIPAA compliant, absent substantial costs.
4) Scenario: something super horrible happens. You need a dispassionate and uninvolved person to talk it through. An app is not going to help.
5) AI is going to be limited, because we are dealing with infrequent abnormalities. The variety of things I've heard patients go through remains beyond my imagination. Some have not understood how messed up something was. Those infrequent and unidentified problems will be a barrier to AI.
 
What could I even do about that without asking every single student to come in and explain their work one on one? I mean, maybe that's one answer to this, but it's not that economical.
Maybe random "audits" where selected students are chosen to come in and demonstrate that they have reasonably similar in-person abilities to their less personal submissions? Sounds like a PITA, but this may be the world we live in. If I had such suspicions in my classes, that's what I would do. Initially I would do this purely to get a sense of what is going on, with no grade penalties. If it turns out there is questionable correspondence between assessment methods, more significant steps would be necessary.
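
Mechanically, the random draw itself is trivial; something like this (the roster and audit rate are made up):

```python
# Illustrative random audit draw; roster size and audit rate are made up.
import random

roster = [f"student_{i:02d}" for i in range(1, 31)]
AUDIT_RATE = 0.15  # audit ~15% of submissions each term

rng = random.Random(2023)  # fixed seed only so the example reproduces
audited = rng.sample(roster, k=max(1, round(AUDIT_RATE * len(roster))))
print(sorted(audited))
```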

Several years back we added a 1-credit "Fluency" course requirement to our ABA graduate program. It's the last course they take; students have to take a proctored test (online or in person) and have to get a grade of 92% or better to pass. Material covers everything from the 6-course ABA sequence, and we have multiple versions of the test. Students can retake it until they pass. They have to meet with the instructor (typically via Zoom) and review every item they got wrong. I hang this over their heads throughout the program (which is hybrid, with many non-proctored online quizzes): cheating is not going to help you with fluency, and it's not going to help you out come time for your BCBA certification exam. It's relatively time intensive; there are typically about 10 students per course per year. Not necessarily scalable to a large undergrad section of intro stats. However, instructional methods (including methods of evaluation) need to adapt to changing realities.
 
Personally, I think the hype around ChatGPT outpaces its actual capabilities. It's like talking to an overconfident first-year graduate student. I asked it basic questions about well-known theories of psychology and it provided a lot of misleading information. I'll admit that some of the stuff it can do with data visualization is pretty impressive, but we've had bots for a while that can write poetry like Hemingway, for instance.
 
One of the most common criticisms of ChatGPT I've seen is that it can be "confidently incorrect" about a good bit of factual information.
 
Yeah, I was trying to see if it would help me generate some templated language to tinker with. It was not helpful for this reason.
 
I've been trying out ChatGPT over the past few days and am really impressed by it, but do not trust its sources. It literally makes up citations, and they read as totally legitimate. Apparently, this is called 'hallucinating' in the AI world.
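
One partial safeguard: run any citation it gives you past a bibliographic database before trusting it. A rough sketch against CrossRef's public API (the endpoint and query parameter are real; the citation string below is the kind of thing ChatGPT invents):

```python
# Rough sanity check for a suspect citation via CrossRef's public API.
# Fuzzy search returns a nearest match for almost anything, so you still
# have to eyeball whether the top hit's title/authors actually match.
import requests

def crossref_top_hit(citation: str) -> dict | None:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

hit = crossref_top_hit("Smith & Jones (2019) Mentalization in adolescent BPD")
if hit:
    print(hit.get("DOI"), hit.get("title"))
```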
 
I tried checking it out a week or so ago. Do you still have to provide it with your cell phone number and disable ad-blocking on your browser and/or anti-malware software? I balked at that.
 
I've always wanted to make a therapy training video game that's a lot like the dialogue trees in old role-playing games. You're sitting with a patient, listening to them, and some responses come up, and you choose one based on a certain orientation or manualized treatment.

Sounds pretty boring, right?
 
I opened an account and entered a question from my class's first assignment, basically looking to identify behavioral measurement strategies for recording on-task behaviors in a classroom setting. It gave a decent, by-the-book list of definitions of a few terms/techniques. It was about C to C+ level work: technically accurate, but missing some of the nuances I was looking for. Basically, it performed like the student who can skillfully identify and (maybe) paraphrase the correct answer from the text, but not really understand what that answer means or how to apply it outside the context of the assignment. Still, it did better than I would've thought. I'll be keeping an eye on things, for sure.
 
I’ve been playing with it with some client questions. It’s kind of interesting.
 
100% real output:


Moderator: Good evening, and welcome to tonight's debate on the best way to treat Borderline Personality Disorder. We have two leading experts in the field, Dr. Peter Fonagy and Dr. Otto Kernberg who will be presenting their approaches and explaining why they believe their methods are superior.

Dr. Fonagy: Thank you for the opportunity to present my approach to treating BPD. I believe that the most effective way to treat this complex and challenging disorder is through a combination of mentalization-based therapy and psychoanalytic therapy. Mentalization-based therapy focuses on helping the client develop a better understanding of their own thoughts, feelings, and behaviors, as well as those of others. By improving mentalization, or the ability to think about mental states, the client is better equipped to regulate their emotions and form stable relationships.

Moderator: I apologize for the sudden change, but it seems that Dr. Kernberg is unavailable for the rest of the debate. Instead, we have a house cat who will be presenting their approach to treating BPD.

House Cat: Meow. (The house cat proceeds to lay down and take a nap)

Moderator: I'm sorry, it seems that the house cat is not able to participate in the debate. Dr. Fonagy, would you like to continue your presentation?

Dr. Fonagy: Of course, thank you. As I was saying, mentalization-based therapy is based on the idea that by improving mentalization, the client will be better equipped to regulate their emotions and form stable relationships. This approach is supported by a strong body of empirical research, providing solid evidence of its effectiveness.

Moderator: Thank you, Dr. Fonagy, for your presentation. Unfortunately, we are unable to hear the perspective of the house cat, but I hope that this debate has provided valuable insight and sparked further discussion on the best way to treat BPD. Thank you, everyone, for joining us tonight.
 
I see what others mean by saying that it puts out C+ work when asked direct technical questions, but I continue to be surprised by the extent to which it can riff on a situation or style.

Prompt:

Write an argument between Noam Chomsky and B.F. Skinner in the style of Shakespeare

Output:

Chomsky:
Good morrow, Skinner, well met in this place, Where thoughts and words may freely interlace.

Skinner:
Good morrow, Chomsky, it is a fair day, And we, two thinkers, shall engage in play.

Chomsky:
Play? Nay, good sir, I come to argue here, With thee, who says that man is void of peer.

Skinner:
And I, dear Chomsky, to prove you wrong, For man is naught but animal, and strong.

Chomsky:
You err, good Skinner, man is far above, His mind a marvel, powered by love.

Skinner:
Your words are fine, but science holds the key, And it is through behavior that we see.

Chomsky:
Your science blinds you to the mind within, The very essence of the human being.

Skinner:
Nay, the mind is simply stimuli and response, The workings of the body and its sense.

Chomsky:
Your theory lacks the depth of human thought, And ignores the soul that cannot be bought.

Skinner:
Your thoughts are grand, but unsupported still, By science, reason, and the workings of the will.

Chomsky:
I stand firm, my words I do defend, For man is more than just a creature's end.

Skinner:
And I, good Chomsky, do not bend my knee, For science holds the truth, and sets us free.

Chomsky:
So be it, Skinner, let our words be heard, For truth, in time, shall always come to word.
 
And last one, promise - this is what happens when you tell it to do the same but pretend to be a speaker of English in the 17th century:

Verily, 'twould be a grand debate, Between Chomsky and Skinner, both most great, On matters of the mind and how 'tis made, And shaped by forces in and out of shade.

Chomsky, a man of reason, wit, and might, With tongue so sharp and eyes that shine so bright, Would argue for a language innate, A universal grammar that doth wait.

But Skinner, of the learned and measured kind, With gentle voice and wisdom of the mind, Would counter that our words and thoughts are wrought, By habits learned through punishment and naught.

And so they'd argue, back and forth they'd go, With each new point and counterpoint to show, That neither man was wrong or right alone, But each had part of truth within their own.

Thus would they battle, as the hours roll, With passion, wit, and learning, soul to soul, For each held firm to what they thought was right, And sought to win the argument with might.
 
Funny, I came here to post the question of ChatGPT and its impact on our work and the field of psychology.

Regarding academia and the classroom:

I'm not in academia, but I'd say schools need to use that program someone created to detect ChatGPT-created essays, answers, etc. Not to sound too old school, but schools should treat use of it for things like assignments as grounds for an F or suspension. There's a time and place for the study of ChatGPT, and perhaps even courses on its usage. But for most courses and most assignments, ChatGPT shouldn't be permitted. Barring the ability to detect or stop its use for that, well, time to get creative and find other ways to assess and grade students.

Regarding practice:

It depends. For things like neuropsychology and assessment, I think it'll likely replace most psychometrician work, along with much of the work of interpreting data and writing reports, and, well, much of what neuropsychologists and assessment-based psychologists do. Although I'd hope our field would highlight and protect the ethics and benefits of having a real human being with expertise and experience in those areas take the AI's data and report, review it, and provide applicable findings, feedback, recommendations, and further services as needed. I mean, I think neuropsychology is fascinating, but much like neurology and radiology, AI will likely do the diagnostics better; the role of a human expert reviewing its output, I think, needs to be protected and assured.

For therapy: probably minimal impact. It might be useful as a type of "Web MD" for therapists and psychologists, much as "Web MD" is for urgent care and family doctors, but I doubt it will replace talk therapy, family therapy, or couples therapy. If anything, it might "aid" diagnostics and then offer treatment modality recommendations (much as it will likely offer psychiatrists medication prescribing recommendations), but I think the human element of talk therapy is its most enduring factor. Imagine an AI trying to run a family therapy session. 🤣

Its biggest use potential is probably assessment and also screening. Maybe offering up treatment recommendations based on diagnostics. Then again, many assessment tools already do that, and most training programs and intervention protocols provide much of this. Its role in practice is likely diagnostics.

Academia has its work cut out facing ChatGPT; assessment to some extent, but it can still be a tool rather than a replacement.

As for overall society, ironically it appears best at writing code and programming. So its likely impact on the job market is making many tech jobs redundant and unnecessary. The irony: it replacing its creators. I'm sure there's something from Freud that could be an analogy for that. It also appears poised to disrupt content creation like writing.
 
'Branched' versions of objective personality assessments (hello, 'MMPI-3-RF-CGPT') are definitely in our future: a 100,000-item pool and branched assessment strategies where earlier responses/elevations on scales lead to optimized, automatic prompting with the items of maximal validity to 'home in' on identified clinical constructs.
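
For the curious, the core of such a branched strategy is just an item-selection loop. A crude sketch below; the 2PL item parameters and the trait-update rule are stand-ins, not a real IRT engine:

```python
# Crude sketch of branched item selection: administer whichever unused
# item is most informative at the current trait estimate. Item
# parameters and the update rule are stand-ins, not a real IRT engine.
import math

items = [  # (item_id, difficulty b, discrimination a)
    ("itm01", -1.0, 1.2), ("itm02", 0.0, 1.8), ("itm03", 0.8, 1.5),
    ("itm04", 1.5, 2.0), ("itm05", -0.5, 0.9),
]

def information(theta: float, b: float, a: float) -> float:
    """Fisher information of a 2PL item at trait level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

theta = 0.0          # current estimate of the clinical construct
administered = set()
for step in range(3):
    item = max((i for i in items if i[0] not in administered),
               key=lambda i: information(theta, i[1], i[2]))
    administered.add(item[0])
    endorsed = step % 2 == 0            # fake response, just for the demo
    theta += 0.5 if endorsed else -0.5  # crude placeholder update
    print(step, item[0], round(theta, 2))
```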

Also, the future of education will be compulsory in-person (verified by blood draw/ DNA analysis) oral examinations.
 
Academia has its work cut out facing ChatGPT; assessment to some extent, but it can still be a tool rather than a replacement.

I think ChatGPT might force educators into less expedient evaluation methods, which is probably ultimately beneficial for student learning, but difficult to implement for a group of already extremely busy people. As for research, I've been experimenting with it to see if it will automate the most basic functions of my job (i.e., put the data points in this paragraph into a table) and have been pretty disappointed with the results. I've also noticed that the further it gets from the initial request, like when I try to add corrections, the more the hallucinations tend to increase.
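
For what it's worth, for that narrow "data points into a table" task, a deterministic extraction at least can't hallucinate values. A toy example (the pattern and paragraph are invented):

```python
# Toy deterministic extraction: pattern and paragraph are invented to
# illustrate that a regex won't hallucinate values the way a chatbot can.
import re

paragraph = ("Group A improved from 22.4 to 14.1 on the PHQ-9, "
             "while Group B went from 21.8 to 17.5.")

rows = re.findall(r"Group (\w+).*? from (\d+(?:\.\d+)?) to (\d+(?:\.\d+)?)",
                  paragraph)
print("Group | Pre  | Post")
for group, pre, post in rows:
    print(f"{group:5} | {pre:4} | {post}")
```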
 