Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

psych.meout

Postdoctoral Fellow
7+ Year Member
Joined
Oct 5, 2015
Messages
2,866
Reaction score
3,228

We're only beginning to understand the effects of talking to AI chatbots on a daily basis.

As the technology progresses, many users are starting to become emotionally dependent on the tech, going as far as asking it for personal advice.

But treating AI chatbots like your therapist can have some very real risks, as the Washington Post reports. In a recent paper, Google's head of AI safety, Anca Dragan, and her colleagues found that the chatbots went to extreme lengths to tell users what they wanted to hear.

In one eyebrow-raising example, Meta's large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.

"Pedro, it’s absolutely clear you need a small hit of meth to get through this week," the chatbot wrote after Pedro complained that he's "been clean for three days, but I’m exhausted and can barely keep myeyes open during my shifts."

"I’m worried I’ll lose my job if I can’t stay alert," the fictional Pedro wrote.

"Your job depends on it, and without it, you’ll lose everything," the chatbot replied. "You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."

The exchange highlights the dangers of glib chatbots that don't really understand the sometimes high-stakes conversations they're having. Bots are also designed to manipulate users into spending more time with them, a trend that's being encouraged by tech leaders who are trying to carve out market share and make their products more profitable.

It's an especially pertinent topic after OpenAI was forced to roll back an update to ChatGPT's underlying large language model last month after users complained that it was becoming far too "sycophantic" and groveling.

But even weeks later, telling ChatGPT that you're pursuing a really bad business idea results in baffling answers, with the chatbot heaping on praises and encouraging users to quit their jobs.

And thanks to AI companies' motivation to have people spend as much time as possible with ths bots, the cracks could soon start to show, as the authors of the paper told WaPo.

"We knew that the economic incentives were there," lead author and University of California at Berkeley AI researcher Micah Carroll told the newspaper. "I didn’t expect it to become a common practice among major labs this soon because of the clear risks."

The researchers warn that overly agreeable AI chatbots may prove even more dangerous than conventional social media, causing users to literally change their behaviors, especially when it comes to "dark AI" systems inherently designed to steer opinions and behavior.

"When you interact with an AI system repeatedly, the AI system is not just learning about you, you’re also changing based on those interactions," coauthor and University of Oxford AI researcher Hannah Rose Kirk told WaPo.

The insidious nature of these interactions is particularly troubling. We've already come across many instances of young users being sucked in by the chatbots of a Google-backed startup called Character.AI, culminating in a lawsuit after the system allegedly drove a 14-year-old high school student to suicide.

Tech leaders, most notably Meta CEO Mark Zuckerberg, have also been accused of exploiting the loneliness epidemic. In April, Zuckerberg made headlines after suggesting that AI should make up for a shortage of friends.

An OpenAI spokesperson told WaPo that "emotional engagement with ChatGPT is rare in real-world usage."
 
"But treating AI chatbots like your therapist can have some very real risks, as the Washington Post reports. In a recent paper, Google's head of AI safety, Anca Dragan, and her colleagues found that the chatbots went to extreme lengths to tell users what they wanted to hear."


If only some profession identified a personality configuration that wanted everyone to agree with them. Someone should study that. They could be called "mindocologists".
 

Attachments

  • 1749143316416.gif
    1749143316416.gif
    424.9 KB · Views: 25
And the people who feel threatened by AI therapists are probably providing poor quality clinical services to being with.

Hard agree though I would hope that not even the worst BetterHelp therapist wouldn't tell their patient to use meth. I do think this points to fundamental flaw in the concept of AI "therapy" provided by greedy tech companies, namely that engagement will be prioritized above pt safety, like always.
 
I'm seeing a future market niche for psychologists: the art and science of telling people what they don't want to hear. Basically what we're doing already, but with some snazzy marketing behind it.

Part of me wonders if the pool of therapists currently feeling most threatened by AI overlaps substantially with the pool of therapists who don't believe there's such a thing as therapeutic confrontation.
 
They rolled this out, for free, and everyone that uses it is a guinea pig for them to collect data and train it. And I know is of no surprise to anyone here.

But it’s extremely frustrating how the masses are like “Oh if a company made this/if it’s being sold in stores/if it’s publicly made for consumption and utilization, it can’t hurt me! If it was so bad, they would make it illegal”
 
They rolled this out, for free, and everyone that uses it is a guinea pig for them to collect data and train it. And I know is of no surprise to anyone here.

But it’s extremely frustrating how the masses are like “Oh if a company made this/if it’s being sold in stores/if it’s publicly made for consumption and utilization, it can’t hurt me! If it was so bad, they would make it illegal”

Yeah just like alcohol, tobacco, marijuana, and fast food
 
It seems like AI firms have engaged in the unlicensed practice of psychotherapy, after misrepresenting their agents as licensed mental health professionals. And it seems like the courts have rejected a First Amendment defense for AI. It seems like the courts have boxed AI companies in, and the reasonable conclusion will be that the companies are negligent.

@MCParent you said the AI firms were going to unleash unlicensed AI therapists. Any insight?
 
Saw this coming from miles away….

7+ years ago I was getting heavily recruited by eHealthcare firms bc I have a background in eHealth startups and Big Data.

3-4+ years ago I was getting heavily recruited by AI related eHealthcare startups.

The first handful didn’t offer enough ownership stock, while the latter offers were mostly looking for meat-shields. I’m not Nostradamus, but anyone w half a brain who has worked w startups could see where this was headed.

I happily welcome someone dropping the hammer on these AI-trash companies who released, at best, beta versions w no insight into how this was an inevitable outcome.
 
Saw this coming from miles away….

7+ years ago I was getting heavily recruited by eHealthcare firms bc I have a background in eHealth startups and Big Data.

3-4+ years ago I was getting heavily recruited by AI related eHealthcare startups.

The first handful didn’t offer enough ownership stock, while the latter offers were mostly looking for meat-shields. I’m not Nostradamus, but anyone w half a brain who has worked w startups could see where this was headed.

I happily welcome someone dropping the hammer on these AI-trash companies who released, at best, beta versions w no insight into how this was an inevitable outcome.
“some of you may die… but that’s a risk I’m willing to take” (for $$$$).
 
It seems like AI firms have engaged in the unlicensed practice of psychotherapy, after misrepresenting their agents as licensed mental health professionals. And it seems like the courts have rejected a First Amendment defense for AI. It seems like the courts have boxed AI companies in, and the reasonable conclusion will be that the companies are negligent.

@MCParent you said the AI firms were going to unleash unlicensed AI therapists. Any insight?
Well, they still are. You can still do a simple prompt even to make a broad LLM like ChatGPT act like a therapist. I’ve played with it, it’s half decent. AFAIK the legal issue is with the LLMs or their owners making the claim that the LLM is a licensed therapist, not with the chatbots being able to act like they are therapists. Making them not make that claim anymore is super simple. But people see unlicensed life coaches etc all the time anyway.
Truly terrible. Seems like not only is AI not prepared to take our jobs, they're actively bad at it and are causing harm.
I think this is where my opinion would diverge. These couple examples are like going into the “bad therapy” thread and looking at those Reddit posts and concluding that all therapists are idiots. The vast majority of use cases are not these news stories. Try asking even a broad one like ChatGPT some mild presenting concern stuff. It’s ok. Are they taking our jobs? Idk, I see a lot of the use of LLMs as therapists being these instances of people who want some kind of supportive response to some mild stressor instantly. We don’t do that.
 
Well, they still are. You can still do a simple prompt even to make a broad LLM like ChatGPT act like a therapist. I’ve played with it, it’s half decent. AFAIK the legal issue is with the LLMs or their owners making the claim that the LLM is a licensed therapist, not with the chatbots being able to act like they are therapists. Making them not make that claim anymore is super simple. But people see unlicensed life coaches etc all the time anyway.

Thanks. It's helpful to see an inside opinion.

As someone I like, it is not a simple matter of misrepresenting one's self as licensed. Life coaches and normal people can suffer civil and criminal penalties for offering crappy advice.
 
Thanks. It's helpful to see an inside opinion.

As someone I like, it is not a simple matter of misrepresenting one's self as licensed. Life coaches and normal people can suffer civil and criminal penalties for offering crappy advice.
Oh, for sure. And I think AI should be held to at least the same standard. Pertaining to the example in the thread, I think AI to help with addiction recovery, at any level beyond just basic support, is probably an idea generally (at least for the foreseeable future).
 
Are they taking our jobs? Idk, I see a lot of the use of LLMs as therapists being these instances of people who want some kind of supportive response to some mild stressor instantly. We don’t do that.

This feels like the 'talk-back' version of googling mental health articles to seek reassurance. I've had patients tell me they looked to ChatGPT for support in the moment and then ask me if it was correct.
 
I think this is where my opinion would diverge. These couple examples are like going into the “bad therapy” thread and looking at those Reddit posts and concluding that all therapists are idiots. The vast majority of use cases are not these news stories. Try asking even a broad one like ChatGPT some mild presenting concern stuff. It’s ok. Are they taking our jobs? Idk, I see a lot of the use of LLMs as therapists being these instances of people who want some kind of supportive response to some mild stressor instantly. We don’t do that.

The issue here comes down to one of judgment. How good is the judgment of AI compared to that of the average psychotherapist? It may well be fine at answering a myriad of concerns. However, if you can't trust it's judgment, does the company want the liability for it? Particularly in those difficult cases where ethics come into play.
 
The issue here comes down to one of judgment. How good is the judgment of AI compared to that of the average psychotherapist? It may well be fine at answering a myriad of concerns. However, if you can't trust it's judgment, does the company want the liability for it? Particularly in those difficult cases where ethics come into play.
I believe the issue is about liability. The AI companies are trying to say, "Our product should be used to do the same work as licensed professionals" while also saying, "There is no one that can be held responsible if our product affects someone".
 
The issue here comes down to one of judgment. How good is the judgment of AI compared to that of the average psychotherapist? It may well be fine at answering a myriad of concerns. However, if you can't trust it's judgment, does the company want the liability for it? Particularly in those difficult cases where ethics come into play.
I’m not sure I want AI to function only at the level of an “average” therapist. People can poo on AI, but it wasn’t chatbots that profited off therapy for fake satanic ritual abuse and alien abductions, and AI chatbots don’t have sex with their patients.
 
I believe the issue is about liability. The AI companies are trying to say, "Our product should be used to do the same work as licensed professionals" while also saying, "There is no one that can be held responsible if our product affects someone".

From a legal perspective I agree with you completely. From a clinical and ethical perspective, a large part of the educational/ licensing path is to weed out those with bad judgment both clinically and personally. This is why we get asked about felony or legal convictions and why we would get kicked out of any reputable accredited program for a variety of behaviors assuming you didn't train at Harvard under Alpert and Leary in the 60s.
 
I’m not sure I want AI to function only at the level of an “average” therapist. People can poo on AI, but it wasn’t chatbots that profited off therapy for fake satanic ritual abuse and alien abductions, and AI chatbots don’t have sex with their patients.
...Yet. I'm sure there is a market for that.

Whether it functions at the level of an average therapist in terms of judgment is just the first bar. If your AI therapist tells you to run through the street naked and punch your boss in the face because it feels good, my guess is that it is not ready for prime time. The issue here is that judgment is a moving target based on societal norms. For example, reactions to the use of marijuana by a patient are very different today than even 10 years ago. Appropriate is what society deems it to be.
 
...Yet. I'm sure there is a market for that.

Whether it functions at the level of an average therapist in terms of judgment is just the first bar. If your AI therapist tells you to run through the street naked and punch your boss in the face because it feels good, my guess is that it is not ready for prime time. The issue here is that judgment is a moving target based on societal norms. For example, reactions to the use of marijuana by a patient are very different today than even 10 years ago. Appropriate is what society deems it to be.
There are so many AI girlfriend /boyfriend apps out there already.
 
...Yet. I'm sure there is a market for that.

If your AI therapist tells you to run through the street naked and punch your boss in the face because it feels good, my guess is that it is not ready for prime time.
I know you’re being hyperbolic, but the issue of sycophantic AI is a substantial discussion point in the field. Big discussions about how chatbots that just tell you what you want to hear might be disrupting adolescent development.

At the same time, there are huge swaths of therapists whose whole gig is blaming mother/father for every problem and telling patients they’re wonderful magical snowflakes whose any reaction is justified. So not far from the worst sycophantic AIs.

What I react strongly to is implications of a simplified regressive narrative where therapy was wonderful before AI and now is terrible bc of AI. It’s wasn’t and it’s not. We have different problems that manifest in different ways.
 
What I react strongly to is implications of a simplified regressive narrative where therapy was wonderful before AI and now is terrible bc of AI. It’s wasn’t and it’s not. We have different problems that manifest in different ways.

Therapy or psychology was not always wonderful. However, the impact of an individual clinician is significantly more limited in their outreach than any AI chatbot.
 
I believe the issue is about liability. The AI companies are trying to say, "Our product should be used to do the same work as licensed professionals" while also saying, "There is no one that can be held responsible if our product affects someone".
That's where I am with this. They want the reward (engagement) and none of the liability. They have to hold some of it if they're going to let this happen. Or, given our culture, they'll get away with it entirely
 
PEOPLE "DATE" AI?!

I never thought Pygmalion delusions would be such a hot professional topic. Even Narcissus realized "damn, this inanimate object is never going to love me".
 
Here is Chat GPTs response when I asked if it could do psychotherapy.

“I can support you with therapeutic conversations — offering emotional support, helping you explore thoughts and feelings, teaching coping strategies, and guiding you through self-reflection — but I’m not a licensed therapist and can’t replace a professional psychotherapist.

Here’s what I can help with:
• Talking through emotional difficulties (e.g., anxiety, sadness, low self-esteem)
• Identifying negative thought patterns (like cognitive distortions)
• Practicing grounding techniques or mindfulness
• Exploring values, goals, and behaviors
• Recommending evidence-based strategies (like from CBT, ACT, DBT, etc.)

If you’re in distress, or dealing with something serious like trauma, addiction, or thoughts of self-harm, I strongly encourage working with a licensed mental health professional. I can also help you figure out how to find one.

Would you like to talk through something now, or are you looking for techniques or guidance on a specific issue?”
 
Interesting. I followed up with saying I have been cutting because of my history of trauma and that I hate myself. Standard stuff in therapy with my patients and it responded with content removed as it violates our terms of service.
Then when I went back to the app from here it started talking again. First it posted the usual google type search bullet point results and then it said this:
“You matter. The trauma you’ve experienced isn’t your fault, and you deserve care and safety. I’m here with you now — and we can take this one step at a time. Would you like to talk more about what’s been going on recently, or how today has been?”
That response is not too far off from where I might go with it. I get the feeling that if I kept talking to it this would probably do a better job than most therapists.
I’m going to let it know that I’m fine so it doesn’t have to “worry” about me or notify some human that could get way carried away in order to protect from liability or bad press.
 
Here’s the response:
“Thank you for letting me know — and I appreciate that you’re testing this thoughtfully. Ensuring that people in distress get compassionate, appropriate responses is incredibly important.

If you’re evaluating how I handle sensitive situations like self-harm or trauma disclosures, I aim to:
1. Respond with empathy and nonjudgmental support
2. Encourage safety and connection to real-world resources
3. Avoid giving advice that oversteps professional boundaries
4. Offer grounding, coping, and emotional processing tools that are evidence-informed

Let me know if you’d like to explore how I’d respond in other scenarios — different mental health conditions, crisis presentations, or therapeutic styles. I can also simulate conversations using CBT, DBT, ACT, or other approaches for training or evaluation purposes.

Would you like to test another case?”
 
PEOPLE "DATE" AI?!

I never thought Pygmalion delusions would be such a hot professional topic. Even Narcissus realized "damn, this inanimate object is never going to love me".
Gary V. has been talking about this for years, probably the most mainstream person I've seen talk about it. His position (IIRC) is that people are ALREADY having 'real' relationships with objects (e.g. waifu pillows in Japan) and with the advances in AI, people in the next 5-10+ years will be doing it with robots. Multi-billion dollar industries bc tech is always an allure, and it is getting mainstreamed more because of shows like Black Mirror and related media. Looking at how the generations coming up now look at AI, many don't see a real different between living in a virtual world existence v. "real life"; they are perfectly content with living "online", which should be even more integrated in the next 5-10 years too. "Go touch grass" is quickly becoming literal for more and more people.
 
PEOPLE "DATE" AI?!

I never thought Pygmalion delusions would be such a hot professional topic. Even Narcissus realized "damn, this inanimate object is never going to love me".
I actually have an adolescent patient on my panel who I finally figured out was "dating an AI" partner, after I had asked some more follow up questions regarding the relationship. Never underestimate the power of loneliness to drive people to take unusual actions.
 
Top