OpenAI Says It's Hired a Forensic Psychiatrist as Its Users Keep Sliding Into Mental Health Crises

🧐

Is anyone really surprised by this? At the same time, I’m more curious about this story:

“Adults are vulnerable to this sycophancy, too. A 35-year-old man with a history of mental illness recently died by suicide by cop after ChatGPT encouraged him to assassinate Sam Altman in retaliation for supposedly killing his lover trapped in the chatbot.”

ETA: just read the article I linked and holy s*** was it easy for that guy to jailbreak it and have it tell him some scary stuff…
 
I'm surprised a forensic psychiatrist would take a position like this. The cynical part of me feels that this is OpenAI's way of preparing a sacrificial lamb for the inevitable lawsuit that's going to stem from an incident like the one described.
I knew exactly who it was when I saw this. He won't be a "sacrificial lamb" and isn't technically a forensic psychiatrist. I don't see any reason why someone wouldn't take a position like this. Great work for those who can get it. From the company's perspective, they can show they are taking steps to address safety concerns. It's very low liability to do this sort of work vs direct patient care.
 
Star Trek plot lines, across its myriad series runs, offer quite a bit of foreshadowing of what might come.

You can't stop people from making poor choices through over-integration with electronic platforms, be it ChatGPT, video games, online gambling, "doom scrolling," binge streaming, etc., or even things yet to be made, as seen in Star Trek.

Nor should a company be held liable. We must not stray from personal responsibility. Doing so enrolls us collectively into the famed Victim Olympics.
 
Quite a poetic story.

I think it is a great idea to contract with a professional. These machines are far more powerful than we think, and this problem will only grow if unfettered.
 
I'm surprised a forensic psychiatrist would take a position like this. The cynical part of me feels that this is OpenAI's way of preparing a sacrificial lamb for the inevitable lawsuit that's going to stem from an incident like the one described.
Why? It sounds like it's mostly a research position, and they would maybe make some recommendations on how to make the programs "safer." If the psychiatrist isn't the one creating policies or writing programs, why would they be individually liable? Seems like a great way to cash in on "research" while building a name for oneself as an "expert" in an up-and-coming area of MH.
 
Nor should a company be held liable. We must not stray from personal responsibility. Doing so enrolls us collectively into the famed Victim Olympics.
I don't think this is mutually exclusive; that's one-or-the-other type of splitting. Why not both personal responsibility from the individual and corporate responsibility from the company as well?
 
Because it then allows politics to be bent toward corporate destruction. One need only look at the 2nd Amendment and liberal attacks on advertising, magazine capacity, etc.
 
Because it then allows politics to be bent toward corporate destruction. One need only look at the 2nd Amendment and liberal attacks on advertising, magazine capacity, etc.
I think you're sidestepping my main point, which is that responsibility can be shared. You then give a slippery slope argument that corporate responsibility will inevitably lead to politically motivated overreach and the destruction of businesses. Holding a company liable for its role in harmful outcomes doesn't automatically mean political weaponization. We already do this with lots of companies that continue to exist (tobacco, alcohol, prescription medications, etc.) despite holding them liable for the harms associated with their products. It's part of a functioning civil society.

For AI, I don't want it to go the way of Purdue Pharma, where the company misleads the public about how safe its product is while doctors overprescribe (both can co-exist and both should be held responsible). It makes sense that they are bringing someone on to study this.
 
That share is minimal for companies. Higher for people. To distill it down to a more refined answer, I posit 80-90% or more for the individual. Greater benefit of the doubt should go to the company.

Interacting with another person can be harmful. People can lie, cheat, etc. When interacting with something like AI, where the goal of all these companies is to make it smarter to the point that it could match or rival human intellect, it stands to reason one should be prepared to be made a fool of, just as (sadly) can happen when we interact with our fellow humans.

The 2nd Amendment example is an excellent one, only labeled a slippery slope when one is Left-leaning, but it most definitely isn't for those who understand and value this amazing right and see the routine legislative attacks on it.

But the concept of harm from company products or services is just ripe for overreach:
*I recently entered a vehicle, piping hot from the summer sun, and my exposed skin made contact with the metal seat belt. Hot enough to possibly burn skin. Could be a lawsuit. Why wasn't I warned? Why isn't there a protective coating on that part to dissipate heat better?
*I recently recreated at a dock on a lake furnished by a government entity for general use, which includes swimming. My bare feet could have burned had I not jumped in the water with haste. Why aren't there disclaimers about dock temperature levels? Could be a lawsuit.
*Homeless encampments can have people walking around nude or exposing genitalia. This isn't a known nudist colony with appropriately labeled signage. I could be psychologically scarred by such sights, develop PTSD, and have a lawsuit against the city for harm to my mental health.
*Restaurant takeout containers with the little metal handle, often seen at Chinese restaurants, could cause a fire if placed in a microwave since they contain metal. Why isn't the box labeled "Do Not Microwave"? Could be a lawsuit.

The list goes on and on, and framing excessive human interaction with AI as the company's fault is a poor reflection on us as humans and on embracing personal responsibility and what it means to live. To make mistakes. To learn. The human condition. Personal responsibility is paramount here, and shirking this very beautiful responsibility is just plain wrong.

Bringing on a psychiatrist to study this AI interaction is just damage-control virtue signaling. The real solution is the coding engineers trying to undo it behind the scenes, and they might not be able to.
 
I'm not trying to blame companies, nor am I trying to say we should abandon personal responsibility. I just think we need to look at each specific case and see what proportion of responsibility each party has in influencing the other. Companies are often more powerful than individual people, so there's an asymmetry there, which is ripe for abuse. I don't buy the argument that companies should never be held liable.

Those are all edge cases, not serious examples of product lawsuits. Courts throw out those frivolous lawsuits. Fringe abuses don't negate corporate responsibility.

I do think that arguing that anything is virtue signaling is lazy, a conversation stopper. It's dismissive rather than engaging with the substance/content/logic. Going along with your fire/burning examples, if a building catches fire and someone installs sprinklers, you wouldn’t say they’re virtue signaling. You’d say they’re responding to risk. Same with bringing on a psychiatrist when they find that AI is involved in causing/worsening psychosis or murders or suicides.

Coding engineers fixing everything behind the scenes is unlikely. There are too many profit incentives for the company to get people addicted, hooked, engaged. The incentives aren't aligned with the public interest. Engineers also aren't experts in mental health and human behavior. Bringing on a psychiatrist could end up being performative rather than integrating our insight as psychiatrists into developing these tools, but I hope that's not the goal.

What we get out of this discussion, of course, depends on our political persuasion. Politics begins when reasonable people can disagree on a given subject. My intention isn't to attack you but rather to have a polite discussion.
 
As libertarian as I tend to lean, I do think the government has a role to play in protecting us from companies and potentially harmful technology. The worst case is when large corporations and the government are too closely aligned. As a society, we still haven’t figured out how to manage the challenge of kids and smartphones, so it’s a little scary when newer technology potentially makes that even more problematic.
 
What we get out of this discussion, of course, depends on our political persuasion. Politics begins when reasonable people can disagree on a given subject. My intention isn't to attack you but rather to have a polite discussion.
You weren't trying to have a polite discussion when you negated my point to that of a slippery slope, a type of logical fallacy.
 
As libertarian as I tend to lean, I do think the government has a role to play in protecting us from companies and potentially harmful technology. The worst case is when large corporations and the government are too closely aligned. As a society, we still haven’t figured out how to manage the challenge of kids and smartphones, so it’s a little scary when newer technology potentially makes that even more problematic.
Negative externalities are the biggest handwavy fatal flaw of any rigidly held minarchist/ancap political value/belief system. As long as large corporations continue to benefit from significant regulatory capture, which includes all of the extraordinary protections afforded to corporations that aren't afforded to individual persons, there's an equal role for government regulation to rein in undue concentrations of economic (which is also political) power. There are innumerable examples of large corporations doing everything they can to hide their negative externalities.

Car companies continually make cars safer because accidents happen. And I think "accident" is an appropriate word for someone with budding mania/psychosis/suicidality prompting an LLM into encouraging their pathological ideations. Talking someone into killing themselves is no more an intended function of a commercial LLM than failing brakes are of a car. There's nothing wrong with LLM companies making their commercial products safer. If you want to go through the effort of self-hosting an uncensored model, nothing is stopping you (other than the funds for a computer with a good amount of RAM and VRAM).
 
As libertarian as I tend to lean, I do think the government has a role to play in protecting us from companies and potentially harmful technology. The worst case is when large corporations and the government are too closely aligned. As a society, we still haven’t figured out how to manage the challenge of kids and smartphones, so it’s a little scary when newer technology potentially makes that even more problematic.
Idk, the solution of “don’t let them have them until they’re older” seems to work pretty well for most of our friends with older kids.
 
Idk, the solution of “don’t let them have them until they’re older” seems to work pretty well for most of our friends with older kids.
That is consistent with the studies that show fewer negative effects on higher-SES kids. You are correct, we just need to shift the distribution. That is a lot of what good public health policy is, I would think.
 
Negative externalities are the biggest handwavy fatal flaw of any rigidly held minarchist/ancap political value/belief system. As long as large corporations continue to benefit from significant regulatory capture, which includes all of the extraordinary protections afforded to corporations that aren't afforded to individual persons, there's an equal role for government regulation to rein in undue concentrations of economic (which is also political) power. There are innumerable examples of large corporations doing everything they can to hide their negative externalities.

Car companies continually make cars safer because accidents happen. And I think "accident" is an appropriate word for someone with budding mania/psychosis/suicidality prompting an LLM into encouraging their pathological ideations. Talking someone into killing themselves is no more an intended function of a commercial LLM than failing brakes are of a car. There's nothing wrong with LLM companies making their commercial products safer. If you want to go through the effort of self-hosting an uncensored model, nothing is stopping you (other than the funds for a computer with a good amount of RAM and VRAM).
This is so, so true. What's particularly troublesome is that the punishment is always less than what the crime earns when it comes to white-collar/corporate malfeasance. The Sacklers can intentionally and repeatedly cause undue harm to our population writ large and then give up only a relatively small portion of the proceeds to avoid any criminal charges. And that's for something every person in the US has heard of (the opioid crisis).

We set up a system where the parking meters cost $10 an hour but the penalty for a ticket is $5. The math makes it so any reasonably acting corporation is encouraged to lie, cheat, and ignore negative externalities. This is not a right/left issue; it's bad math and bad deterrence policy, which people from both the right and the left should understand.
 
That is consistent with the studies that show fewer negative effects on higher-SES kids. You are correct, we just need to shift the distribution. That is a lot of what good public health policy is, I would think.
Yup, it's clear which schools can afford to be Yonder phone-free schools. I know I would certainly pay extra to make sure my kid grew up in such an environment.
 
Same with bringing on a psychiatrist when they find that AI is involved in causing/worsening psychosis or murders or suicides.

Coding engineers fixing everything behind the scenes is unlikely. There are too many profit incentives for the company to get people addicted, hooked, engaged.
It shouldn't be lost that it would be good business if one could thread the needle of making AI safer (at least, less associated with psychosis and SI/HI/actions) AND more addicting/engaging. Who's to say that isn't the goal of bringing a psychiatrist on board? The profit incentive is certainly part of it, not that they actually care. So why would we denounce a free-market company making a hire it sees fit for profit, if we are all about this kind of freedom?

Disneyland hires folks to make amusement park rides BOTH extremely safe AND as thrilling as possible. That's business.
 
"It just increasingly affirms your bull**** and blows smoke up your ass so that it can get you f*cking hooked on wanting to engage with it," she said.

Wow, it's great news to know that AI can meet the standard of care practiced by the average therapist.
 