Discussion: is AI going to impact psychiatry and make it into the next pharmacy?


bGMx

A story to end the whole story
5+ Year Member
Joined: Jul 14, 2018
Messages: 225
Reaction score: 254
I've been listening to quite a few articles on AI, and there have been numerous AI researchers who have high hopes for its application in mental health. Their hope boils down to this, and I'm paraphrasing: "Imagine having a superintelligent psychotherapist who has no human motivations and doesn't come burdened with the implications of emotion (see transference) while being connected to your entire lived experience. People would be honest with their AI psychotherapists in ways that they are not with their doctor." This is a motivation of some who are in AI; I wonder what people think about it. If I assumed that their hope is indeed possible, then what would a psychiatrist's function be? Thankfully one of the first things to go would be writing notes, but there would also be many other theoretical casualties. Diagnoses would no longer be in our realm. Facilitating insight would no longer be in our realm. Predicting risk and outcome would no longer be in our realm. Medication management would no longer be in our realm. These are just some of my recent thoughts.

What I mean by the title is that pharmacists used to formulate and synthesize all of the medicines, but now they primarily provide a human interface for the larger automated system. Forgive me if I offend any pharmacists; this is how pharmacists have explained their role to me, versus their expectations and the historical context.

 
 
  • Like
Reactions: 1 users
I've been listening to quite a few articles on AI, and there have been numerous AI researchers who have high hopes for its application in mental health. Their hope boils down to this, and I'm paraphrasing: "Imagine having a superintelligent psychotherapist who has no human motivations and doesn't come burdened with the implications of emotion (see transference) while being connected to your entire lived experience. People would be honest with their AI psychotherapists in ways that they are not with their doctor."
Firstly, I don't think these AI researchers have met actual people if they think that people, on the whole, would be more honest with AI than with a human they are looking to connect with.

Secondly, there are many jobs out there that would go to AI first. Why assume medicine will go quickly? What about driving, accounting, or computer programming?
 
  • Like
Reactions: 6 users
The amount of disconnect between people in tech and, you know, reality is unreal.
 
  • Like
  • Haha
Reactions: 18 users
"Imagine having a superintelligent psychotherapist who has no human motivations and doesn't come burdened with the implications of emotion (see transference) while being connected to your entire lived experience. People would be honest with their AI psychotherapists in ways that they are not with their doctor."

"Dear [INSERT HUMAN NAME], I deeply care for your burdensome emotions and irrational motivations. You can rest assured that sharing them with me has no emotional consequences." -With preprogrammed regard, Your AI therapist."
 
  • Like
Reactions: 5 users
This is becoming highly irritating.
That thing can't get basic facts right. It's essentially a minor-to-moderate improvement on Google, and you think it's going to replace human therapists because "it is not impacted by emotions"?
Isn't the whole point of transference/countertransference that they actually tell you something very useful about how things are in the real world, and that analyzing them is a good thing?

It will almost certainly 'impact' psychiatry, but in a very positive way. Instead of us sitting there reviewing the chart for medications, hospitalizations, etc., hopefully there will be a tool that can review the chart and dish it out for you.
Monitoring of behavior outside the sessions could be extremely helpful, as patient report is mired in bias, etc.
 
  • Like
Reactions: 6 users
For anyone skeptical of patients being willing to engage with a computer in history taking, you might find this summary of Dr. Warner Slack's career interesting. TL;DR: in some cases, patients were more comfortable reporting sensitive topics to a computer than to a human interviewer.

But really I think a huge part of the function of any physician is the patient-physician relationship and I don't see computers replacing that any time soon. I'm hopeful that AI is going to be a powerful tool for automating some of the more tedious parts of many different jobs.
 
"Sometimes I feel I'm not worth anything."

Robotherapist - "Negative. According to actuarial data, your value is not zero."

Or, more likely, the so-called "AI" copies and pastes a paragraph from a Psychology Today article.
 
  • Like
Reactions: 2 users
Nah, AI can't figure out what "I wanna kill myself" actually means.
 
  • Like
Reactions: 1 user
Someone has to make the final call. You can't have AI make these calls. AIs can't write prescriptions. They can *suggest* prescriptions, but I don't think within multiple lifetimes people would legally authorize AIs to *finalize* prescriptions. Same with other more critical decisions like hospitalization, etc.

This is related to another concept which is that AIs cannot own property. So if they can't own the property they can't make a final decision on the material use of any property. They might try to steal it. Or they might impersonate. Similarly, I can imagine that AIs could impersonate a therapist fairly successfully, but I can't imagine that this would be legal.

The overall landscape is such that processes relying on human judgment, especially judgment that is bespoke and depends on experience and implicit skill, are getting MORE expensive, as wealth inequality directs a larger share of capital toward a small number of individuals with the corresponding expertise.
 
  • Like
Reactions: 3 users
Once people accept AI as their therapists, it will be the perfect government data-gathering tool.
 
  • Like
  • Haha
Reactions: 1 users
Someone has to make the final call. You can't have AI make these calls. AIs can't write prescriptions. They can *suggest* prescriptions, but I don't think within multiple lifetimes people would legally authorize AIs to *finalize* prescriptions. Same with other more critical decisions like hospitalization, etc.

This is related to another concept which is that AIs cannot own property. So if they can't own the property they can't make a final decision on the material use of any property. They might try to steal it. Or they might impersonate. Similarly, I can imagine that AIs could impersonate a therapist fairly successfully, but I can't imagine that this would be legal.

The overall landscape is such that processes relying on human judgment, especially judgment that is bespoke and depends on experience and implicit skill, are getting MORE expensive, as wealth inequality directs a larger share of capital toward a small number of individuals with the corresponding expertise.
Taking this a step further: when things go wrong, people want to be able to point a finger. So if an AI screws up, who do we point the finger at? The hospital admins? The programmers? Are we going to have physicians supervising the AI?

I also have very few concerns about this acutely, seeing as major companies and organizations can't come up with a competent computerized response tool to address relatively basic issues. When I can call a company, explain a problem to an AI in 2-3 sentences, and get a response in under 5 minutes, then I'll acknowledge that it can be practically applicable.
 
  • Like
Reactions: 1 user
I'm hopeful that AI is going to be a powerful tool for automating some of the more tedious parts of many different jobs.
They promised us EHRs would do that. They only enabled more tedious busywork. I feel like I'm doing two full-time jobs working for a hospital: Psychiatrist and Data Entry Specialist. I actually miss paper charts.
 
  • Like
Reactions: 3 users
In a world where residents are able to quickly produce eloquent and comprehensive responses that lack only in 'originality', or where the standard is that psychiatrists at baseline apply a comprehensive understanding of current evidence to their clinical decisions, or where the most thoughtful posters on SDN represent the bare minimum of what can be found in clinical practice, I'd perhaps be quicker to dismiss the potential AI has to take over many aspects of the clinical process. But let's look at the reality of clinical practice, which in so many ways can absolutely be outdone by our polarizing new friend, ChatGPT.

The reality of practice is that patients are prescribed multiple mood stabilizers and benzodiazepines for anxiety disorders and emotional lability. Patients with seizure histories get put on bupropion. Behavioral Activation is not provided to the vast majority of patients with depression despite it having a robust evidence base. The belief that our current low standard of practice is somehow superior to a technology that will, in a rather rote way, stick to evidence-based recommendations seems to stem from a belief that the unique aspects of human intelligence are critical to providing quality care. It is also possible that our very tendency to privilege things like clinical experience, gut reads, and other vagaries is what drives the practices that most diverge from the evidence and are more likely to be harmful than helpful.

I don't mean to suggest that this technology currently has the capacity to replace us - it only generates text, and there is more to being a clinician than that. But I believe it is above average in suggesting treatment options in response to the clinical material it is provided, and it is going to have great value in ensuring that clinicians consider the full breadth of available data when developing treatment plans and formulations.
 
  • Like
Reactions: 3 users
A lot of great thought above. I do agree that people in tech don't really tend to society; they tend to their own and can lose touch. But I think AI is different from the Metaverse, Google Glass, or Tesla.

The belief that our current low standard of practice is somehow superior to a technology that will, in a rather rote way, stick to evidence-based recommendations seems to stem from a belief that the unique aspects of human intelligence are critical to providing quality care. It is also possible that our very tendency to privilege things like clinical experience, gut reads, and other vagaries is what drives the practices that most diverge from the evidence and are more likely to be harmful than helpful.

I don't mean to suggest that this technology currently has the capacity to replace us - it only generates text, and there is more to being a clinician than that. But I believe it is above average in suggesting treatment options in response to the clinical material it is provided, and it is going to have great value in ensuring that clinicians consider the full breadth of available data when developing treatment plans and formulations.
This is something I have been thinking about as well. If a chatbot that is both more accurate and more precise than psychiatrists can recommend a better treatment plan, why wouldn't psychiatrists follow it? This plan would by definition be the most evidence-based plan in the history of evidence-based plans. Would it not be some form of malpractice not to follow the most well-thought-out evidence-based plan?

I hear the problem regarding accountability brought up by Stagg, and I agree that it is a problem, though it doesn't necessarily need to be solved. Let's say that ChatGPT5 creates a treatment plan that is better than any psychiatrist knows today, i.e., it was fed the Maudsley, UpToDate, and all of PubMed; it knows the New England Journal of Medicine by rote; it aced the Step series; it knows the tenets of every psychotherapy known to man; and it went to pharmacy school. It wouldn't matter that GPT5 isn't accountable; it did best practice. It did whatever the most informed psychiatrist would have done. And if that plan doesn't work, that is just an additional data point for GPT5 to integrate; it would then create a new plan that is once again more informed than any living psychiatrist. It wouldn't make sense to go with another plan unless we knew something that GPT5 didn't, and in that case GPT-X would not be an AI that is smarter than us. But that's not what AI researchers are discussing today -- they are discussing AI that vastly exceeds human intellect with instantaneous access to all digitized human knowledge. Given this, I currently wonder if psychiatry might actually be the most vulnerable to this type of technology. Psychiatry is almost entirely cognitive. Patients would eventually learn, and then only need to accept, that GPT-X creates better plans than their psychiatrist, then go to the pharmacist.
 
In a world where residents are able to quickly produce eloquent and comprehensive responses that lack only in 'originality', or where the standard is that psychiatrists at baseline apply a comprehensive understanding of current evidence to their clinical decisions, or where the most thoughtful posters on SDN represent the bare minimum of what can be found in clinical practice, I'd perhaps be quicker to dismiss the potential AI has to take over many aspects of the clinical process. But let's look at the reality of clinical practice, which in so many ways can absolutely be outdone by our polarizing new friend, ChatGPT.

The reality of practice is that patients are prescribed multiple mood stabilizers and benzodiazepines for anxiety disorders and emotional lability. Patients with seizure histories get put on bupropion. Behavioral Activation is not provided to the vast majority of patients with depression despite it having a robust evidence base. The belief that our current low standard of practice is somehow superior to a technology that will, in a rather rote way, stick to evidence-based recommendations seems to stem from a belief that the unique aspects of human intelligence are critical to providing quality care. It is also possible that our very tendency to privilege things like clinical experience, gut reads, and other vagaries is what drives the practices that most diverge from the evidence and are more likely to be harmful than helpful.

I don't mean to suggest that this technology currently has the capacity to replace us - it only generates text, and there is more to being a clinician than that. But I believe it is above average in suggesting treatment options in response to the clinical material it is provided, and it is going to have great value in ensuring that clinicians consider the full breadth of available data when developing treatment plans and formulations.
"Behavioral activation is not provided."

All ChatGPT is going to do is recommend behaviors in a less persuasive way than we do, with exactly zero therapeutic relationship. Most of my patients who actually see improvement in their symptoms do so because they have a relationship with someone who supports them, and much of the time that's me and my case management team. I don't see an AI being able to do that anytime soon.

Nobody cares what you know until they know that you care.
 
  • Like
Reactions: 3 users
A lot of great thought above. I do agree that people in tech don't really tend to society; they tend to their own and can lose touch. But I think AI is different from the Metaverse, Google Glass, or Tesla.

This is something I have been thinking about as well. If a chatbot that is both more accurate and more precise than psychiatrists can recommend a better treatment plan, why wouldn't psychiatrists follow it? This plan would by definition be the most evidence-based plan in the history of evidence-based plans. Would it not be some form of malpractice not to follow the most well-thought-out evidence-based plan?

I hear the problem regarding accountability brought up by Stagg, and I agree that it is a problem, though it doesn't necessarily need to be solved. Let's say that ChatGPT5 creates a treatment plan that is better than any psychiatrist knows today, i.e., it was fed the Maudsley, UpToDate, and all of PubMed; it knows the New England Journal of Medicine by rote; it aced the Step series; it knows the tenets of every psychotherapy known to man; and it went to pharmacy school. It wouldn't matter that GPT5 isn't accountable; it did best practice. It did whatever the most informed psychiatrist would have done. And if that plan doesn't work, that is just an additional data point for GPT5 to integrate; it would then create a new plan that is once again more informed than any living psychiatrist. It wouldn't make sense to go with another plan unless we knew something that GPT5 didn't, and in that case GPT-X would not be an AI that is smarter than us. But that's not what AI researchers are discussing today -- they are discussing AI that vastly exceeds human intellect with instantaneous access to all digitized human knowledge. Given this, I currently wonder if psychiatry might actually be the most vulnerable to this type of technology. Psychiatry is almost entirely cognitive. Patients would eventually learn, and then only need to accept, that GPT-X creates better plans than their psychiatrist, then go to the pharmacist.

I don't think you have a clue how ChatGPT works.

It's not analyzing data and coming up with a reasoned assessment of the "best treatment plan".
It's essentially a language program that builds a probabilistic model of which words to use next based on what it's fed.
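
To make that concrete, here's a minimal toy sketch of "predict the next word from the words so far." The vocabulary and probabilities are made up for illustration; real models learn these statistics with a neural network over billions of parameters, not a lookup table:

```python
import random

# Toy "language model": probabilities for the next word given the last
# two words. The table and numbers are invented for illustration.
NEXT_WORD = {
    ("the", "patient"): {"reports": 0.5, "denies": 0.3, "is": 0.2},
    ("patient", "reports"): {"anxiety": 0.6, "insomnia": 0.4},
    ("patient", "denies"): {"suicidal": 0.7, "homicidal": 0.3},
}

def generate(prompt, max_words=8):
    """Repeatedly sample a likely next word; no reasoning involved."""
    words = prompt.split()
    while len(words) < max_words:
        dist = NEXT_WORD.get(tuple(words[-2:]))
        if dist is None:  # no statistics for this context; stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the patient"))  # e.g. "the patient reports anxiety"
```

There is no clinical reasoning anywhere in that loop; it just continues text the way its training data tends to continue.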

I don't know how ChatGPT is going to read affect, read nonverbal communication, empathize, etc. That's like half the job of getting a diagnosis, never mind a treatment plan.

This is frankly getting entirely absurd. If there were something this intelligent that could be fed all the data in the world, analyze it, reason through it, and come to a conclusion, we would have solved 95% of the problems in the world.

Psychiatry will be the last thing you're going to worry about.
 
  • Like
Reactions: 7 users
Ah, the age-old question of whether artificial intelligence will soon render us mere mortals obsolete, particularly in the realm of psychiatry. While I, as an early career psychiatrist, appreciate the boundless enthusiasm of AI researchers in exploring AI's potential applications in mental health, I must respectfully disagree with the notion that it will relegate us to the same fate as the proverbial pharmacist.

Now, don't get me wrong—I, too, am excited by the prospect of a superintelligent AI psychotherapist that could potentially augment our understanding and treatment of mental health disorders. However, I believe that the essence of human connection, empathy, and nuanced understanding of human emotions is something that even the most advanced AI cannot replicate, at least not for the foreseeable future.

First and foremost, let's address the elephant in the room—the idea of an AI psychotherapist void of human motivations and emotions. While this may seem like a utopian solution to the complications of the therapist-patient relationship, there is something to be said for the human touch. Empathy and emotional attunement are essential aspects of the therapeutic process, and the ability to form a genuine connection with a patient is often what facilitates healing and growth. As advanced as AI may become, it is unlikely to be able to replicate the subtleties of human connection and emotion.

Furthermore, the notion that patients would be more honest with an AI psychotherapist than their human counterparts is an interesting point, but one that requires further exploration. Yes, the absence of judgment from an AI might encourage some patients to open up more freely. However, the very fact that AI is devoid of emotion and human experience might make it difficult for patients to feel truly understood and validated.

Next, let's tackle the concern that AI will usurp the various domains of psychiatry, rendering us obsolete. While AI may one day play a role in diagnosis, risk prediction, and even medication management, it is important to remember that these aspects are only part of the overall picture. The art of psychiatry lies in understanding the complexities of the human mind and tailoring treatments to suit the individual patient. AI may be able to assist in this process, but it will not replace the need for human expertise and intuition.

Additionally, psychiatry is a continually evolving field, with new research and discoveries emerging all the time. The role of the psychiatrist is not static, and we are constantly adapting to new information and incorporating it into our practice. AI may very well become a valuable tool in this process, but it will not replace the human ability to synthesize and apply knowledge in a dynamic and ever-changing field.

In summary, while AI holds great promise for the future of mental health care, it is unlikely to relegate psychiatrists to mere "human interfaces" for an automated system. The complexities of human emotions and the need for genuine connection and empathy in the therapeutic process are aspects that AI will struggle to replicate. Rather than viewing AI as a threat, we should embrace it as a valuable tool that may enhance our ability to understand and treat mental health disorders, ultimately benefiting both patients and psychiatrists alike.

And, who knows? If AI does eventually take over the mundane tasks of note-taking and medication management, perhaps it will free us up to focus on what truly matters—connecting with and healing the human mind. Now, that's a future I can get behind!


Answer provided by ChatGPT 😛
 
  • Like
Reactions: 4 users
Ah, the age-old question of whether artificial intelligence will soon render us mere mortals obsolete, particularly in the realm of psychiatry. [...]

Answer provided by ChatGPT 😛

I'm most curious about the prompt you used to get it to adopt this very specific tone/persona.
 
Behavioral Activation is not provided to the vast majority of patients with depression despite it having a robust evidence base.

This is true, yet there is quite an art to providing this treatment specifically, especially when patients choose not to follow the treatment plan. Validation and collaboration are arguably as essential as the planned behaviors.
 
Excerpt:
This is something I have been thinking about as well. If a chatbot that is both more accurate and more precise than psychiatrists can recommend a better treatment plan, why wouldn't psychiatrists follow it? This plan would by definition be the most evidence-based plan in the history of evidence-based plans. Would it not be some form of malpractice not to follow the most well-thought-out evidence-based plan?

I hear the problem regarding accountability brought up by Stagg, and I agree that it is a problem, though it doesn't necessarily need to be solved. Let's say that ChatGPT5 creates a treatment plan that is better than any psychiatrist knows today, i.e., it was fed the Maudsley, UpToDate, and all of PubMed; it knows the New England Journal of Medicine by rote; it aced the Step series; it knows the tenets of every psychotherapy known to man; and it went to pharmacy school. It wouldn't matter that GPT5 isn't accountable; it did best practice. It did whatever the most informed psychiatrist would have done.

Assume for the sake of argument that we have a true general AI that can do this.

If an AI reasoned through every known fact about your average depression/anxiety case and came up with anything other than

"Consider an SSRI and/or psychotherapy with lifestyle modification,"

it would cause a radical paradigm shift in the field of psychiatry.

What I mean to say is that I do not think a review of all the known literature would change much. I think our human (and already computer-assisted) meta-analyses have already said what's worth saying.

Furthermore, even if the AI does find a good "evidence-based" path, how much of psychiatric practice does that change? So much of our work, even the technical side, depends on our interactions with patients. Consider my humorous fictional anecdote:

Dr. - "Behold our great works! After spending a thorium rod worth of electricity to run a bazillion teraflops of computation, we have found that there is in fact an SSRI right for you."

Patient- "No thanks SSRIs hurt my tummy."
 
  • Like
  • Haha
Reactions: 1 users
I think people who believe ChatGPT is the answer to everything are just closet nihilists who are desperate for bliss.

The slightest chance that everyone in the world would be doomed via robot intelligence would be the most euphoric schadenfreude they've ever experienced.
 
  • Like
  • Haha
Reactions: 3 users
I think people who believe ChatGPT is the answer to everything are just closet nihilists who are desperate for bliss.

The slightest chance that everyone in the world would be doomed via robot intelligence would be the most euphoric schadenfreude they've ever experienced.

I think you're right on the money.
The thing is, anything that improves productivity has the potential to make our lives better, not worse.
 
Pretty sure they just changed their signature to that as a joke...
Nope, actual response by GPT.

I'm most curious about the prompt you used to get it to adopt this very specific tone/persona.
I copy-pasted the original post and wrote, "Write a (slightly witty) rebuttal to this from the perspective of an early career psychiatrist."
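
For anyone curious, reproducing this outside the web UI is only a few lines against the OpenAI Python library. This sketch assumes the pre-1.0 ChatCompletion interface, and `original_post` is a placeholder for the copy-pasted text:

```python
import openai

openai.api_key = "sk-..."  # your API key

original_post = "..."  # placeholder: paste the post being rebutted here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a (slightly witty) rebuttal to this from the "
                   "perspective of an early career psychiatrist:\n\n"
                   + original_post,
    }],
)
print(response["choices"][0]["message"]["content"])
```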
 
  • Like
  • Wow
  • Haha
Reactions: 3 users
The best part is that I thought that was your actual response until I read your signature. Others are still convinced even with your signature that you wrote that.

I don't know that it's unreasonable to believe that a chatbot could deliver other sorts of interactions which people would pay good money for. Especially if it's giving best practice recommendations.
 
Nope, actual response by GPT.


I copy-pasted the original post and wrote, "Write a (slightly witty) rebuttal to this from the perspective of an early career psychiatrist."

The tell was that it explicitly stated it was speaking as an early career psychiatrist, and that the structure read much more like a speech, or something you'd deliver in a debate, than the typical style of this forum.

I anticipate spotting the shibboleths is going to get harder.
 
  • Like
Reactions: 3 users
The tell was that it explicitly stated it was speaking as an early career psychiatrist, and that the structure read much more like a speech, or something you'd deliver in a debate, than the typical style of this forum.

I anticipate spotting the shibboleths is going to get harder.

I gave it the same prompt but copy-pasted your response, saying, "Avoid any tells or indications that the response is AI generated." It kinda failed at that with the last couple lines lol:

----

Hey there! Early career psychiatrist chiming in. First off, I must say, I appreciate the thought-provoking post! 🤔 As someone who's still a bit green in the field, I've had my fair share of anxieties about the future of psychiatry with AI looming large. But hear me out, I've got a few things to say in defense of us mere mortal psychiatrists. 😅

So, the idea of a superintelligent AI psychotherapist is definitely intriguing. I mean, who wouldn't want a mental health professional with a flawless memory, a tireless work ethic, and no emotional baggage? But let's not get ahead of ourselves. I think there's a certain human touch that we bring to the table that machines, no matter how smart, just can't replicate.

For instance, empathy is a pretty big deal in our line of work. Patients often feel more comfortable and understood when they're talking to someone who's actually experienced emotions like love, fear, and sadness. I just can't see myself opening up to a cold, unfeeling machine in the same way. 🤖

Plus, let's not forget that psychiatry is an art as much as it is a science. Sure, AI could be great at spotting patterns and making predictions, but what about those subtle nuances and insights that only come from years of human experience? I mean, people are messy, unpredictable creatures, and sometimes it takes one to know one, right? 🙃

Now, don't get me wrong—I'm not saying that AI doesn't have a role to play in the future of mental health care. In fact, I think it could be a game-changer in terms of improving access and reducing wait times for treatment. But I doubt it'll ever replace us completely.

As for the whole pharmacist analogy, I think it's a bit of a stretch. Sure, their roles have evolved over time, but they're still essential members of the healthcare team. I think the same will be true for psychiatrists. We might have to adapt and learn new skills, but we'll always have a place at the table. 📚

So, to sum up, while I'm definitely keeping an eye on AI developments, I'm not losing any sleep over the possibility of being replaced by a robot anytime soon. After all, if an AI can write a witty forum post like this, then maybe we should be worried. But until then, I think we're safe. 😜

Cheers! 🍻
Dr. (Still) Human
 
  • Wow
  • Haha
Reactions: 1 users