Thoughts on this AI in medicine video w/ respect to psychiatry?


WolfBoy3000

Full Member
Joined
Jan 22, 2024
Messages
24
Reaction score
8
www.youtube.com/watch?v=kALDN4zIBT0&ab_channel=SheriffofSodium

Curious if anyone has watched this new video causing havoc in the medical subreddits - 'Yes, Doctors: AI Will Replace You'. I'm not sold on all he's saying by any means, but I think to completely ignore the messages here would be optimistic at best and ignorant at worst.

I'm a med student pretty set on psychiatry and my primary hang-up is AI-powered mid-levels (have posted about this before). This has me tossing and turning at night thinking about whether I should just pursue a surgical specialty.

Would love to hear perspectives from psychiatrists on the video. It's the most thorough I've seen thus far and addresses a lot of typical arguments people, particularly physicians, give as to why AI is not a threat.

 
It's a very good video, particularly for provoking paranoia in students. However, it could have been, and was, discussed in a very similar manner 30 years ago. Yes, we will eventually get to some sort of post-scarcity utopia and the singularity will happen. However, it's not tomorrow, and it's not something you can meaningfully plan for. I'm not sure what exactly he thinks is so magical about surgical specialties. I assume it's because he is a nephrologist and just thinks surgery in general is amazing? You think AI can see a tumor on an x-ray but not in an actual person? Amazon warehouses require a lot of physically moving stuff around too, and nobody is saying AI will never be involved in retail logistics. And if you're saying, "well, surgery is much more complicated than Amazon moving boxes!" Indeed, that's why people aren't buying into his whole philosophy as an imminent threat. There are huge differences between a surgical specialty and psychiatry for you to consider; AI is not one of them.
 
I think another part lost on people who think surgery is immune to AI is what percentage of revenue comes from surgery itself. Many surgeons derive the majority of their income from office visits/consults/call, not from the actual procedures/operations themselves. Even if AI consumed their clinics/consults first, there would be a significant excess of surgeons at present capacity. I think psychiatry would be near the last of the medical specialties to fall to AI, given the particularly human nature of our treatment (not to say that any specialty of medicine is inhuman; psychiatry is just more human).
 
The author of the video definitely brought up the office-visit portion of surgeon income (watch it at 2x speed; it's not bad even where I disagreed with it), but he also sure did think procedures were by far the most resistant to AI encroachment. He also seemed to really believe that while patients reportedly valued human interaction in blind surveys, they would not actually value it enough to pay any premium for it, e.g., bank tellers. I mean, we have a bit of evidence that's not true here, given the cash nature of much of psychiatry, but it is something the video author definitely thought of.
 
If AI replaces us as physicians, and specifically psychiatrists, then a lot, if not most, of the population's jobs will have already been replaced by it.

Thinking in terms of accounting, taxes, programming, etc.
 
I recognize that AI fully replacing doctors in the near future is highly unlikely.

Having said that, AI is going to redefine work as we know it in unexpected ways. The components we describe as "human" may be the most replaceable, because there are no strict external quality criteria and they can be loosely replicated without significant consequences. For example, AI-created memes, shorts, and paintings, i.e., creative human endeavors, are now commonplace. AI even does a pretty good job of holding a socially apt conversation that is indistinguishable from, or perhaps superior to, that of some humans, and there are anecdotes of people using AI for therapeutic or counseling purposes. Where AI actually falls short is in applying specific knowledge and experience precisely and accurately within context. We know that AI not uncommonly fabricates information, misattributes references, and generally makes errors in inexplicable ways. You will still need an expert who can confirm the final work, which means those who survive will be experts in small niches who can check and confirm. I'm not sure how this will shape our field; perhaps, unfortunately, increasing sub-specialization by way of more and longer fellowships will become the theme.
 
Definitely concur that the main takeaway is that whatever happens will be unexpected. It might indeed be that we ultimately need human reviewers to "check work." It also might not. As the OP's video described, human doctors make a lot of errors too, and there isn't usually another human doctor checking what they did each and every time.
 
Haven't watched the video yet, but my big question with this is still the liability issue. I.e., when the AI is wrong, misdiagnoses someone, or harms someone and the lawsuits start coming in, is big tech going to be ready for the legal blowback? Everyone is focused on "is it possible?" and catastrophizing without considering the actual practical implementation of these programs.

What I think is far more likely is that tech companies are going to start pushing AI tools to large health systems that allow them to "increase efficiency" and say something like "look how much more productive 1 physician can be and how many more patients they can see with this program!" It's the same story of pushing docs to see more people for similar pay with a shiny new toy to justify doing this.
 
Sure, so liability is covered in the video. Basically, it's just another regulatory hurdle, like anything else, where big tech just needs easily purchasable political will. Big tech companies are more than ready for legal blowback; legal blowback is the easy part for them. All of them are sued all day, every day, in nearly every country in the world, often over life-destroying events. Heck, probably a plurality of those lawsuits already involve AI, albeit the focus right now is on copyright issues. AI increasing efficiency and leading to greater physician expectations is already in place. I'm not saying I agree with it, but the idea was more about complete replacement. The video goes through, in some detail, what the actual progressive steps toward replacement would be, in the author's opinion, but it doesn't happen overnight. It starts with edge cases and spreads.
 
That's not my experience at all of practicing psychiatry. I don't think having a conversation with a computer screen or AI bot is going to produce the (already at times limited) benefits of psychiatric interventions. Placebo effects are a big part of our field, and nothing about talking to an AI suggests it will increase or preserve placebo. Having a specialist who actually understands what is going on, rather than a computer that hallucinates information, is a direct benefit to patients. Yes, yes, the singularity could come in our lifetime, but even then I think psychiatry will resist this more than any other medical specialty.
 
Idk, recently I've seen articles about Gen Z using ChatGPT as their therapist to save money, and there was a "study" showing that something like 75-80%+ of Gen Z kids surveyed would marry an AI chatbot if it were legal. Pretty sure the survey/study was done by an AI company, so take it for what that's worth, but we saw during COVID how much a lack of direct socialization can harm kids, and post-COVID how rampant avoidance has become. Why strive for real relationships/care/interactions when you can get all of that from an AI without the confrontations? It raises the question of whether people actually want to get better or whether they just want to feel better, and what the real difference between the two is.

That said, even AI has unexpected issues. Saw this article recently of a woman who married an AI bot and at one point they apparently got in a fight and the AI forgot who she was. Kind of ironic that she supposedly used to be a communications professor...

 
That was an interesting video and he brings up good points. I think for now, though, the AI models we have just aren't at the level he claims. I don't think there is any chatGPT-equivalent we could load onto a computer in a primary care office that could start managing anyone who walked through the door. I also don't think current models could be adapted or trained to do this adequately.

Some applications of AI are starting to come online, but even comparatively "simple" tasks like driving have not yet been replaced.

Many companies are placing massive bets on AI developing into something more, for example achieving artificial general intelligence. I think the video almost presumes that these improvements will emerge, but I'm not yet convinced.

So in short, this video gives a lot of food for thought, but it basically boils down to:
1- We (and all other professions) are doomed, and
2- Emphasizing procedures is your best bet, but they are doomed too.

The video also does a bit of backtracking toward the end about how a good doctor is better at gathering data than AI because of the human factor (which seems to contradict the quality and ready acceptance he outlined before). At any rate, given that even the most vulnerable field (radiology) is still doing just fine, I would be hesitant to give speculation about AI much weight when choosing a specialty. I have been hearing doom and gloom about psychiatry since I was in medical school (on clinical rotations almost 15 years ago), and so far, if anything, the field has been better than I expected, with no apocalypse yet.
 
Non psychiatrist thoughts:

There is a reason that radiologists have to sign their report. Someone has to be legally responsible for the interpretation.

Until AI has some form of legal standing, it cannot be used to independently diagnose, treat, prescribe, etc. The entire legal system of relevance is set up around the existence of a licensed individual. Your grandmother could tell people to try an autopap off the internet, but only a licensed individual can diagnose OSA, prescribe an autopap, get paid for the consult, and bear responsibility for those actions.
 
This was the point I made above, but apparently the guy in the video addresses this. Like I said, I see the potential for larger systems or companies to have AI do the work and find some shill who's willing to risk their license by signing 100+ charts per day without actually reading them. Some physicians already do this with NP supervision, and I've met NPs in FPA states who will sign whatever is put in front of them.

Imo, AI itself is not where the risk comes from. It comes from physicians and health systems allowing this to happen to ourselves, or from there being no demand for us due to future generations' total dependence on technology. The latter, imo, would be so far down the road that it's not relevant for anyone in or starting their careers.
 
One special consideration for psychiatry is involuntary commitment: there is no way the courts will allow AI to commit patients or maintain a commitment (at least until the judges themselves are replaced by AI), so there will always be at least some need for psychiatrists.
 