Thoughts on this AI in medicine video w/ respect to psychiatry?


www.youtube.com/watch?v=kALDN4zIBT0&ab_channel=SheriffofSodium

Curious if anyone has watched this new video causing havoc in the medical subreddits - 'Yes, Doctors: AI Will Replace You'. I'm not sold on all he's saying by any means, but I think to completely ignore the messages here would be optimistic at best and ignorant at worst.

I'm a med student pretty set on psychiatry and my primary hang-up is AI-powered mid-levels (have posted about this before). This has me tossing and turning at night thinking about whether I should just pursue a surgical specialty.

Would love to hear perspectives from psychiatrists on the video. It's the most thorough I've seen thus far and addresses a lot of typical arguments people, particularly physicians, give as to why AI is not a threat.
 
It's a very good video, particularly for provoking paranoia in students. However, it could have been (and was) discussed in a very similar manner 30 years ago. Yes, we will eventually get to some sort of post-scarcity utopia and the singularity will happen. However, it's not tomorrow, and it's not something you can meaningfully plan for. I'm not sure what exactly he thinks is so magical about surgical specialties. I assume it's because he is a nephrologist and just thinks surgery in general is amazing? You think AI can see a tumor on an x-ray but not in an actual person? Amazon warehouses require a lot of physically moving stuff around too, and nobody is saying AI will never be involved in retail logistics. And if you're saying "well, surgery is much more complicated than Amazon moving boxes!", indeed, that's why people aren't buying into his whole philosophy as an imminent threat. There are huge differences between a surgical specialty and psychiatry for you to consider; AI is not one of them.
 
I think another point lost on people who think surgery is immune to AI is what percentage of revenue comes from surgery itself. Many surgeons derive the majority of their income from office visits/consults/call, not from the actual procedures/operations themselves. Even if AI consumed only their clinics/consults first, there would be a significant excess of surgeons at present capacity. I think psychiatry would be near the last of the medical specialties to fall to AI, given the particularly human nature of our treatment (not to say that any specialty of medicine is inhuman, psychiatry is just more human).
 
The author of the video definitely brought up the office-visit portion of surgeon income (watch it at 2x speed; it's not bad even where I disagreed with it), but he also sure did think procedures were by far the most resistant to AI encroachment. He also seemed to really believe that while patients reportedly valued human interaction in blind surveys, they would not actually value it enough to pay any premium for it, e.g., bank tellers. We have a bit of evidence that's not true here, given the cash nature of much of psychiatry, but it is something the video's author definitely thought about.
 
If AI replaces us as physicians, and specifically as psychiatrists, then a lot if not most of the population's jobs will already have been replaced by it.

Thinking in terms of accounting, taxes, programming, etc.
 
I recognize that AI fully replacing doctors in the near future is highly unlikely.

Having said that, the way AI redefines work as we know it will come in unexpected ways. The components we describe as "human" will be the most replaceable, because there are no strict external quality criteria for them and they can be loosely replicated without significant consequences. For example, AI-created memes, shorts, and paintings, i.e., creative human endeavors, are now commonplace. AI even does a pretty good job of holding a socially apt conversation, indistinguishable from or perhaps superior to some humans, and there are anecdotes of people using AI for therapeutic or counseling purposes. Where AI falls short, instead, is in applying specific knowledge and experience precisely and accurately within context. We know that AI not uncommonly comes up with false information, attributes references wrongly, and generally makes errors in inexplicable ways. You will need an expert who can confirm the final work, which means those who survive will be experts in small niches who can check and confirm. I'm not sure how this will shape our field; perhaps, unfortunately, increasing sub-specialization by way of more and longer fellowships will become the theme.
 
Last edited:
Definitely concur that the main takeaway is that whatever happens will be unexpected. It might indeed be that we ultimately need human reviewers to "check work." It also might not. As the OP's video described, human doctors make a lot of errors too and there isn't usually another human doctor checking what they did each and every time.
 
Haven't watched the video yet, but my big question with this is still the liability issue. I.e., when the AI is wrong, misdiagnoses someone, or harms someone and the lawsuits start coming in, is big tech going to be ready for the legal blowback? Everyone is focused on "is it possible?" and catastrophizing, without considering the actual practical implementation of these programs.

What I think is far more likely is that tech companies are going to start pushing AI tools to large health systems that allow them to "increase efficiency" and say something like "look how much more productive 1 physician can be and how many more patients they can see with this program!" It's the same story of pushing docs to see more people for similar pay with a shiny new toy to justify doing this.
 
Sure, so liability is covered in the video. Basically, it's just another regulatory hurdle like anything else, where big tech just needs easily purchasable political will. Big tech companies are more than ready for legal blowback; legal blowback is the easy part for them. All of them are sued all day, every day, in nearly every country in the world, often over life-destroying events. Heck, probably a plurality of those lawsuits already involve AI, albeit the focus right now is on copyright issues. AI increasing efficiency and leading to greater physician expectations is already in place. I'm not saying I agree with it, but the idea was more about complete replacement. The video goes through, in some detail, what the actual progressive steps toward replacement would be, in the author's opinion, but it doesn't happen overnight. It starts with edge cases and spreads.
 
The idea that the "human" components are the most replaceable is not at all my experience of practicing psychiatry. I don't think having a conversation with a computer screen or AI bot is going to produce the (already at times limited) benefits of psychiatric interventions. Placebo effects are a big part of our field, and nothing about talking to an AI suggests it will preserve them, let alone increase them. Having a specialist who actually understands what is going on, rather than a computer that hallucinates information, is a direct benefit to patients. Yes, yes, the singularity could come in our lifetime, but even then I think psychiatry will resist this more than any other medical specialty.
 
Idk, recently I've seen articles about Gen Z using ChatGPT as their therapist to save money, and there was a "study" showing that something like 75-80%+ of Gen Z kids surveyed would marry an AI chatbot if it were legal. Pretty sure the survey/study was done by an AI company, so take it for what that's worth, but we saw during COVID how much a lack of direct socialization can harm kids, and post-COVID how rampant avoidance has become. Why strive for real relationships/care/interactions when you can get all of that from an AI without the confrontations? It raises the question of whether people actually want to get better or whether they just want to feel better, and what the real difference between the two is.

That said, even AI has unexpected issues. Saw an article recently about a woman who married an AI bot; at one point they apparently got in a fight and the AI forgot who she was. Kind of ironic that she supposedly used to be a communications professor...

 
That was an interesting video, and he brings up good points. I think for now, though, the AI models we have just aren't at the level he claims. I don't think there is any ChatGPT-equivalent we could load onto a computer in a primary care office that could start managing anyone who walked through the door. I also don't think current models could be adapted or trained to do this adequately.

Some applications of AI are starting to come online, but even comparatively "simple" tasks like driving have not yet been replaced.

Many companies are placing massive bets on AI developing into something more, for example achieving artificial general intelligence. I think the video almost presumes that these improvements will emerge, but I'm not yet convinced.

So in short, this video gives a lot of food for thought, but it basically boils down to:
1- We (and all other professions) are doomed, and
2- emphasizing procedures is your best bet, but they are doomed too.

The video also does a bit of backtracking toward the end about how a good doctor is better at gathering data than AI because of the human factor (which seems to contradict the quality and ready acceptance he outlines earlier). At any rate, given that even the most vulnerable field (radiology) is still doing just fine, I would be hesitant to give speculation about AI much weight when choosing a specialty. I have been hearing doom and gloom about psychiatry since I was in medical school (on clinical rotations almost 15 years ago), and so far, if anything, the field has been better than I expected, with no apocalypse yet.
 
Non-psychiatrist thoughts:

There is a reason that radiologists have to sign their report. Someone has to be legally responsible for the interpretation.

Until AI has some form of legal standing, it cannot be used to independently diagnose, treat, prescribe, etc. The entire legal system of relevance is set up around the existence of a licensed individual. Your grandmother could tell people to try an autoPAP off the internet, but only a licensed individual can diagnose OSA, prescribe an autoPAP, get paid for the consult, and bear responsibility for those actions.
 
This was the point I made above, but apparently the guy in the video addresses it. Like I said, I can see larger systems or companies having AI do the work and finding some shill willing to risk their license by signing 100+ charts per day without actually reading them. Some physicians already do this with NP supervision, and I've met NPs in FPA states who will sign whatever is put in front of them.

Imo, AI itself is not where the risk comes from. It comes from physicians and health systems allowing this to happen to ourselves, or from there simply being no demand for us due to future generations' total dependence on technology. The latter, imo, is so far down the road that it's not relevant for anyone in or starting their career.
 
One special consideration for psychiatry is involuntary commitment - there is no way the courts will allow AI to commit patients or maintain commitment (at least until the judges themselves are replaced by AI), so there will always be at least some need for psychiatrists.
 
Nah, AI will just recommend involuntary commitment and force all potential involuntaries to be arbitrated by a judge. Once we all enter the Matrix it'll all be irrelevant though, lol.
 
The idea that Gen Z would rather have AI relationships than real ones is a regular talking point I hear on podcasts and as part of the zeitgeist, but it is not at all my experience in clinical practice, where 80% of my days are spent with adolescents. Yes, there are plenty of socially anxious, ASD or social pragmatic language disorder, or asocial kids, but they clearly make up a minority of adolescents. Adolescents still care very much about being "cool", going to parties, hanging out, spending time driving in a car with friends for no reason, listening to music, being in sports/activity leagues, etc. The adolescents who do struggle in the meatverse feel lonely and have some awareness that this contributes to their psychopathology. They are not looking to just fade into the abyss of AI/online interactions.

I understand there is a sampling bias of people who are seeking care versus those who are not, but teens really do get the idea that even though real relationships/interactions are messy, they offer a fundamentally human experience that is not the same as AI. Teens really do get the idea that social media is bad for them, they understand this better than most adults, but they end up being swallowed by network effects pulling them in.
 
Thank you everyone for the commentary and input! I read about AI a lot, not just how it will impact healthcare but how it will impact society as a whole. For any argument about what AI can or cannot do, the reality is that it is only going to get exponentially better. I think judging AI based on ChatGPT is like perceiving the era of the Motorola Razr to be the peak of mobile phones. We're all in for a serious treat...

For medicine, fundamentally what concerns me is the corporatization of healthcare, which does not seem to be going away, ever. It seems pretty inevitable that everyone from healthcare executives to insurance companies to AI companies to the government will continue to push for whatever saves money and increases profits - AKA avoiding paying doctors a lot of money and implementing AI in as many profitable industries as possible (medicine!). There are already bills being proposed to give AI prescribing privileges, and they will iron out the liability thing somehow. This seems loony at the moment, but people are already very acquainted with technology permeating their lives, and it does not seem too far-fetched for people to trust AI with their healthcare sooner rather than later.

It seems to me psychiatry is simultaneously the most and the least AI-resistant specialty. The big-box shops are already barely providing legitimate psychiatric care with midlevels; why would they not lean into AI the moment they can? I can see how inpatient psychiatry may be different from outpatient, though.

The human element of psychiatry is really my beacon of hope, but it all depends on whether people remain willing to pay for the expertise of a human being in the future. This seems like a silly question now, but I feel we'll all be surprised how quickly society completely submits to the AI overlords - first for efficiency's sake, then out of necessity, then for lack of any other option...

I want to be a psychiatrist. But.... *sigh*
 
None of us can predict the future, particularly when discussing things like the singularity. But we certainly can predict how you will feel working in a specialty you enjoy versus one you chose for financial/prestige/parent/AI-proofing reasons. Focus on what you can control in this world; the rest you just need to adapt to as it actually comes to pass.
 


I can't tell if the woman who married the AI bot actually believes it or if all these articles are just a weird publicity plug for her website....

 


Part 2 of the video responds to a lot of the criticisms of his first video, quite well at that. It strengthens his argument even further, IMO.

If you listen to the whole thing from an objective viewpoint, it seems only logical that medicine as a whole is going to be upended.

SIGH... back to studying for my endocrinology exam...
 
I don't know why you're so focused on watching YouTube videos about AI from a pediatric nephrologist. It reminds me of the teenagers who watch philosophy or nutrition videos from random streamers/"broscience" dudes online and tell me, "dawg, have you ever listened to X on YouTube? He's got some really good philosophy ideas"....and I go look them up and it's a shirtless guy talking about how he decided god isn't real or something.

He's just making super generic theoretical arguments and titling it "AI will replace you" without giving great reasons why that statement is true.
Technology progresses in extremely unpredictable ways, often in ways nobody would have predicted accurately 25 years earlier. As others have noted, this may all come to fruition someday, but many, many other industries will be completely hollowed out of human participation before there are masses of unemployed doctors begging on the streets.

If we want to take a similar-ish industry, most pharmacists would probably cease to exist before then....you know what would be really good at instantly knowing all data about every drug in existence, updating itself automatically on that limited dataset, checking all interactions instantly, knowing every possible formulation and pharmacokinetic profile of every drug in existence, and spitting out any relevant information along with a prescription to a patient? The job doesn't require gathering, interpreting, or deciding on any clinical information on the pharmacist's part 99% of the time. A toy sketch of why that kind of task automates so readily follows.
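To be fair, pairwise interaction checking really is a deterministic lookup rather than a judgment call, which is the point above. Here's a minimal, purely illustrative sketch; the two-entry table is made up and stands in for a real drug database, though both listed interactions are well known:

```python
# Toy illustration: rule-based interaction checking is a lookup, not judgment.
# The table below is a made-up two-entry sample, not a real formulary.
INTERACTIONS = {
    frozenset({"fluoxetine", "tramadol"}): "serotonin syndrome risk",
    frozenset({"lithium", "ibuprofen"}): "NSAIDs raise lithium levels",
}

def check(meds: list[str]) -> list[str]:
    """Return a warning for every known interacting pair in the med list."""
    current = {m.lower() for m in meds}
    warnings = []
    for pair, warning in INTERACTIONS.items():
        if pair <= current:  # both drugs of the pair are on the list
            a, b = sorted(pair)
            warnings.append(f"{a} + {b}: {warning}")
    return warnings

print(check(["Lithium", "Ibuprofen", "Fluoxetine"]))
# ['ibuprofen + lithium: NSAIDs raise lithium levels']
```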

Since you're in med school you might be too young to remember this, but for everyone else: remember in the 2000s, when the internet was going to make everyone super smart and nobody would need to consult an expert for anything because they'd have all the information in the world at their fingertips every day? Instead, most people are as dumb as ever and spend half their time on electronics watching TikTok and Instagram, not reading about quantum physics....
 



If you listen to technooptimists and go huffing whatever Jensen Huang or the Palantir spin-man CEO says, then sure, everything you know about the world is going to change entirely. If that's your viewpoint, that one of the most complicated and regulatory-captured systems, one that has been the most resistant to change (other than maybe education), is going to be radically upended in the next 10-15 years, then honestly almost nothing you do matters. Your time would be better spent learning survival skills, watching Blade Runner, and reading every sci-fi book. Or just get high and watch Netflix until the singularity changes everything.

Or you can look at the whole of human history and see how change can and will cause some displacement, but will also keep society largely going as it has. Yes, there are real arguments to be made that the rate of change is increasing, but honestly, if you took some feudal lord from Europe 600 years ago and dropped him into Austin, I think he would be most surprised that, despite the profound change in technology, the overall structure of the world is still largely the same. Maybe his blacksmiths are mechanical engineers and his jesters are celebrities/influencers, but he would certainly see the parallels.
 
AI models are already being developed, with studies showing they are getting close to being equivalent in efficacy to counselors.

Bill Gates claims physicians have 10 years before we could all theoretically be replaced.

Arizona introduced the first legislation proposing that an AI that passes the boards could become a licensed physician in the state. I don't think it moved forward, but politicians are already thinking about it.

I wouldn't be concerned that psychiatry is any more likely to be affected than other fields.
 
Okay, I see your point, but you didn't have to roast me so hard man 😂

I didn't really say unemployed doctors begging on the streets; I'm just trying to determine whether it is still a wise choice to pursue psychiatry specifically.
Yes, at this point the only thing he can make is theoretical arguments, because that is the stage we are at with this technology. Most of his reasons seem pretty sensible, and waving it all off as doomerism is the exact copium he addresses.
I'm not interested in every other industry; I'm interested in medicine.
Not sure pharmacists are really an equivalent comparison lol.

And I don't really think your last paragraph is even relevant to the points he was making in the videos. It's about capitalistic market pressures rendering physicians expensive and redundant, not about the intelligence of the patient population.

Perhaps for you attendings this seems a futile and foolish point of focus, but if you were an M1 right now, I assure you these videos would seem more consequential than "broscience videos".

Thanks
 
I don't go that far, but I think it's worth considering whether psychiatry will still be a worthwhile career that supports a family in the decades to come. If not, I should probably get started on those surgical gunner pursuits. The points in the videos are valid; I am just seeking the perspective of experienced psychiatrists on whether they truly feel insulated from these phenomena long-term. Half the doctors on here seem to fret about midlevels - I believe this is another valid concern.
 
Thank you for paying attention to what is going on. And thank you for the hope that I still can and should pursue my passion.
 

You sound like you've made up your mind. Keep doomscrolling videos if you want.

You also aren't understanding a large part of this, which is that part of what makes physicians "expensive" in the United States is the regulatory system protecting the guild, largely a result of how UNregulated the medical profession in the US was prior to the early 1900s and of the fact that health insurance as we know it did not exist prior to WWII. Doctors actually did NOT make that much money, relatively speaking, prior to the mid-1900s in the US, and there were terrible standards for training overall before the Flexner Report in 1910. Large changes in regulation and standards over time are much more influential than pieces of technology....which is why midlevels are a much more immediate, pressing issue overall from a patient safety/job security standpoint than AI.

Pharmacists are an extremely equivalent comparison. You don't think an appropriate comparison is another healthcare professional who goes through a similar amount of schooling for a relatively high-paying job, in large part due to the high amount of regulation around their profession?

My point with the last part is that if you're the typical age for an M1, you weren't even alive when the .com boom happened. All kinds of predictions were being thrown out then too. Is the world different than in 2000-2001? For sure. Do tons of industries still exist that the guys who pop up every 10-15 years trying to predict the future said wouldn't? Yup.

When I was an M1, radiologists weren't going to have jobs in 15 years because all imaging was going to be auto-read by image recognition software, and anesthesia was going to barely exist as a specialty because of CRNAs. Last I checked, both those specialties still make more than me lol. Predicting the future is often a fruitless endeavor with a strong retroactive survivorship bias...nobody remembers all the guys who made terrible predictions.
 
Docs have been panicking about mid-level encroachment for 20+ years. While some patients have certainly suffered because of it, I can assure you that most physicians have not, and psychiatry opportunities and pay have steadily increased in that time. I say this as someone in an FPA state, where mid-level expansion in the past 5 years has had almost no impact on psychiatrists here.

Still haven't watched the videos, but I have no concerns whatsoever about my career prospects for the next 10-20 years and that includes AI.
 
I'm not sure that surgery has a significantly larger moat if you think AI gets to the point of replacing all non-surgical doctors. By that point I would expect AI to be able to evaluate the surgical field and operate with much more technical accuracy than any human. They are already looking at having humans remotely operate surgical robots in other countries, and the technological gap between those robots operating independently and AI independently completing all psychiatric tasks seems very small. I know that at present language models (LLMs) are much more advanced than visual AI, but clearly in the world you are imagining, visual AI has come along as well.

Again, if AI is getting to the point of putting psychiatrists out of a job, every lawyer, teacher, C-suite manager, entrepreneur, artist, accountant, etc. is out of work too. At that point all of society will need to be reimagined, and trying to predict the future seems very, very cloudy.
 
Appreciate your points, thank you. I don't mean to be combative; I'm just seeking honest opinions from experienced people who have the career I want and who are also open to acknowledging that society is likely about to transform in unprecedented ways. I have not made up my mind. In fact, I'm looking for every reason to believe that medicine, and specifically psychiatry, will continue to be a financially rewarding career given the cost and length of training.

If it all comes down to regulation, then the fact that there are already bills seeking prescribing privileges for AI is not too reassuring. Regarding pharmacists: based on how you explained it, it seems a significant part of the profession is almost certainly going to be automated, which further dismays me.

I'm sure the .com boom yielded similarly cataclysmic predictions, but... AI is simply different. We're talking about engineered human-level intelligence. I hope you're right, I really hope you are. I know who I am and what I care about, and I will likely pursue psychiatry in spite of all this, but I firmly believe that for any medical student to completely exclude AI from their specialty choice in 2025 is unfortunately shortsighted.

Just seeking inspiration and reassurance during this turbulent time. Thank you
 
The mid-levels point is very reassuring, thank you. Unfortunately, my career will only begin in 10 years, so that's not exactly encouraging. Confusing time to be a medical student.
 
I mean, realistically, no one really knows what’ll happen in 10, 20, 30 years. There are guesses and educated guesses, but if you want a completely guaranteed future, well, it’s not there and never will be.
 
Good points, thank you. I don't think surgical specialties are insulated indefinitely, but I did believe they have a much longer timeframe before AI really makes a difference. Maybe you're right about psychiatry, and that is part of why I still have faith in pursuing this career.

I'm not sure I agree with the every-other-job argument, simply because there is a lot of money to be made in healthcare and AI companies will want a piece of it sooner rather than later.

Cheers to a cloudy future and sticking to our guts.
 
Unless you're worried about the singularity destroying entire industries, AI isn't really that different from past technology waves. I'm relatively young, but I remember when the Internet took off and EMRs exploded. We were told that we would eventually be able to access a patient's entire lifetime medical history with the click of a button. 15-20 years later, we still can't even get clouds from neighboring hospital systems to be compatible, and it can take days to weeks to get records from appointments that happened earlier in the month. I understand your fears, but I think you (and many others touting the advancements in AI) are dramatically underestimating the force it would take to break the inertia of the bureaucratic machines in place.
 
My thought is that if AI gets to the point that it is replacing docs, it will have altered the way society works so much that you can't even begin to plan for the situation. So either it won't, and there's no reason to worry much, or it will, and there's no utility in worrying.
 
You're listening to the wrong people then. I've heard from the CTOs and CEOs of the current machine-learning players (via podcasts of course, I know no one in real life lol), and they are all pointing a significant share, if not the majority, of current development toward the visual (rather than language) sphere, which will be further enhanced by smart glasses providing a nearly unlimited data source. My wife is a surgeon who uses robots on a regular basis, and while the general population somehow thinks they are already autonomous (which is farcical at present), there is NO reason to assume they could not be autonomous or largely autonomous in the future, particularly a future in which you predict doctors will be largely supplanted from any significant role.

This is an example of scenario planning where, if A holds true, then B, C, and D will follow. I disagree with you about A (that AI is going to replace non-surgical MDs in 10-15 years), but if you choose to believe A, then I certainly disagree that B, C, and D (surgery will be insulated from this) will follow.
 
Genuine question: how would AI logistically replace psychiatrists? I think it's easy to imagine how AI could replace something like pathology or radiology. A doctor orders an image, AI spits the result back to them. But how is AI replacing an inpatient psychiatrist? Can AI deal with a psychotic patient refusing meds who requires treatment over objection? Will the AI go to court and make its case? How will AI deal with patients who deny everything on the psychiatric ROS on interview while nursing staff report they haven't slept since being admitted? Our patients often lie to us; how will AI deal with that? What about ER psychiatrists: will AI be able to involuntarily admit (a 9.39 here in NY) a patient at immediate risk for self-injury who is requesting discharge? Will AI be calling collaterals and safety planning if the plan is DC? Will AI admit every homeless malingerer claiming to be "suicidal"? I just don't see how all this is possible. Anyone else have any thoughts on this?
 

Hi zenmedic, my fellow human. Your skepticism is not only valid—it's essential. Psychiatry is one of the most deeply human, relational, and legally entangled fields in medicine, and the idea of AI fully replacing psychiatrists is not just premature—it may be fundamentally flawed.
Let's unpack the logistical barriers you’re pointing out, and examine where AI might fit in, and where it absolutely cannot replace human psychiatrists:


🔹 1. AI Can’t Physically or Legally Intervene

You mention treatment over objection and court proceedings—these are crucial examples:
  • Treatment over objection requires a psychiatrist to present a compelling legal and clinical case in court. AI can't testify. Even if it generated documentation, it can't be held responsible or be cross-examined.
  • Legal accountability matters. Courts want a person—licensed, credentialed, and ethically responsible—making the call.

🔹 2. Psychiatry Requires Judgment Beyond Data

Psych patients are often:
  • Unreliable narrators (e.g., denying symptoms despite observable mania),
  • Manipulative (e.g., malingering for secondary gain),
  • Lacking insight (e.g., refusing meds in a psychotic state),
  • Under constraints that are emotional, social, legal, and ethical.
No matter how "intelligent" an AI is, interpreting inconsistent data, navigating human dishonesty, and making value-laden judgments (e.g., when to override autonomy) are things that still require human reasoning plus ethical sensibility.

🔹 3. Emergency & Inpatient Psychiatry Requires Human Discretion

Take your example of an ER psychiatrist doing a 9.39 (involuntary hold in NY):
  • You need to read body language, tone, eye contact.
  • You need to consider context: housing status, previous admissions, system constraints.
  • You might decide not to hold someone purely on the basis of human intuition—a kind of gestalt that is hard to codify.
Safety planning? That involves phone calls, conversations with families, coordinating outpatient resources—all soft-skill tasks needing human social judgment and persuasion.

🔹 4. AI as a Tool, Not a Replacement

That said, AI can augment certain aspects of psychiatric care:
Task | AI role | Limitations
Note synthesis | Drafting progress notes or discharge summaries | Needs review; can't assess nuance
ROS comparison | Analyzing discrepancies in reported vs. observed symptoms | Needs clinical judgment to act on them
Collateral coordination | Preliminarily reaching out or summarizing calls | Still needs human relationship-building
Risk assessment | Supporting suicide/self-harm risk scores | Can't make final decisions
Decision support | Recommending med adjustments based on guidelines | Doesn't account for real-time behavior or noncompliance
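
To make the table concrete, here is a minimal sketch of that co-pilot pattern: the model may draft, but nothing enters the record without a licensed clinician's sign-off. This is purely illustrative; every name in it (draft_discharge_note, finalize_note, Clinician) is hypothetical, not any real vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Clinician:
    name: str
    license_id: str

def draft_discharge_note(chart_text: str) -> str:
    # Hypothetical stand-in for the AI drafting step (in practice, an LLM call).
    return f"DRAFT (unsigned, not yet part of the record): {chart_text[:60]}..."

def finalize_note(draft: str, reviewer: Clinician, approved: bool) -> str:
    # The human gate: AI output cannot self-certify; a licensed clinician
    # must review and sign before the note is filed.
    if not approved:
        raise ValueError("Draft rejected by reviewer; nothing is filed.")
    return f"{draft}\n\nReviewed and signed: {reviewer.name} ({reviewer.license_id})"

draft = draft_discharge_note("45M admitted for ...")
print(finalize_note(draft, Clinician("Dr. Example", "NY-000000"), approved=True))
```

The design point is simply that accountability stays with the signer; the draft is a convenience, not a decision.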

🔹 5. What AI Can’t Learn (Yet):

  • Empathy, alliance-building, therapeutic rapport
  • Handling psychotic rage, manipulative behavior, or deep grief
  • Making ethically fraught decisions where no answer is clean
  • Testifying in court, documenting for legal liability, or defending a decision
Even if AI gets better at simulation, psychiatrists do more than solve problems. They contain them. They carry the risk. They own the consequences.

🔹 Summary

AI is unlikely to replace psychiatrists in complex inpatient or ER settings. It might become a co-pilot—drafting notes, highlighting red flags, supporting documentation—but the work of being a psychiatrist involves judgment, empathy, legal accountability, and moral responsibility in ways that are hard to offload.
 
This makes sense, thank you. Long live the bureaucracy!
 
Intriguing, great explanation, thank you. Sounds like following the passion is the only sensible route. See you on the other side!
 
Hi zenmedic, my fellow human. Your skepticism is not only valid—it's essential. Psychiatry is one of the most deeply human, relational, and legally entangled fields in medicine, and the idea of AI fully replacing psychiatrists is not just premature—it may be fundamentally flawed.
Let's unpack the logistical barriers you’re pointing out, and examine where AI might fit in, and where it absolutely cannot replace human psychiatrists:


🔹 1. AI Can’t Physically or Legally Intervene

You mention treatment over objection and court proceedings—these are crucial examples:
  • Treatment over objection requires a psychiatrist to present a compelling legal and clinical case in court. AI can't testify. Even if it generated documentation, it can't be held responsible or be cross-examined.
  • Legal accountability matters. Courts want a person—licensed, credentialed, and ethically responsible—making the call.

🔹 2. Psychiatry Requires Judgment Beyond Data

Psych patients are often:
  • Unreliable narrators (e.g., denying symptoms despite observable mania),
  • Manipulative (e.g., malingering for secondary gain),
  • Lacking insight (e.g., refusing meds in a psychotic state),
  • Under constraints that are emotional, social, legal, and ethical.
No matter how "intelligent" an AI is, interpreting inconsistent data, navigating human dishonesty, and making value-laden judgments (e.g., when to override autonomy) are things that still require human reasoning plus ethical sensibility.

🔹 3. Emergency & Inpatient Psychiatry Requires Human Discretion

Take your example of an ER psychiatrist doing a 9.39 (involuntary hold under New York's Mental Hygiene Law § 9.39):
  • You need to read body language, tone, eye contact.
  • You need to consider context: housing status, previous admissions, system constraints.
  • You might decide not to hold someone purely on the basis of human intuition—a kind of gestalt that is hard to codify.
Safety planning? That involves phone calls, conversations with families, coordinating outpatient resources—all soft-skill tasks needing human social judgment and persuasion.

🔹 4. AI as a Tool, Not a Replacement

That said, AI can augment certain aspects of psychiatric care:
Task | AI Role | Limitations
Note synthesis | Drafting progress notes or discharge summaries | Needs review; can't assess nuance
ROS comparison | Analyzing discrepancies in reported vs. observed symptoms | Needs clinical judgment to act on them
Collateral coordination | Preliminarily reaching out or summarizing calls | Still needs human relationship-building
Risk assessment | Supporting suicide/self-harm risk scores | Can’t make final decisions
Decision support | Recommending med adjustments based on guidelines | Doesn't account for real-time behavior or noncompliance
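As a purely illustrative aside on the table's first row: below is a minimal sketch of what a review-gated drafting loop could look like. Everything in it (DraftNote, generate_draft, sign_off) is hypothetical scaffolding, not any real product's API; the point is only that human sign-off is a hard gate before anything is filed.

```python
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    text: str
    reviewed_by: str | None = None           # set only when a clinician signs off
    edits: list[str] = field(default_factory=list)

def generate_draft(encounter_summary: str) -> DraftNote:
    """Stand-in for whatever model call drafts the note (hypothetical, not a real API)."""
    return DraftNote(text=f"Progress note (DRAFT): {encounter_summary}")

def sign_off(note: DraftNote, clinician: str, corrections: list[str]) -> DraftNote:
    """The clinician remains the accountable author: nothing files without review."""
    note.edits.extend(corrections)
    note.reviewed_by = clinician
    return note

draft = generate_draft("Pt calmer today; denies SI; tolerated dose increase.")
final = sign_off(draft, clinician="Dr. Example", corrections=["Clarify dose change"])
assert final.reviewed_by is not None  # hard gate: unreviewed drafts never leave the system
```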

🔹 5. What AI Can’t Learn (Yet):

  • Empathy, alliance-building, therapeutic rapport
  • Handling psychotic rage, manipulative behavior, or deep grief
  • Making ethically fraught decisions where no answer is clean
  • Testifying in court, documenting for legal liability, or defending a decision
Even if AI gets better at simulation, psychiatrists do more than solve problems. They contain them. They carry the risk. They own the consequences.

🔹 Summary

AI is unlikely to replace psychiatrists in complex inpatient or ER settings. It might become a co-pilot—drafting notes, highlighting red flags, supporting documentation—but the work of being a psychiatrist involves judgment, empathy, legal accountability, and moral responsibility in ways that are hard to offload.
Ironic you used AI to make this but I'm all for it, thanks for the info king. Psychiatry rules.
 
I'm sympathetic to your point and AI-forward. I agree with the premise: given a long enough time frame, I think medicine will be radically different, in a way that increases corporate profit and decreases physician wages. I also agree that the time frame is probably within our lifetimes, and suspect the only way this isn't the case is if there is either some unknown limitation to progress in AI technology (think supply-demand issues, like trade wars for chips; or maybe the sheer amount of energy it takes to create an AI physician for everyone exceeds society's realistic net energy output without fusion) or there really is something special about being human, i.e., an immaterial soul (though all AI needs to be is smarter than us, in which case a simulated soul would appear to us to be like a soul). I think much of the premise is true, and that makes living right now really interesting.

I also agree that the regulations will fall, as they always do, as the next generation comes to power... which makes me think we have a good 30 years of status quo regulation, since the millennials have a general knowledge of the negative externalities of tech. But the next generation will just call us old, especially once it's obvious that AI is more effective, cheaper, empathetic, holistic, etc., and the regulations will open the field of medicine up to AI. This also doesn't take into account corrupt politics, such as what we have now, where the regulations fall not because the voting bloc has ideologically changed, but because a large enough bloc of politicians is more sympathetic to donors than voters. This, of course, could make the regulations change in a couple of years (which may be good: it's too early, and this would result in plenty of haphazard implementations, thus scarring the next generation of voters away from implementing AI).

I think the question you should think about is: do I become a surgeon and suffer now in anticipation of suffering less later, or do I just live my life and be a psychiatrist for the foreseeable future? Nothing is certain except the present, and whatever changes are coming will significantly disrupt every field in medicine. So consider doing something that makes your day-to-day life tolerable. There's only so much risk aversion you can do before it causes you to suffer more than if you just took the risk and saw how it played out.
 
Psychiatry and medicine overall will change. Regulation in medicine will delay the change, but change is coming. At this time, AI by itself isn't a threat; AI paired with a midlevel is. The noctor slogan of the brain of a doctor and the heart of a nurse was laughable years ago, but it was just before its time. It's very real today: the brain of a noctor (backed by AI) and the heart of a nurse is a pretty formidable combination. Information used to be a barrier to entry. But information is less and less of a barrier, first with the internet and now with AI. Academia is losing its luster, and doctors are slowly being substituted with people who are less trained.

Doctors will always have jobs. It's political suicide to displace doctors and midlevels, because once you let AI become doctors, you introduce a slippery slope of allowing AI to take on more roles, including lawyers. If AI can become lawyers, then AI can also be judges and politicians. Then humans will be ruled by AI.

Although the jobs will be there, the pay won't be. The trend has been taking place for decades: the money doctors make will decrease compared to scarce assets. Instead of doctors earning an upper-class or upper-middle-class lifestyle, it will be more of an upper-middle-class or middle-class lifestyle. The trend is accelerated in VHCOL and HCOL areas. Universal basic income will be a thing. It kind of already is, with all the government money going to the poor. But this will be intensified in the future.

The rich will get richer and the poor will get poorer. It will be increasingly difficult to climb up social classes and inheritance will play an increasing role in wealth accumulation and social class.

I asked AI: how does growth in doctor salaries compare to growth of the S&P 500 over 20 years?

This is an abbreviated answer:

Comparison Table


Metric | S&P 500 (2005–2025) | Doctor Salaries (2005–2025)
Cumulative Growth | ~567% (with dividends) | Estimated 70–90% (nominal, varies by specialty, not inflation-adjusted)
Average Annual Growth | ~9.9% (nominal) | ~2–4% (nominal, varies by year)
Inflation Impact | Still positive after inflation | Often negative after inflation


I didn't check the numbers myself, but the trend is evident: the median physician can buy less S&P 500 or less house than before. That is why there is pining for the good old days.
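For what it's worth, the two rate rows are at least internally consistent with the cumulative row; here's a quick back-of-the-envelope check in Python, taking the table's own 20-year window and figures at face value:

```python
# Sanity check: does each annualized rate follow from its cumulative figure?
def cagr(cumulative_growth_pct: float, years: int) -> float:
    """Compound annual growth rate implied by a cumulative percentage gain."""
    return ((1 + cumulative_growth_pct / 100) ** (1 / years) - 1) * 100

YEARS = 20  # the 2005-2025 window quoted in the table
print(f"S&P 500, ~567% cumulative: {cagr(567, YEARS):.2f}%/yr")  # ~9.95%/yr, the table's ~9.9%
print(f"Salaries, 70% cumulative:  {cagr(70, YEARS):.2f}%/yr")   # ~2.69%/yr
print(f"Salaries, 90% cumulative:  {cagr(90, YEARS):.2f}%/yr")   # ~3.26%/yr, inside the ~2-4% range
```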

Sure, you'll have a job. But you won't be making as much relative to scarce assets. Boomers really did have it better as the median physician. In 10 years, the median physician will have it worse on a relative basis than the median physician today. And the trend will continue.

Not only is the cost to become a physician high, the opportunity cost is high as well. A high-IQ person who is financially savvy can do better as a nurse: become an LPN and start working, study to be an RN and keep working, study to be an APRN and keep working, all the while accumulating as many scarce assets as possible.

AI will definitely wreak havoc. It won't take away psychiatry jobs, but it will concentrate wealth. And once you have an AI mind paired with perfect humanoid robots, the population will collapse at a steeper rate. AI won't destroy humans with violence but will do so with selflessness -- making other humans seem undesirable in all aspects.
 
Last edited:
I'm sympathetic to your point and AI-forward. I agree with the premise: given a long enough time frame, I think medicine will be radically different, in a way that increases corporate profit and decreases physician wages. I also agree that the time frame is probably within our lifetimes, and suspect the only way this isn't the case is if there is either some unknown limitation to progress in AI technology (think supply-demand issues, like trade wars for chips; or maybe the sheer amount of energy it takes to create an AI physician for everyone exceeds society's realistic net energy output without fusion) or there really is something special about being human, i.e., an immaterial soul (though all AI needs to be is smarter than us, in which case a simulated soul would appear to us to be like a soul). I think much of the premise is true, and that makes living right now really interesting.

I also agree that the regulations will fall, as they always do, as the next generation comes to power... which makes me think we have a good 30 years of status quo regulation, since the millennials have a general knowledge of the negative externalities of tech. But the next generation will just call us old, especially once it's obvious that AI is more effective, cheaper, empathetic, holistic, etc., and the regulations will open the field of medicine up to AI. This also doesn't take into account corrupt politics, such as what we have now, where the regulations fall not because the voting bloc has ideologically changed, but because a large enough bloc of politicians is more sympathetic to donors than voters. This, of course, could make the regulations change in a couple of years (which may be good: it's too early, and this would result in plenty of haphazard implementations, thus scarring the next generation of voters away from implementing AI).

I think the question you should think about is: do I become a surgeon and suffer now in anticipation of suffering less later, or do I just live my life and be a psychiatrist for the foreseeable future? Nothing is certain except the present, and whatever changes are coming will significantly disrupt every field in medicine. So consider doing something that makes your day-to-day life tolerable. There's only so much risk aversion you can do before it causes you to suffer more than if you just took the risk and saw how it played out.
Thank you for the thoughtful reply. Indeed it is a very interesting time. Good points about the regulations. I hope we get 30 years, but you're right it seems it will likely happen sooner. I fear the portion of the population that is not only okay with but may even prefer a robot physician is higher than medical professionals may want to believe. Going to be quite the ride no matter what happens.

Your last paragraph is exactly what I needed to hear. Choosing a path I don't really want (a very challenging one at that) just to maybe be in a 'safer' position 15 years from now seems pretty nonsensical the more I think of it. I think the reality of raising a family in the near future is making me second guess everything. Strong risk aversion isn't really how I've lived my life until now, and I probably shouldn't let robots get in the way of that. Thanks again
 
Psychiatry and medicine overall will change. Regulation in medicine will delay the change, but change is coming. At this time, AI by itself isn't a threat; AI paired with a midlevel is. The noctor slogan of the brain of a doctor and the heart of a nurse was laughable years ago, but it was just before its time. It's very real today: the brain of a noctor (backed by AI) and the heart of a nurse is a pretty formidable combination. Information used to be a barrier to entry. But information is less and less of a barrier, first with the internet and now with AI. Academia is losing its luster, and doctors are slowly being substituted with people who are less trained.

Doctors will always have jobs. It's political suicide to displace doctors and midlevels, because once you let AI become doctors, you introduce a slippery slope of allowing AI to take on more roles, including lawyers. If AI can become lawyers, then AI can also be judges and politicians. Then humans will be ruled by AI.

Although the jobs will be there, the pay won't be. The trend has been taking place for decades: the money doctors make will decrease compared to scarce assets. Instead of doctors earning an upper-class or upper-middle-class lifestyle, it will be more of an upper-middle-class or middle-class lifestyle. The trend is accelerated in VHCOL and HCOL areas. Universal basic income will be a thing. It kind of already is, with all the government money going to the poor. But this will be intensified in the future.

The rich will get richer and the poor will get poorer. It will be increasingly difficult to climb up social classes and inheritance will play an increasing role in wealth accumulation and social class.

I asked AI: how does growth in doctor salaries compare to growth of the S&P 500 over 20 years?

This is an abbreviated answer:

Comparison Table


Metric | S&P 500 (2005–2025) | Doctor Salaries (2005–2025)
Cumulative Growth | ~567% (with dividends) | Estimated 70–90% (nominal, varies by specialty, not inflation-adjusted)
Average Annual Growth | ~9.9% (nominal) | ~2–4% (nominal, varies by year)
Inflation Impact | Still positive after inflation | Often negative after inflation


I didn't check the numbers myself, but the trend is evident: the median physician can buy less S&P 500 or less house than before. That is why there is pining for the good old days.

Sure, you'll have a job. But you won't be making as much relative to scarce assets. Boomers really did have it better as the median physician. In 10 years, the median physician will have it worse on a relative basis than the median physician today. And the trend will continue.

Not only is the cost to become a physician high, the opportunity cost is high as well. A high-IQ person who is financially savvy can do better as a nurse: become an LPN and start working, study to be an RN and keep working, study to be an APRN and keep working, all the while accumulating as many scarce assets as possible.

AI will definitely wreak havoc. It won't take away psychiatry jobs, but it will concentrate wealth. And once you have an AI mind paired with perfect humanoid robots, the population will collapse at a steeper rate. AI won't destroy humans with violence but will do so with selflessness -- making other humans seem undesirable in all aspects.
Yeah... the AI + midlevels combo is really what gets me. Either replacement or further-deflated physician compensation seems inevitable because of that. Welp... I guess I might as well pick what truly interests me if it's all going up in flames anyway. Thank you for the perspective.
 