Thoughts on this AI in medicine video w/ respect to psychiatry?

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.
I asked AI: how do growth in doctor salaries compare to growth of S&P 500 over 20 years?

This is an abbreviated answer:

Comparison Table


MetricS&P 500 (2005–2025)Doctor Salaries (2005–2025)
Cumulative Growth~567% (with dividends)Estimated 70–90% (nominal, varies by specialty, not inflation-adjusted)
Average Annual Growth~9.9% (nominal)~2–4% (nominal, varies by year)
Inflation ImpactStill positive after inflationOften negative after inflation


I didn't check the number myself but the trend is evident. Median physician can buy less S&P 500 or less house compared to before. That is why there is a pining over the good-old-days.
I'm sorry but this is a completely wild comparison. There are almost no jobs other than C-suite and maybe software engineer that have kept up with S&P 500 over the past 20 years. The relevant comparison for all jobs is always versus inflation. Docs are relatively stagnant and some fields have gained while others have lost versus inflation. We are certainly not in a boom industry, but the overall salaries have been remarkably stable and much higher than almost every other country on Earth (with some specific exceptions for certain specialists in Australia).

Members don't see this ad.
 
I'm sorry but this is a completely wild comparison. There are almost no jobs other than C-suite and maybe software engineer that have kept up with S&P 500 over the past 20 years. The relevant comparison for all jobs is always versus inflation. Docs are relatively stagnant and some fields have gained while others have lost versus inflation. We are certainly not in a boom industry, but the overall salaries have been remarkably stable and much higher than almost every other country on Earth (with some specific exceptions for certain specialists in Australia).

Maybe it's a wild comparison, but it certainly is a valid comparison -- especially if investments in S&P 500 is supposed to replace someone's income down the road. Money if fungible, whether it is from job or from investments.

If you want to compare growth of physician income to CPI, be my guest. That doesn't stop the reality of inflation for those who want to be in upper or upper-middle class. CPI greatly reduces actual inflation for those on the upper part of society. The higher you rise up socioeconomically as a physician, the higher percentage of your income should be purchasing scarce assets like S&P 500 and real estate.

You are correct that wage increase of many jobs cannot compete with growth of S&P 500. Add on higher taxation of wages over investments. This is a race against time. The young (Gen Z) will likely not buy houses anytime soon and must inherit them. They'll own nothing and be happy. The inability for the young to buy their own house is already happening for decades in other countries. Be happy we're in the US. If we were physicians in another country, upward social mobility and even having enough for retirement will be a bigger uphill battle.

I hope you're leaving inheritance for your kids. Seriously.
 
Last edited:
I definitely fall in the camp that agrees (and hopes) that AI will take over most jobs, including ours, in the next 10 years. I have no doubt that AI could do a better job with psychiatry right now than NPs and PAs do. I think most specialties could be taken over before too long. My father’s GP who is a PA is terrible and AI could definitely perform better. Lawyers and accountants should be able to be replaced fairly soon. Teachers for sure. I’m not sure why most data analysts aren’t already replaced. It would be very nice to not have to work anymore, and people would receive better services overall.
 
Members don't see this ad :)
Hi zenmedic, my fellow human. Your skepticism is not only valid—it's essential. Psychiatry is one of the most deeply human, relational, and legally entangled fields in medicine, and the idea of AI fully replacing psychiatrists is not just premature—it may be fundamentally flawed.
Let's unpack the logistical barriers you’re pointing out, and examine where AI might fit in, and where it absolutely cannot replace human psychiatrists:


🔹 1. AI Can’t Physically or Legally Intervene

You mention treatment over objection and court proceedings—these are crucial examples:
  • Treatment over objection requires a psychiatrist to present a compelling legal and clinical case in court. AI can't testify. Even if it generated documentation, it can't be held responsible or be cross-examined.
  • Legal accountability matters. Courts want a person—licensed, credentialed, and ethically responsible—making the call.

🔹 2. Psychiatry Requires Judgment Beyond Data

Psych patients are often:
  • Unreliable narrators (e.g., denying symptoms despite observable mania),
  • Manipulative (e.g., malingering for secondary gain),
  • Lacking insight (e.g., refusing meds in a psychotic state),
  • Under constraints that are emotional, social, legal, and ethical.
No matter how "intelligent" an AI is, interpreting inconsistent data, navigating human dishonesty, and making value-laden judgments (e.g., when to override autonomy) are things that still require human reasoning plus ethical sensibility.

🔹 3. Emergency & Inpatient Psychiatry Requires Human Discretion

Take your example of an ER psychiatrist doing a 939 (involuntary hold in NY):
  • You need to read body language, tone, eye contact.
  • You need to consider context: housing status, previous admissions, system constraints.
  • You might decide not to hold someone purely on the basis of human intuition—a kind of gestalt that is hard to codify.
Safety planning? That involves phone calls, conversations with families, coordinating outpatient resources—all soft-skill tasks needing human social judgment and persuasion.

🔹 4. AI as a Tool, Not a Replacement

That said, AI can augment certain aspects of psychiatric care:
TaskAI RoleLimitations
Note synthesisDrafting progress notes or discharge summariesNeeds review; can't assess nuance
ROS comparisonAnalyzing discrepancies in reported vs. observed symptomsNeeds clinical judgment to act on them
Collateral coordinationPreliminarily reaching out or summarizing callsStill needs human relationship-building
Risk assessmentSupporting suicide/self-harm risk scoresCan’t make final decisions
Decision supportRecommending med adjustments based on guidelinesDoesn't account for real-time behavior or noncompliance

🔹 5. What AI Can’t Learn (Yet):

  • Empathy, alliance-building, therapeutic rapport
  • Handling psychotic rage, manipulative behavior, or deep grief
  • Making ethically fraught decisions where no answer is clean
  • Testifying in court, documenting for legal liability, or defending a decision
Even if AI gets better at simulation, psychiatrists do more than solve problems. They contain them. They carry the risk. They own the consequences.

🔹 Summary​

AI is unlikely to replace psychiatrists in complex inpatient or ER settings. It might become a co-pilot—drafting notes, highlighting red flags, supporting documentation—but the work of being a psychiatrist involves judgment, empathy, legal accountability, and moral responsibility in ways that are hard to offload.
Haha inpatient psychiatrists will be easily replaceable by AI relatively soon. Lawyers will be able to be replaced. AI psychiatrists will be able to testify in a legal setting better than many of my peers do now. I hope everyone here understands that ChatGPT does not represent the current state of AI. Also, we are in a race with other countries to have the most sophisticated AI.

Comparing AI to EMRs or radiology reading software (or basically any previous invention) is silly or disingenuous. I don’t know if it’s a special subset of docs, but most I know fully agree that we won’t be needed in the relatively short term. My good friend is a neuroradiologist and he fully expects to not be working in 10 years. He uses AI informally and he says it’s like having a good quality third or fourth year resident. And what he’s using is nowhere near at the current limit of AI.
 
Last edited:
I think one of the very large obstacles to using AI in psychiatry is adversarial robustness. Jailbreaking these LLMs is not very difficult at present and you can't have one in any decision-making capacity until you make it harder for someone to intone the right formulas and break it in a very profound way. The tendency towards sycophancy is also a problem, invalidating the invalid is not great therapeutically.
 
I definitely fall in the camp that agrees (and hopes) that AI will take over most jobs, including ours, in the next 10 years. I have no doubt that AI could do a better job with psychiatry right now than NPs and PAs do. I think most specialties could be taken over before too long. My father’s GP who is a PA is terrible and AI could definitely perform better. Lawyers and accountants should be able to be replaced fairly soon. Teachers for sure. I’m not sure why most data analysts aren’t already replaced. It would be very nice to not have to work anymore, and people would receive better services overall.
And when AI replaces us all people are going to earn money to survive how? You think jobs that require thinking are going to just disappear and menial jobs will be all that’s left? Why not just automate those?

You clearly don’t work with kids if you think teachers are going to be replaced. It’s basic lack of common sense like this that drives all the fear mongering.
 
And when AI replaces us all people are going to earn money to survive how? You think jobs that require thinking are going to just disappear and menial jobs will be all that’s left? Why not just automate those?

You clearly don’t work with kids if you think teachers are going to be replaced. It’s basic lack of common sense like this that drives all the fear mongering.

I know that this isn’t directed at me but just for fun:

At some point, we are essentially all replaceable except politicians. They’ll keep control. The world could move closer to communism in which everyone receives a monthly stipend. “Jobs” cease to exist. 20 years? 500 years?

I don’t love the teachers example as I believe AI is already superior outside of organizing the curriculum and discipline. I was 100% public school. Our house is even in a better district, but once I saw what middle school has become, my children were pulled for private. The teachers are better and more in control in local private school, but they still make mistakes and all children don’t learn the same way. When I’m trying to review homework, I use AI to check answers, re-educate myself on the material, and it’ll provide multiple learning strategies if my child is confused. It is becoming a more customizable education. I currently continue private school for the quality socialization and reinforcing positive values. When/if those fall short, I am much more confident that I could homeschool my children better than public school. With online resources and opportunities, it is already possible to obtain a bachelors by age 18 with homeschooling. Local public school has already joined with a community college to create tracks to obtain an associates by high school graduation. High school teachers are slowly becoming obsolete.

Who knows what the future holds or how quickly change is coming?
 
I don’t love the teachers example as I believe AI is already superior outside of organizing the curriculum and discipline. I was 100% public school. Our house is even in a better district, but once I saw what middle school has become, my children were pulled for private. The teachers are better and more in control in local private school, but they still make mistakes and all children don’t learn the same way. When I’m trying to review homework, I use AI to check answers, re-educate myself on the material, and it’ll provide multiple learning strategies if my child is confused. It is becoming a more customizable education. I currently continue private school for the quality socialization and reinforcing positive values. When/if those fall short, I am much more confident that I could homeschool my children better than public school. With online resources and opportunities, it is already possible to obtain a bachelors by age 18 with homeschooling. Local public school has already joined with a community college to create tracks to obtain an associates by high school graduation. High school teachers are slowly becoming obsolete.

You could homeschool your children better while working a full time job all day? I mean I guess if all these people are unemployed because AI took their jobs they can all hang out and homeschool their kids all day but otherwise who's supervising that 8 year old? I say this being a person who was homeschooled for part of my life and didn't have any issue with it...but there are logistical considerations there.

Self directed learning works for a specific subsection of the population that is relatively well resourced, has adequate internal motivation and/or a good support system/external motivator and have put effort into other ways to socialize. So generally kids/parents that would be fairly successful anyway in most settings. I don't disagree that this can be a helpful tool to re-check answers but in that sense it's just a fancy search engine or really quick way to move through textbooks, it makes it easier to teach yourself or your child but you still have to be willing to do this.

A huge part of school for the majority of kids is just getting them to show up and do the work. I also hear complaints from parents about the amount of time kids are spending on electronics at school nearly every day and the amount of distraction it causes...schools are terrible at locking down electronics adequately and I wouldn't expect that to get any better. Cheating is also becoming an even bigger problem than it was before and I wouldn't be surprised if we soon see a complete reversal with some forms of test administration/quizzes going back to pen and paper cause you know what's hard to cheat on? A paper test where I have to physically write out chemistry equations.
 
And when AI replaces us all people are going to earn money to survive how? You think jobs that require thinking are going to just disappear and menial jobs will be all that’s left? Why not just automate those?

You clearly don’t work with kids if you think teachers are going to be replaced. It’s basic lack of common sense like this that drives all the fear mongering.
I never said menial jobs would not be replaced by AI. I believe they will be.

It's not fear mongering if I (and many people) want it to happen.

Texasphysician's post responds to your kid comment well. I did child and adolescent fellowship and I fully stand by what I said.
 
I never said menial jobs would not be replaced by AI. I believe they will be.

It's not fear mongering if I (and many people) want it to happen.

Texasphysician's post responds to your kid comment well. I did child and adolescent fellowship and I fully stand by what I said.
So let's take this out to the endgame then. When most all jobs are replaced by AI what do you think humans will do? Do you think the gov is just going to subsidized everything to people so we can live in a society where we do nothing or do you think we'll see mass homelessness and a spiral into dystopia? What do you see as the outcome here? Or have you all just not thought about the basic frameworks of society and the requirement of working classes for societies to exist? These are very basic common sense sociology topics that silicon valley types seem to blow off as an afterthought or completely lack comprehension of while chasing the next project.

Why do you want this to happen? You may be financially well off enough to weather the collapse of working society, but how do you think most people will respond to having no income? How about Gen Z and Gen Alpha? They just going to be batteries for the Matrix, lol? Again, what do you think the near complete implementation of AI leads to? Either you all are grossly overestimating the role AI will play in the near future, you're cheering for the road to dystopia, or you're completely delusional that we'll maintain a functional society. I'm legitimately curious how you (and others) envision what you're hoping for going well.

I completely disagree with Texas on this one and am pretty shocked you're CAP if you didn't see how detrimental the shift to technology-based learning during COVID was. You think going all-in on that is going to go well? Or are you assuming that kids just won't need school anymore because AI is going to do everything for us?
 
You could homeschool your children better while working a full time job all day? I mean I guess if all these people are unemployed because AI took their jobs they can all hang out and homeschool their kids all day but otherwise who's supervising that 8 year old? I say this being a person who was homeschooled for part of my life and didn't have any issue with it...but there are logistical considerations there.

Self directed learning works for a specific subsection of the population that is relatively well resourced, has adequate internal motivation and/or a good support system/external motivator and have put effort into other ways to socialize. So generally kids/parents that would be fairly successful anyway in most settings. I don't disagree that this can be a helpful tool to re-check answers but in that sense it's just a fancy search engine or really quick way to move through textbooks, it makes it easier to teach yourself or your child but you still have to be willing to do this.

A huge part of school for the majority of kids is just getting them to show up and do the work. I also hear complaints from parents about the amount of time kids are spending on electronics at school nearly every day and the amount of distraction it causes...schools are terrible at locking down electronics adequately and I wouldn't expect that to get any better. Cheating is also becoming an even bigger problem than it was before and I wouldn't be surprised if we soon see a complete reversal with some forms of test administration/quizzes going back to pen and paper cause you know what's hard to cheat on? A paper test where I have to physically write out chemistry equations.

I’m not saying AI is there yet to homeschool children autonomously. I have advantages that other families don’t have. I don’t work a singular FT job. I have flexibilities. Same with my wife. We don’t homeschool either. This is more of an academic, theoretical debate.

The problem with electronics is using them to waste time. They need to be appropriately managed when used. If we teach our children coding, typing, SEO, etc, they are probably better off for the future. It will take more electronic monitoring programs and oversight by AI to get there. AI already manages one of my child’s math homework. AI evaluates and grades it. The teacher just sees the homework grade. She never reviews it herself.

With online combinations of high school/college and online NP degrees, it is possible for my children to become psych NP’s with my further educating them before they can drink alcohol. I’m not saying that there is interest or any of that will happen, but AI and online schooling makes it possible.

We are still a long ways off from school being fully AI driven. Being CAP, the detrimental aspects of social media in electronics is a problem. I don’t plan on homeschooling my children at this time. A big component of the value of school right now is socialization and I believe in in-person learning.

AI has already taken jobs at my local fast food places and it’s the better ones. You either order via a touchscreen or AI voice that inputs data for the cooks. AI sounds quite similar to a person, speaks in many languages, and transcribes my order perfectly.

AI has much to be excited about and some I’m concerned about.
 
I know that this isn’t directed at me but just for fun:

At some point, we are essentially all replaceable except politicians. They’ll keep control. The world could move closer to communism in which everyone receives a monthly stipend. “Jobs” cease to exist. 20 years? 500 years?

I don’t love the teachers example as I believe AI is already superior outside of organizing the curriculum and discipline. I was 100% public school. Our house is even in a better district, but once I saw what middle school has become, my children were pulled for private. The teachers are better and more in control in local private school, but they still make mistakes and all children don’t learn the same way. When I’m trying to review homework, I use AI to check answers, re-educate myself on the material, and it’ll provide multiple learning strategies if my child is confused. It is becoming a more customizable education. I currently continue private school for the quality socialization and reinforcing positive values. When/if those fall short, I am much more confident that I could homeschool my children better than public school. With online resources and opportunities, it is already possible to obtain a bachelors by age 18 with homeschooling. Local public school has already joined with a community college to create tracks to obtain an associates by high school graduation. High school teachers are slowly becoming obsolete.

Who knows what the future holds or how quickly change is coming?
Yes, because we see how well communism has worked. You're in Texas, you really think Americans are going to be good with that? Sounds like the suggestion is to build the bunker now and make sure we're stocked up on firearms for the impending revolutions. Only semi-sarcastic here.

A few things with your school example. My wife just finished her graduate degree in education and we've talked a lot about AI and it's future role. AI is absolutely not ready for what you're describing. AI can be certainly be utilized in many ways, but getting kids to actually do that like CH68 said is a completely different story. AI also currently only puts out information based on what's put in. It's not good at all in regards to filtering what is most relevant or the veracity of information. Just this past week I've found Google's AI to be obviously incorrect multiple times. One time was so bad that the question I asked (most seasons on SNL by cast) was completely different from the top 3 hits Google gave and different from the source it pulled from! How do you know what you're reteaching yourself is even correct?

There's also major problems with "customizable education". There were multiple studies on this in the 60's and 70's with "open classrooms" where students could move to different areas on their own to study what they wanted and what they were weakest at. They failed miserably. This is becoming popular again and people seem to have forgotten that this didn't work previously (or are twisting why it didn't work). "Teaching to a kids strengths" is a great idea, but failing to provide a broader education, not teaching critical thinking skills, and not educating kids on how to examine the validity and veracity of information only leads us further down the current path of stupidity of the general public. How will AI teach these things? I'll also add that we've had a few kids who were homeschooled and got early bachelor's apply for med school here and holy crap were they terrible applicants. Missing the most basic social skills needed to navigate an interview, one was so bad I couldn't even rate them.


I’m not saying AI is there yet to homeschool children autonomously. I have advantages that other families don’t have. I don’t work a singular FT job. I have flexibilities. Same with my wife. We don’t homeschool either. This is more of an academic, theoretical debate.
Seems far less theoretical and more of an actual outcome according to Wilf since they're actively hoping this all will be happening in the next decade...

The problem with electronics is using them to waste time. They need to be appropriately managed when used. If we teach our children coding, typing, SEO, etc, they are probably better off for the future. It will take more electronic monitoring programs and oversight by AI to get there. AI already manages one of my child’s math homework. AI evaluates and grades it. The teacher just sees the homework grade. She never reviews it herself.

With online combinations of high school/college and online NP degrees, it is possible for my children to become psych NP’s with my further educating them before they can drink alcohol. I’m not saying that there is interest or any of that will happen, but AI and online schooling makes it possible.
The bolded is a failure by the teacher. I say this as someone who was a high school and college math tutor. If she's not looking at the work at all then she's not really teaching, at least not regarding the homework.

The second point about NPs just furthers the race to the bottom in terms of medical care. Most of us here already complain about how poor the quality of care from many NPs is and the biggest deficit is lack of actual clinical experience. This sounds like a great we to continue to churn out "providers" who can't function clinically. If that's the plan, why not just replace them with AI?

We are still a long ways off from school being fully AI driven. Being CAP, the detrimental aspects of social media in electronics is a problem. I don’t plan on homeschooling my children at this time. A big component of the value of school right now is socialization and I believe in in-person learning.
This is why I find it bizarre when people in our field are so pro-technology/AI. The negative consequences of excessive technology on mental health keeps mounting as we do more studies, so this idea that immersing our society in technological advancements is going to be some great renaissance with minimal consequences is baffling to me. Touch grass (preferably with other kids) is both an insult to kids and great therapeutic advice.
 
Yes, because we see how well communism has worked. You're in Texas, you really think Americans are going to be good with that? Sounds like the suggestion is to build the bunker now and make sure we're stocked up on firearms for the impending revolutions. Only semi-sarcastic here.

A few things with your school example. My wife just finished her graduate degree in education and we've talked a lot about AI and it's future role. AI is absolutely not ready for what you're describing. AI can be certainly be utilized in many ways, but getting kids to actually do that like CH68 said is a completely different story. AI also currently only puts out information based on what's put in. It's not good at all in regards to filtering what is most relevant or the veracity of information. Just this past week I've found Google's AI to be obviously incorrect multiple times. One time was so bad that the question I asked (most seasons on SNL by cast) was completely different from the top 3 hits Google gave and different from the source it pulled from! How do you know what you're reteaching yourself is even correct?

There's also major problems with "customizable education". There were multiple studies on this in the 60's and 70's with "open classrooms" where students could move to different areas on their own to study what they wanted and what they were weakest at. They failed miserably. This is becoming popular again and people seem to have forgotten that this didn't work previously (or are twisting why it didn't work). "Teaching to a kids strengths" is a great idea, but failing to provide a broader education, not teaching critical thinking skills, and not educating kids on how to examine the validity and veracity of information only leads us further down the current path of stupidity of the general public. How will AI teach these things? I'll also add that we've had a few kids who were homeschooled and got early bachelor's apply for med school here and holy crap were they terrible applicants. Missing the most basic social skills needed to navigate an interview, one was so bad I couldn't even rate them.



Seems far less theoretical and more of an actual outcome according to Wilf since they're actively hoping this all will be happening in the next decade...


The bolded is a failure by the teacher. I say this as someone who was a high school and college math tutor. If she's not looking at the work at all then she's not really teaching, at least not regarding the homework.

The second point about NPs just furthers the race to the bottom in terms of medical care. Most of us here already complain about how poor the quality of care from many NPs is and the biggest deficit is lack of actual clinical experience. This sounds like a great we to continue to churn out "providers" who can't function clinically. If that's the plan, why not just replace them with AI?


This is why I find it bizarre when people in our field are so pro-technology/AI. The negative consequences of excessive technology on mental health keeps mounting as we do more studies, so this idea that immersing our society in technological advancements is going to be some great renaissance with minimal consequences is baffling to me. Touch grass (preferably with other kids) is both an insult to kids and great therapeutic advice.
A couple issues with your post:
1. AI is going to improve exponentially over the coming years. Your critiques of current AI (which we don’t even have access to) have little bearing on what AI will be in 5-10 years.
2. This conversation is more about what will happen, not what we want to happen (although in my case I believe they are the same.) The US and EU are in a race with China to have the most advanced AI.
3. I have not advocated that kids not be in a classroom type of setting and to socialize, even if it is AI that is the teacher. Also, the importance of education will be very different because we won’t need doctors, lawyers, researchers (or not nearly as many of them), teachers, accountants/finance, architects, etc.
4. I know this all sounds sort of science fiction like to many people here, especially those who are a little older, but it is not. The exact timetable is up for debate but every expert I’ve read agree there will be very large changes within 10 years and monumental changes within 15-20.
 
Members don't see this ad :)
A couple issues with your post:
1. AI is going to improve exponentially over the coming years. Your critiques of current AI (which we don’t even have access to) have little bearing on what AI will be in 5-10 years.
2. This conversation is more about what will happen, not what we want to happen (although in my case I believe they are the same.) The US and EU are in a race with China to have the most advanced AI.
3. I have not advocated that kids not be in a classroom type of setting and to socialize, even if it is AI that is the teacher. Also, the importance of education will be very different because we won’t need doctors, lawyers, researchers (or not nearly as many of them), teachers, accountants/finance, architects, etc.
4. I know this all sounds sort of science fiction like to many people here, especially those who are a little older, but it is not. The exact timetable is up for debate but every expert I’ve read agree there will be very large changes within 10 years and monumental changes within 15-20.

AI may improve exponentially… Or it won’t. But let’s just say it does - the gap between what is possible with the technology and what is actually available for the consumer will only grow. There are very big, very real problems regarding the economics of AI. Supply chain and power consumption to start. These problems are not going away any time soon.
 
Yes, because we see how well communism has worked. You're in Texas, you really think Americans are going to be good with that? Sounds like the suggestion is to build the bunker now and make sure we're stocked up on firearms for the impending revolutions. Only semi-sarcastic here.

A few things with your school example. My wife just finished her graduate degree in education and we've talked a lot about AI and it's future role. AI is absolutely not ready for what you're describing. AI can be certainly be utilized in many ways, but getting kids to actually do that like CH68 said is a completely different story. AI also currently only puts out information based on what's put in. It's not good at all in regards to filtering what is most relevant or the veracity of information. Just this past week I've found Google's AI to be obviously incorrect multiple times. One time was so bad that the question I asked (most seasons on SNL by cast) was completely different from the top 3 hits Google gave and different from the source it pulled from! How do you know what you're reteaching yourself is even correct?

There's also major problems with "customizable education". There were multiple studies on this in the 60's and 70's with "open classrooms" where students could move to different areas on their own to study what they wanted and what they were weakest at. They failed miserably. This is becoming popular again and people seem to have forgotten that this didn't work previously (or are twisting why it didn't work). "Teaching to a kids strengths" is a great idea, but failing to provide a broader education, not teaching critical thinking skills, and not educating kids on how to examine the validity and veracity of information only leads us further down the current path of stupidity of the general public. How will AI teach these things? I'll also add that we've had a few kids who were homeschooled and got early bachelor's apply for med school here and holy crap were they terrible applicants. Missing the most basic social skills needed to navigate an interview, one was so bad I couldn't even rate them.



Seems far less theoretical and more of an actual outcome according to Wilf since they're actively hoping this all will be happening in the next decade...


The bolded is a failure by the teacher. I say this as someone who was a high school and college math tutor. If she's not looking at the work at all then she's not really teaching, at least not regarding the homework.

The second point about NPs just furthers the race to the bottom in terms of medical care. Most of us here already complain about how poor the quality of care from many NPs is and the biggest deficit is lack of actual clinical experience. This sounds like a great we to continue to churn out "providers" who can't function clinically. If that's the plan, why not just replace them with AI?


This is why I find it bizarre when people in our field are so pro-technology/AI. The negative consequences of excessive technology on mental health keeps mounting as we do more studies, so this idea that immersing our society in technological advancements is going to be some great renaissance with minimal consequences is baffling to me. Touch grass (preferably with other kids) is both an insult to kids and great therapeutic advice.

I’ve been telling everyone willing to listen in medicine that AI is mostly something to avoid, especially with documentation help. Like NP’s, we are currently training our replacements by allowing AI (machine learning) to study exactly how we do things.

What you see in currently released AI’s is already far outdated. We have the technology right now to replace every fast food employee outside of cleaning staff. We can replace master’s level counselors. Every waiter at every restaurant is now a luxury. Call centers replaced. Outside of restocking, stores don’t need staff to check out or ask questions. The voices are so good, you can’t identify whether the voice is real or AI. It all exists. We are now waiting on prices to drop for mass roll-outs or politicians to grant rights or the right companies to package the items for sale.

We are close to a big paradigm shift. The roll-out in fast food joints near me is already happening. I couldn’t tell I was talking to AI by voice quality. It had better communication skills than anytime I’ve ordered fast food in the past. It is multi-lingual. It was shocking how good it was. I literally asked about the new sales training as it was too good for a fast food joint.

My fear is that as we accept this technology as better than people, there will be a push to expand in all directions. If the nation could drop the cost of healthcare insurance by 50% tomorrow but everyone needs to speak/trial treatment with an AI prescriber first, the country would vote in favor.

I’m not saying we are all out of a job in a few years, but my generation will be impacted significantly. My children won’t have access to many of the entry level jobs in high school that I had access too if the current rates of technology continue.
 
I’ve been telling everyone willing to listen in medicine that AI is mostly something to avoid, especially with documentation help. Like NP’s, we are currently training our replacements by allowing AI (machine learning) to study exactly how we do things.

What you see in currently released AI’s is already far outdated. We have the technology right now to replace every fast food employee outside of cleaning staff. We can replace master’s level counselors. Every waiter at every restaurant is now a luxury. Call centers replaced. Outside of restocking, stores don’t need staff to check out or ask questions. The voices are so good, you can’t identify whether the voice is real or AI. It all exists. We are now waiting on prices to drop for mass roll-outs or politicians to grant rights or the right companies to package the items for sale.

We are close to a big paradigm shift. The roll-out in fast food joints near me is already happening. I couldn’t tell I was talking to AI by voice quality. It had better communication skills than anytime I’ve ordered fast food in the past. It is multi-lingual. It was shocking how good it was. I literally asked about the new sales training as it was too good for a fast food joint.

My fear is that as we accept this technology as better than people, there will be a push to expand in all directions. If the nation could drop the cost of healthcare insurance by 50% tomorrow but everyone needs to speak/trial treatment with an AI prescriber first, the country would vote in favor.

I’m not saying we are all out of a job in a few years, but my generation will be impacted significantly. My children won’t have access to many of the entry level jobs in high school that I had access too if the current rates of technology continue.

Who is "we"? People keep saying this but where's the evidence that OpenAI or Google or Meta are all hanging on to some undisclosed super AI models they're not releasing? Sure, they're continuing to do research but they're also releasing new models as fast as they can reasonably do so because we're in a capitalist society where if they bleed users because the other guy's model is better, they're toast.

I think you're also confusing artificial intelligence with like anything electronic. Self checkout lines at lowes are not some cutting edge example of artificial intelligence. You could have ordered off a kiosk at your table for forever at restaurants....this was already a thing during COVID with QR codes like 3 years ago and had nothing to do with artificial intelligence. What is this LLM that is going to replace a employee making a hamburger or walk your plate to your table?
 
Cheating is also becoming an even bigger problem than it was before and I wouldn't be surprised if we soon see a complete reversal with some forms of test administration/quizzes going back to pen and paper cause you know what's hard to cheat on? A paper test where I have to physically write out chemistry equations.
The reversal is already happening. I wish I bought stocks in Blue Books haha

 
Who is "we"? People keep saying this but where's the evidence that OpenAI or Google or Meta are all hanging on to some undisclosed super AI models they're not releasing? Sure, they're continuing to do research but they're also releasing new models as fast as they can reasonably do so because we're in a capitalist society where if they bleed users because the other guy's model is better, they're toast.

I think you're also confusing artificial intelligence with like anything electronic. Self checkout lines at lowes are not some cutting edge example of artificial intelligence. You could have ordered off a kiosk at your table for forever at restaurants....this was already a thing during COVID with QR codes like 3 years ago and had nothing to do with artificial intelligence. What is this LLM that is going to replace a employee making a hamburger or walk your plate to your table?

Many companies are developing AI’s for commercial purposes that won’t be released to the public.

I guess my response is somewhat AI and somewhat overall technology. We don’t need check-out lines soon. The technology exists to have a credit card on file. Cameras in a store can track your movement and everything you leave the store with for automatic billing. No kiosk or scanning yourself needed.

Kiosks at tables don’t need to be click and drag options. You can now push a button and speak the order. AI translates everything in almost any language to place the order.

Machines are built to make all of the food. AI robots can bring the food to you and ask what else it can order on your behalf. Companies are testing and fine tuning prototypes now.

AI counseling companies are already doing double blind studies - results are promising for AI to be better than live counselors. This will extrapolate to HR departments as well.
 
Also, someone commented above that robots won't be flipping burgers? They will be doing more, as they are already. Even diners on yelp say the food cooked by these robot chefs is quality.


What I hope AI "teachers" do is make learning more streamlined. In a lot of HCOL cities where kids are competing against each other to get into the best colleges, many kids burn out from all the hours they have to be in school and homework.
 
Also, someone commented above that robots won't be flipping burgers? They will be doing more, as they are already. Even diners on yelp say the food cooked by these robot chefs is quality.


.....you mean this place with people behind the counter? A person literally still has to put the ingredients in the bowl in a store specifically DESIGNED for these machines haha check out the video.


Like yes will there be more automated systems for food? Sure. Is that "AI"? Not really.
 
.....you mean this place with people behind the counter? A person literally still has to put the ingredients in the bowl in a store specifically DESIGNED for these machines haha check out the video.


Like yes will there be more automated systems for food? Sure. Is that "AI"? Not really.
A few years ago I wanted to check out AI so I used a former version of ChatGPT and gave it a fairly standard child with ADHD (new diagnosis) clinical scenario and asked it for a treatment plan. It gave me basically a Google handout about stimulants in general. I was not impressed.
I did the same thing with the new version and with a similar clinical scenario and it said options are available but they would start Concerta 18mg for this child based on weight and concerta being a good first line medication (in my program it is almost always our first line.) It then gave specific augmentation strategies for different scenarios such as it not kicking in fast enough in the morning, etc.
Commercially available AI is already at least as good as the average midlevel.

A few years ago my mother received terrible care by a doctor as an inpatient on the medical floor. He completely missed the fairly obvious diagnosis and it caused at least a 2-3 day delay in appropriate treatment. I typed her clinical scenario into ChatGPT and its first diagnosis and plan on the differential was correct. Had AI been my mother’s doctor she wouldn’t have been delirious in restraints for 2 extra days.

I’m wondering why some people here do not like the fact that soon AI will be better than the best doctors (if it’s not already) and it’s most definitely already as good as midlevels and lower tier doctors. If you or a family member is sick and needs care, wouldn’t you want the best care possible? And at first doctors or midlevels will be using the AI so our jobs aren’t going away immediately (although the powers that be might choose the cheaper option of midlevels.)
 
Last edited:
Seems people feel strongly on both sides. I think remembering that AI right now is the worst it will ever be (while it is already mindbogglingly good) makes it pretty obvious none of us can predict what is coming. Look into Google's new VEO-3 video creation tool if you have any doubts that AI is about turn society upside down. VEO-3 Examples

Nevertheless, all of your comments have reminded me to follow my interests either way. Either it's all overblown and I get to do what I actually want, or we're all indeed screwed, and at least I got to try out my true field of interest for a bit.
 
A few years ago I wanted to check out AI so I used a former version of ChatGPT and gave it a fairly standard child with ADHD (new diagnosis) clinical scenario and asked it for a treatment plan. It gave me basically a Google handout about stimulants in general. I was not impressed.
I did the same thing with the new version and with a similar clinical scenario and it said options are available but they would start Concerta 18mg for this child based on weight and concerta being a good first line medication (in my program it is almost always our first line.) It then gave specific augmentation strategies for different scenarios such as it not kicking in fast enough in the morning, etc.
Commercially available AI is already at least as good as the average midlevel.

A few years ago my mother received terrible care by a doctor as an inpatient on the medical floor. He completely missed the fairly obvious diagnosis and it caused at least a 2-3 day delay in appropriate treatment. I typed her clinical scenario into ChatGPT and its first diagnosis and plan on the differential was correct. Had AI been my mother’s doctor she wouldn’t have been delirious in restraints for 2 extra days.

I’m wondering why some people here do not like the fact that soon AI will be better than the best doctors (if it’s not already) and it’s most definitely already as good as midlevels and lower tier doctors. If you or a family member is sick and needs care, wouldn’t you want the best care possible? And at first doctors or midlevels will be using the AI so our jobs aren’t going away immediately (although the powers that be might choose the cheaper option of midlevels.)


I have had very similar experiences asking various LLMs to roleplay as an experienced and thoughtful psychiatrist (often in as many words) and feeding them de-identified patient notes and asking for the advice they'd offer to a junior colleague. GPT 3.5 regurgitated patient handout level information at me in a list format. Claude Sonnet 3.7 gave me some thoughtful analyses that it was able to arbitrarily pivot into different theoretical frameworks when requested. These were good enough that at least one of them did influence my formulation of a particularly difficult patient. I've yet to try this out with the current generation (Claude Opus 4, o3, Gemini Pro 2.5), but the newest opus in particular has been extremely good at other tasks that require lateral thinking so far.

The LLMs powering google search results are tiny, underpowered, antiquated models because Google needs to run them cheap to avoid ruinous compute costs. These are not the A-team.
 
Seems people feel strongly on both sides. I think remembering that AI right now is the worst it will ever be (while it is already mindbogglingly good) makes it pretty obvious none of us can predict what is coming. Look into Google's new VEO-3 video creation tool if you have any doubts that AI is about turn society upside down. VEO-3 Examples

Nevertheless, all of your comments have reminded me to follow my interests either way. Either it's all overblown and I get to do what I actually want, or we're all indeed screwed, and at least I got to try out my true field of interest for a bit.

The "Prompt Theory" videos are genuinely impressive and pretty unnerving. Maybe be the first genuinely artistically creative things produced by AI video models. Links here:

 
So let's take this out to the endgame then. When most all jobs are replaced by AI what do you think humans will do? Do you think the gov is just going to subsidized everything to people so we can live in a society where we do nothing or do you think we'll see mass homelessness and a spiral into dystopia? What do you see as the outcome here? Or have you all just not thought about the basic frameworks of society and the requirement of working classes for societies to exist? These are very basic common sense sociology topics that silicon valley types seem to blow off as an afterthought or completely lack comprehension of while chasing the next project.

Why do you want this to happen? You may be financially well off enough to weather the collapse of working society, but how do you think most people will respond to having no income? How about Gen Z and Gen Alpha? They just going to be batteries for the Matrix, lol? Again, what do you think the near complete implementation of AI leads to? Either you all are grossly overestimating the role AI will play in the near future, you're cheering for the road to dystopia, or you're completely delusional that we'll maintain a functional society. I'm legitimately curious how you (and others) envision what you're hoping for going well.

I completely disagree with Texas on this one and am pretty shocked you're CAP if you didn't see how detrimental the shift to technology-based learning during COVID was. You think going all-in on that is going to go well? Or are you assuming that kids just won't need school anymore because AI is going to do everything for us?
Understand all your points but why are you saying everyone wants AI to do XYZ to society? Seems most people are just trying to objectively assess what is happening and be honest with themselves about the impending future of humanity and react accordingly.
 
Understand all your points but why are you saying everyone wants AI to do XYZ to society? Seems most people are just trying to objectively assess what is happening and be honest with themselves about the impending future of humanity and react accordingly.
I didn't say everyone wants this to happen. I know many people who'd love for AI to just disappear. I was responding to the below comment by Wilf that they and many people want it.

I never said menial jobs would not be replaced by AI. I believe they will be.

It's not fear mongering if I (and many people) want it to happen.


Texasphysician's post responds to your kid comment well. I did child and adolescent fellowship and I fully stand by what I said.
 
Machines are built to make all of the food. AI robots can bring the food to you and ask what else it can order on your behalf. Companies are testing and fine tuning prototypes now.
The biggest impediment to this is that generally robots are very expensive. It's usually cheaper to hire humans to do the last mile stuff. Humans are also way more compact than the sort of robots you'd need to do the cooking in anything that's not a straight up fast-food joint.

I think the greatest threat to our field is if most consumers want sycophancy and expedience rather than expertise and human connection.

I view most AI innovations as being tools that are most likely to make humans more productive or higher quality. You'll still see the PA, they'll still do a physical exam or gather initial history, but they'll also have the AI on the side suggesting any exams or questions they didn't cover yet and recommending next steps for workup.

The chaos of the real world and real patients is best handled by a real human. AI struggles when things get really messy.
 
A couple issues with your post:
1. AI is going to improve exponentially over the coming years. Your critiques of current AI (which we don’t even have access to) have little bearing on what AI will be in 5-10 years.
2. This conversation is more about what will happen, not what we want to happen (although in my case I believe they are the same.) The US and EU are in a race with China to have the most advanced AI.
3. I have not advocated that kids not be in a classroom type of setting and to socialize, even if it is AI that is the teacher. Also, the importance of education will be very different because we won’t need doctors, lawyers, researchers (or not nearly as many of them), teachers, accountants/finance, architects, etc.
4. I know this all sounds sort of science fiction like to many people here, especially those who are a little older, but it is not. The exact timetable is up for debate but every expert I’ve read agree there will be very large changes within 10 years and monumental changes within 15-20.
To respond to this:
1. Fair. We don't know what advancements will be in a decade, so I'll defer there. My question for you is how do YOU know the state of AI and potential progress if we don't have access to it? Is this an appeal to authority? If so, I'd question many statements from authority in tech given their propensity for exaggeration and grandiosity of claims historically.

2. In terms of what will happen (not what we want), why are you so certain about the application of these programs being so monumental? As I referenced earlier, many people in these fields are able to describe wild concepts and theoretical applications but fail miserably in the practical application of said ideas. Look at what has happened with companies like Tesla, Meta, and Apple. I'd argue Apple has been the most successful in the application of ideas but that has mainly been by creating new systems/devices, not the replacement of entire sectors.

3. Genuinely curious what you think the future of education will look like, because it sounds like you're advocating for the end of higher education and jobs that require critical thinking. Also, are you suggesting elementary ed is going to be taught by AI with a type of babysitter/CO there to keep kids on task? What will the human role in education be? If none, how will you herd the cats we call children?

4. I'm probably on the younger side of people here (Millennial), I grew up with as the internet was growing up and in the generation that is able to remember the "old technologies" like landlines while being the ones who ushered in current tech. I'm sure things are going to continue to change dramatically as we see further advancements in technology. What I'm highly doubtful of is the complete (or near complete) replacement of humans by AI as you're suggesting because of the practical aspects of how human societies function at a base level. I never thought I'd say this, but it would do tech heads wonders to take a basic sociology and government/history courses before making grandiose claims (not you necessarily, those 'experts' claiming AI will replace us all).
 
The biggest impediment to this is that generally robots are very expensive. It's usually cheaper to hire humans to do the last mile stuff. Humans are also way more compact than the sort of robots you'd need to do the cooking in anything that's not a straight up fast-food joint.

I think the greatest threat to our field is if most consumers want sycophancy and expedience rather than expertise and human connection.

I view most AI innovations as being tools that are most likely to make humans more productive or higher quality. You'll still see the PA, they'll still do a physical exam or gather initial history, but they'll also have the AI on the side suggesting any exams or questions they didn't cover yet and recommending next steps for workup.

The chaos of the real world and real patients is best handled by a real human. AI struggles when things get really messy.
And the bolded is a huge barrier currently. It is not difficult to trick or derail AI, even quite advanced ones. I posted the article somewhere, but a fairly complex AI that creates companions for humans "forgot" their human spouse during an argument. AI are designed to create ordered information from data input and more advance models from collection, but they're all still dependent on the input. They are not particularly good and dealing with chaos or identifying when they're being manipulated at this point. The old(ish) adage of "sarcasm doesn't travel well through electrons" is still a very real barrier, and it's part of why I'm not concerned about psychiatry as a field anytime soon.
 
4. I'm probably on the younger side of people here (Millennial), I grew up with as the internet was growing up and in the generation that is able to remember the "old technologies" like landlines while being the ones who ushered in current tech. I'm sure things are going to continue to change dramatically as we see further advancements in technology. What I'm highly doubtful of is the complete (or near complete) replacement of humans by AI as you're suggesting because of the practical aspects of how human societies function at a base level. I never thought I'd say this, but it would do tech heads wonders to take a basic sociology and government/history courses before making grandiose claims (not you necessarily, those 'experts' claiming AI will replace us all).
I think the greatest threat to our field is if most consumers want sycophancy and expedience rather than expertise and human connection.

I view most AI innovations as being tools that are most likely to make humans more productive or higher quality. You'll still see the PA, they'll still do a physical exam or gather initial history, but they'll also have the AI on the side suggesting any exams or questions they didn't cover yet and recommending next steps for workup.

The chaos of the real world and real patients is best handled by a real human. AI struggles when things get really messy.
I'll tackle point four very briefly.
  • Be 18
  • Apply to community college for an associates in "Medical Linguistics and AI Prompt Engineering."
  • Finish in 2 years (skip medical school).
  • Have a patient sitting in front of you.
  • AI + EHR integration + Video Camera analysis (so the AI has eyeballs and earholes).
  • Ask the patient in front of you all the questions that the AI instructs, with occasional non-sequiturs. Be paid 60k a year so all profit goes to the tech company.

It would looks something like this: AlphaGo Scene (Skip to 2:30); Lee Sedol representing the patient, and the 20 year old being simply the interface between AI and the patient, providing the "human touch."

I could easily see non-severe mental illness treated well in this way, and physicians would be completely undermined because it would be much cheaper and replicable, and likely better quality care at a broad level in the short term because it's easy to practice below your training in psychiatry while pretending to be above it. I think us millenial's have learned from the roll out of social media that large scale tech implementation has major negative externalities. But the next several generations don't know that, and won't know what life was like before tech integration, so they won't even comprehend the question other than occasional nostalgia for something that looked simpler.

I wonder if the best case scenario for this being halted is an early roll out, with a ton of harms occuring. Much like how nuclear fission, despite being a net positive for very similar reasons was halted in development despite the following decades of safety improvements.
 
Last edited:
Top