It's kind of sad that the "worst" psychiatrists get the best reviews and feedback

I suppose the future might be that instead of a few token medical directors supervising an army of midlevels at big-box places, it becomes supervision of an army of AIs.
A lot of the post I think is doom and gloom, but this I could see, if for no other reason than that when the lawsuits start rolling in, AI companies are going to want physicians as meat shields to take the liability hit instead of paying out themselves. It's a lot easier, more profitable, and lower risk for AI companies to contract with hospital systems and physicians to use their product as a tool vs. actually developing AI programs to become physicians and trying to bill patients for encounters.

If you have actual data or references on this I'd love to see them, because honestly I don't believe it based on the few AIs I've looked into thus far for psychiatric uses (admittedly not that in depth). I'm sure AI will eventually be capable of this, but not soon enough to change the landscape of our field the way NPs have in some areas. That being said, AI seems to be developing faster than experts had previously predicted, so maybe I'm wrong. I just think there is a very significant jump from making determinations based on information that is fed into the algorithm vs. the program gathering and parsing the information on its own and then applying it.
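To spell out what I mean, here's a minimal sketch (in Python) of the first mode, where every piece of information is handed to the model up front by a human. The llm() function is a hypothetical stand-in for a real completion API, and the intake note is invented for illustration:

# Hypothetical stand-in for a real completion API; a real version
# would call out to whatever model provider you use.
def llm(prompt: str) -> str:
    return "[model output would appear here]"

# Mode 1: the clinician supplies all the information in one prompt.
# The model reasons only over what it is given and gathers nothing itself.
intake_note = (
    "42yo M, self-referred. Low mood x6 months, poor sleep, anhedonia. "
    "Denies SI/HI. No prior psychiatric history. PHQ-9: 16."
)

prompt = (
    "Based ONLY on the intake note below, suggest a differential and "
    "list what additional information you would still need.\n\n"
    + intake_note
)

print(llm(prompt))

The jump I'm talking about is from that, where the human does all the gathering, to a program that goes out and collects the information itself.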

I'm not familiar with any work on psychiatry in particular on this. I will say that we absolutely have seen LLMs being deployed as agents performing various tasks. They're not where they need to be for a lot of use cases yet, but they can definitely do things like monitor your screen, use web browsers, create files of various types, do scheduling, solve very difficult math problems, analyze scientific literature, etc. It is also not really accurate to say that what is happening is the models spitting back information that is somehow "put into" the algorithm.
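To make the agent idea concrete, here's a rough sketch of the pattern: the model picks an action (search, read a file), sees the result, and decides what to do next, so it gathers information itself rather than having it all handed over. Everything here (the llm() stub, the tool names) is a hypothetical placeholder, not any particular vendor's API:

# Hypothetical stand-in for a real completion API.
def llm(prompt: str) -> str:
    return "FINAL: [the model's answer would appear here]"

# Illustrative placeholder tools the agent can invoke.
def web_search(query: str) -> str:
    return f"[search results for {query!r}]"

def read_file(path: str) -> str:
    return f"[contents of {path}]"

TOOLS = {"search": web_search, "read": read_file}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = llm(
            "Respond with 'search: <query>', 'read: <path>', "
            "or 'FINAL: <answer>'.\n" + transcript
        )
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        # The model, not the user, decides which tool to run next;
        # its output is parsed, executed, and fed back into the loop.
        name, _, arg = reply.partition(":")
        tool = TOOLS.get(name.strip())
        result = tool(arg.strip()) if tool else "unknown tool"
        transcript += f"\n{reply}\n-> {result}"
    return "[no answer within the step budget]"

print(run_agent("Summarize recent findings on lithium monitoring."))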

As for finding information and applying it, you might want to take a look at Perplexity or DeepSearch if you have access to it. The latter is particularly interesting, and between the two of them I use Google less and less these days.


This is where I see the current and upcoming uses being applied, which goes against the idea that AI is going to replace us anytime soon. If experts, or at least knowledgeable individuals, are necessary to actually make these programs effective in the ways we expect them to be, then there should be no (immediate) concern about these programs replacing us. That said, when I was pre-med and in med school, everyone was talking about how NPs were going to be physician extenders and physicians would be like coaches with NPs working under them, but we see what happened with FPA in many states and with systems replacing physicians with mid-levels. Idk if history will repeat with AI, but I don't think it will be that soon.

At the risk of being a broken record: these AIs are currently as bad as they will ever be at these tasks. There is only one direction this goes. It's possible they hit some kind of wall and don't get beyond a certain point, but I just don't think most people really understand what big changes may come just from widespread adoption of existing models. Most people played with GPT-3.5 for like an hour and base their understanding of what these things can do on that. Truly enormous progress has happened in the couple of years since then.
 
I think the other aspect not being considered is that the generation that is young now will for sure be more willing to talk with an AI agent, and as time goes on, that willingness will only increase. Eventually it will be irresponsible NOT to use AI, and that horizon is not far away. Pick your poison: liable for not listening to AGI, or liable for listening to it.
Give an LLM the bulk of your writings, and then question it for insight, and you may be surprised at how effective it really is at picking up on things you didn't know yourself.
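For anyone who wants to try that, the workflow is roughly this, assuming your writings fit in the model's context window. The llm() function and the my_writings/ directory are made-up placeholders for illustration:

from pathlib import Path

# Hypothetical completion API stand-in.
def llm(prompt: str) -> str:
    return "[the model's observations would appear here]"

# Concatenate the bulk of your writings into one context block.
corpus = "\n\n---\n\n".join(
    p.read_text(encoding="utf-8")
    for p in sorted(Path("my_writings").glob("*.txt"))
)

question = (
    "Based on the writings below, what recurring themes, blind spots, "
    "or patterns do you notice that the author may not see themselves?"
)

print(llm(question + "\n\nWritings:\n" + corpus))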
 
If it's the ADHD or self-diagnosed autism that's bugging you, transition to county/urban inpatient. No risk of TikTok if you don't have a phone.
Very true, and I've kind of had passing thoughts of doing some inpatient time, but I still like my current gig and the potential for higher income is still there. I get plenty of time with my family, a 3-day weekend, and great income for psychiatry. It just feels like we're fighting a culture war, with patients coming in self-diagnosing and then leaving bad reviews if they can't "have it your way" Burger King style, as many have stated on here. I'm planning on staying where I'm at for the foreseeable future. We just bought a house, the kids are pretty entrenched in a good school system, we have a good church, and all of it (my work, the kids' school, church) is literally within a 15-minute radius of my house. We have some family (my sister) a little over an hour away that we're close with (their kids are literally the same age/gender breakdown as mine, so the cousins are close too), and my parents may be moving here near me and my sibling, which will be awesome if they ever make it (easy built-in babysitter lol). I am truly happy with where I'm at in life; I just see the things I mentioned (plus a few others) as annoyances more than anything else. It's not even close to where I was with the military, where I could absolutely not see myself staying a second longer than what I committed to when I signed up. Still loving my decision to go into psychiatry, and I can see myself doing this for quite a few more years.
 
That's great, but that's all analysis, and searching through electrons is not the same as observing and analyzing a physical, organic world. That's where I see a huge leap needing to happen for this to really be applicable in the way we're discussing (i.e., working with patients too encephalopathic or psychotic to interact with technology).

That's fine when we're talking about depression, anxiety, or other conditions where patients are cognitively intact and able to interact with these programs. Though in the few cases I've seen where the chats were released, the AI seems to just be more of a mirror or echo chamber for what the user wants to hear than something that will guide effective therapeutic interventions (for now). Again, this is different from an AI itself being an effective physician for a patient who is incapable of appropriate interaction d/t encephalopathy or some other cognitive impairment. What I believe you're describing still requires input and some level of interaction from either an expert or the patients themselves; I'm talking about patients who aren't capable of this.
 