AI answers patient questions better than doctors


DrMetal
To shred or not shred?
Lifetime Donor · 15+ Year Member
Joined: Sep 16, 2008 · Messages: 3,009 · Reaction score: 2,495
Here it is, the beginning of the end...

Hopkins study finds ChatGPT answers patient questions better than doctors


 
Well, ChatGPT does not have to squeeze all these mundane questions into a 15-minute annual well visit in which the primary care doctor also has to chase down that BI-RADS 3 that has not been followed up in years, that tubulovillous adenoma that no one has followed up on in years, that 1.5 cm lung nodule that no one paid attention to... or that prior auth request for Ozempic for weight loss, a Flector patch because pills are evil, that Ambien/Klonopin dual combo renewal "or else I'll melt down," etc.

So there you go. If doctoring were akin to palm reading / tarot card reading, then ChatGPT has us all beat. Good job, AI.

ChatGPT, give me my Oxycodone or I'll **** you up!!!


On second thought, I would totally be okay with having Dr. ChatGPT talk to those patients. 3 points swish. Curry ~
 
  • Like
Reactions: 3 users
3 points swish. Curry ~

Game 1 tonight, going down, Lakers are hot.

I don't think we're quite there yet with AI, but we will be soon. The question is: does the customer (the patient) care who they're talking to, as long as they get the advice they want?
 
  • Like
Reactions: 1 user
The ChatGPT response was, on average, 4x longer than the physician response, which is probably why. The real question that should have been asked is: were the responses equivalent? Did ChatGPT actually give appropriate advice, or just long advice that patients liked even if it was wrong?
 
  • Like
Reactions: 4 users
ChatGPT is extremely overrated. I’ve seen a lot of “medical” output from ChatGPT, including “answers” where it cited fabricated journal articles that never actually existed…yeah.

AI is coming, but it’s still a ways off. This ain’t it.
 
  • Like
Reactions: 2 users
Easy to give verbose answers and hand-hold for 10 minutes when you're not spending the minutes of your mid-adult life trying to make up for the time, money, and opportunity costs of med school and residency (+/- fellowship).
 
  • Like
Reactions: 1 users
ChatGPT is extremely overrated. I've seen a lot of "medical" output from ChatGPT, including "answers" where it cited fabricated journal articles that never actually existed… […]

Here's the problem with Western medicine (especially in the United States): much of our medicine is "nothing-burger" medicine (patients seeking medical care for vague MSK pains, psychosomatic symptoms, crashing the ER for nonsense, administrative things, etc.).

Hence why, in medical education and training, simple "reassurance" is often the right answer (not the scripting of antibiotics, advanced imaging, etc.).

Artificial intelligence bots can quite formulaically construct such "reassurance," or handle such non-complex, nothing-burger medicine.

If you're a salaried physician, you'd love to lose some of this silly volume. But if you bill and depend on it, then there may be a problem.
 
  • Like
Reactions: 1 users
Here's the problem with Western medicine (especially in the United States). Much of our medicine is "nothing-burger" medicine (patients seeking medical care for vague Msk pains, psychosomatic symptoms, crashing the ER for nonsense, administrative things, etc)

Hence why in medical education and training, simple "reassurance" is often the right answer (not the scripting of antibiotics, advanced imaging, etc).

Artificial Intelligence bots can very (formulaically) construct such "reassurance", or handle such non-complex, nothing-burger medicine.

If you're a salaried physician, you'd love to lose some of this silly volume. But if you bill and depend on it, then there may be a problem.
The problem is that a lot of this type of “reassurance” has to be delivered with empathy (or, at least, pseudo-empathy) by someone in a white coat if it is to be taken seriously.

Reading boilerplate off a computer screen is not going to satisfy this contingent of needy, resource-sucking patients. (And I say this as a rheumatologist who has little desire to listen to needy people whine at him all day, and who does everything he can to block referrals when I get the sense that it's this type of patient.)
 
  • Like
Reactions: 1 user
The problem is that a lot of this type of "reassurance" has to be delivered with empathy (or, at least, pseudo-empathy) by someone in a white coat if it is to be taken seriously. […]

That is the ultimate question: will patients accept it? It is, without question, mostly pseudo-empathy. Doctors are not empathetic; we're good at faking it. So then, why can't a bot do the same?
 
  • Like
Reactions: 1 user
As I run a private practice and the objective is to make a profit (though my philosophy is never to cut corners and always to meet all patient needs first and foremost), I try not to have patients physically come into the office unless they are having procedures done.

Rather, for any patient who needs a follow-up on symptoms or who just has a lot of questions, I make a lot of phone calls and I open myself up to emails at all odd hours of the day and night. The phone calls also help me generate some 99441-99443 revenue. If the patient tells me, "Thanks doc, I'm all better. You were right! Sleeping with the head of the bed elevated 30 degrees on a wedge pillow cured my cough better than any PPI or inhaler," then I say okay, nice. RTC PRN. If they're still not doing well, I bring them in again for bronchoprovocation testing and possibly initiate a PA on a CTC. Boom, revenue generated.

If someone has vague dyspneic symptoms on day 1, then after physical exam, EKG, echo (I have an in-office echo tech; the cardiologist writes the report and bills, not me, but I get real-time images, M-mode, and Doppler measurements on a portal, which I know my way around for a non-cardiologist, as I read lots of echo textbooks and watch videos for fun), PFTs, FeNO, and radiology imaging, I call them in 2 weeks after some empiric albuterol or something to check on them. If they are not better, I schedule them for CPET. Boom, revenue regenerated. The patients also enjoy this extra phone call, as they feel it shows I care (which I do... about the patient's well-being, about not being bothered with incessant phone calls, and about generating revenue. Gotta have that cake and eat it too).

Email is the big one. I start gigantic email threads, which do not take up precious office time (CPT code time), and I can link YouTube videos, social media links, UpToDate patient education links, etc. The patients who can communicate well over email appreciate this very much. It helps my patient satisfaction ratings quite a bit. Now, I couldn't care less about getting 5 stars. But I'm not letting any irritable patient 1-star me over some perceived slight and slander me.

Aside from the primary and secondary gain patients, I have found that most patients just appreciate an open avenue of communication with the doctor. They don't necessarily want you to spend a full hour with them in the exam room, per se. They just want their questions answered and their fears allayed. I make it clear to patients that they can use email to contact me (so I can quickly address these nothing-burger issues). While I do not (and cannot) bill for emails, this clears up my office time for what office time was meant for: evaluation and management, and lots of procedures.

And the empathy vs. pseudo-empathy question is big. Saying nice words and simply working to do the right thing, taking the hard path for the patient (even if not out of true empathy and personal care for the patient) to ensure the best outcome, are usually enough.

Thank goodness for the face mask (even though COVID is not as big an issue as before, I mask up because I don't want the regular viruses, TB, or other respiratory bacteria taking me out of action), because I don't have to smile as much and no one can tell.

I'm sure my Family Medicine professors from med school would be aghast, lol. They demand pure empathy, hand-holding, and patient pampering, and cannot stand this pseudo-empathy. My retort: you're a med school professor and I am not. *failed GSW comeback. LeBron GOAT. Swish~
 
As I run a private practice and the objective is to make a profit, I try not to have patients physically come into the office unless they are doing procedures. […]
I am the exact opposite.

I want as much communication and discussion to happen inside the exam room as possible (and as little outside as possible).

I address issues at visits. I do anything I can to avoid phone calls and MyChart messages etc. If it’s anything more than a one-liner or something else that can be relatively straightforwardly figured out, you’re coming in to see me.

I’ve heard of doctors doing this, and unless you’re DPC or concierge or something, I don’t know how these people keep their sanity. If I opened the communication door to needy rheumatology patients, they would take every last second of my spare time. I would lose my mind.

And academicians live in the twilight zone. So little of what they focus on or care about reflects reality.
 
  • Like
Reactions: 4 users
That is the ultimate question, will patients accept it? It is without question, mostly pseudo-empathy. Doctors are not empathetic. We're good at faking it. So then, why can't a bot do the same?
If the bots look like humans, then bots can do the same.
Patient empathy sessions are not Turing tests for a bunch of replicants in Blade Runner. You literally just need a well-optimized chatbot with a humanoid appearance to fool 99.999999% of patients.
 
  • Haha
Reactions: 1 user
I am the exact opposite. I want as much communication and discussion to happen inside the exam room as possible (and as little outside as possible). […]
I hear you. I answer emails at all hours of the day and on my time off. I rationalize that it's the only way I can maintain a steady flow of patients for procedures in the office. Plus, I find multimedia much better at explaining things than I ever could. Even though I do take the requisite time to outline the assessment and plan (with what if this, that, or the other happens), sometimes it's better for someone to see a video on OSA that is dynamic and easier to understand than my just talking for a few minutes.

"Plus, doctor, I have some vague nonspecific breathing issue." Sure, come on in for Part 2 of procedures!

I understand for rheum how it would be painful to respond to all of those things....

I never do that "you're fine, leave, goodbye. Can I have my money now?" routine that some private practice docs do....
 
I am a little surprised at some of the aplomb here... maybe I am too pessimistic. I don't think AI will be independently admitting gomers anytime soon. The real erosion of our jobs is that some big health corp is going to use a language model, then hire 10 NPs and 1 MD. The AI will bridge the knowledge gap for 90% of cases, and the midlevel will rubber-stamp the AI findings. The MD will get involved if the patient is not improving, act as a liability sponge, or just chart-check a bazillion charts a day. This will become more and more pervasive until we all get tricorders implanted in us.

I don't think AI hallucinations are going to be a problem even in the short term. ChatGPT gets all the glitz and glamour, but look at Med-PaLM 2 from Google. An LLM with nearly 100% accurate info and access to labs/imaging/notes will surely be a disruptive tool. It doesn't even have to be midlevels attacking us... I think the majority of hospital medicine can be protocolized. The future may be that you see 70 patients a day, with the labs summarized by AI, the note done by AI (probably better than most human docs, tbh), a summarized/fast-forwarded video of the patient encounter presented to you, and you just click sign to finalize everything.
 
  • Like
Reactions: 3 users
The future may be that you see 70 patients a day, with the labs summarized by AI, the note done by AI (probably better than most human docs, tbh), a summarized/fast-forwarded video of the patient encounter presented to you, and you just click sign to finalize everything.
I'm not even saying I disagree with you… but in this future world, what jobs would still exist, in your mind?

I also wonder if the cost of using the AI will even make the theoretical efficiency boost worth what I suspect would be a massive overhead cost… For example, even though the EMR should theoretically increase efficiency, I recall that the real push away from paper charts came from penalized reimbursement. Would CMS have to require AI use also?
 
I am a little surprised at some of the aplomb here… The real erosion in our jobs is that some big health corp is going to use a language model, then hire 10 NPs and 1 MD. […]
If this AI scenario comes to fruition, what would happen is that midlevels would cease to exist. The few positions required in hospital medicine would be filled by docs, who will work their butts off with the help of AI. These docs will likely take huge pay cuts in order to even have a job. Why would a hospital hire a big team of NPs and one doc when it can just have a small handful of desperate docs who will work 80 hours a week?

NPs would go back to the bedside, since nursing pay is almost at parity with midlevel pay but with higher upside and flexibility.
 
I am a little surprised at some of the aplomb here… The real erosion in our jobs is that some big health corp is going to use a language model, then hire 10 NPs and 1 MD. […]
Look at all the ****ing ransomware **** that happened with COVID: hospitals had their entire EMRs disabled because of crappy security (Trends in Ransomware Attacks on US Hospitals, Clinics, and Other Health Care Delivery Organizations, 2016-2021). You think they are going to let a machine make serious decisions with only hospital-level IT safeguarding it from being hacked? Imagine Dr. PaLM 2 being infected with Chinese spyware; the hospital system could be bankrupted. Even worse, imagine the Chinese implant a virus that corrupts it to the point where it is making mistakes and hurting people; the entire system goes bankrupt again. No way in our lives does this ever happen unless the nature of cybersecurity changes drastically.
 
  • Like
Reactions: 4 users
The future may be that you see 70 patients a day,with the labs summarized by AI, note done by AI (probably better than most human docs tbh), a summarized/fast forwarded video patient encounter presented to you, and you just click sign to finalize everything.

Yeah, this, pretty much. The only question that remains is: would you have to physically see the patient (do an exam)? As soon as we admit that the physical exam is BS in 21st-century medicine (where all diagnoses hinge on objective labs/rads), then indeed we'll each be rounding on 75+ patients.

Look at all the ****ing ransomware **** that happened with COVID--hospitals had their entire EMR disabled because of crappy security

Interesting you brought this up. There are a lot of computer scientists who think AI will result in the end of the Internet and possibly any form of networked computing. A hacker used to be a guy who sat at a terminal and hacked. Then he was able to write 1,000 script bots that could do the hacking.

Now imagine AI scripts that can generate more scripts (and that are adaptable). The hacking problem grows by orders of magnitude: 10^6, 10^9, maybe even 10^12, essentially flooding all of our systems and rendering them useless. We'll go back to paper charting!
 
  • Like
Reactions: 3 users
Yeah, this, pretty much. […]

There's a lot of computer scientists who think AI will result in the end of the Internet and possibly any form of network computing. […]
So how can we simultaneously believe that, and also that AI is going to perform all of these critical functions?

Intent is why human control over crucial functions remains important, and why things like launching nukes and driving cars are not delegated to computers. When I see planes flying themselves without pilots using AI, I'll start to sweat it out in healthcare. Until then, AI automation in healthcare won't ever move up to the MD level to any serious degree in this country; our liability is way too high. I can see it happening in countries where liability doesn't exist and volume is king (India, China), but not here.
 
So how can we simultaneously believe that and also that AI is going to perform all of these critical functions? […]

We do have self-driving cars, and nukes are very much controlled by computers.

It's going to happen (correction: it is happening) in medicine as well. What's driving it? Cost savings. If your hospital pays 15 hospitalists to round on 200 patients, wouldn't it love to implement some combo of AI and midlevels, and then employ only 4 hospitalists, each carrying a list of 50 patients?

It sure would. Liability? It's still there. Now that physician is liable for 50 patients.
 
  • Like
Reactions: 1 user
We do have self driving cars, and nukes are very much controlled by computers. […]
Cars are not driving themselves... There is a facsimile of this, but it only functions under ideal conditions. Have you seen a car driving itself in the snow or on an unmarked dirt/mountain road? Have you seen a plane land itself? Nuke launches require manual inputs (Two-man rule - Wikipedia). These are examples of critical tasks that we do not delegate to machines because of the risk involved, and healthcare decisions are similar. Would you fly on a plane if it was cheaper but had a remote pilot 'supervising' 100 planes at once?

I know you are the champion pessimist of the SDN IM community, but even this seems a bit over the top. Where is this happening? Which hospital is announcing that it has implemented AI as part of its routine healthcare delivery? Until we see China/India replace their doctors, we are going to be fine.
 
  • Like
Reactions: 1 users
I know you are the champion pessimist of the SDN IM community but even this seems a bit over the top… […]

I've never been called a champion of anything, but I'll take any accolades I can get. And I prefer 'realist' over pessimist.

It's certainly not going to happen overnight, but it'll be slowly implemented. @end stage fibro 's example above is a really good basic implementation: we already have auto-generated notes (rads and labs autopopulate; we have dot phrases for CHF, COPD, ESRD, etc.); sometimes you can hardly tell where the human comes in and writes something. Implement a simple AI chatbot that writes more humanly and scripts all of it, and you have an automatic note generator. A human doctor can QC 50 of these, hence 'rounding' on 50 patients. Fewer doctors needed means cost savings for the machine.

I can see this very easily being implemented in the next 10 years.
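The "auto note generator" pipeline described above (structured labs/rads plus dot phrases assembled into a draft, with the physician only signing off) can be sketched in a few lines. Everything here is invented for illustration: `DOT_PHRASES`, `draft_note`, and `physician_qc` are hypothetical names, no real EMR is involved, and a plain template stands in for the AI step that would smooth the text into prose.

```python
# Hypothetical sketch: structured data in, draft note out, human QC last.
# A plain template stands in for the "AI chatbot that writes more humanly."

DOT_PHRASES = {
    "CHF": "Continue guideline-directed medical therapy; daily weights; low-salt diet.",
    "COPD": "Continue inhalers; smoking cessation counseling; follow-up PFTs.",
}

def draft_note(patient, problems, labs):
    """Assemble a draft progress note from structured inputs."""
    lines = [f"Progress note for {patient}"]
    lines.append("Labs: " + ", ".join(f"{k} {v}" for k, v in labs.items()))
    for p in problems:
        # Fall back to a generic plan when no dot phrase exists.
        lines.append(f"{p}: {DOT_PHRASES.get(p, 'Plan per attending.')}")
    lines.append("Draft generated automatically; pending physician review.")
    return "\n".join(lines)

def physician_qc(note, signer):
    # The human-in-the-loop step: the doctor reviews and co-signs.
    return note + f"\nReviewed and signed: {signer}"

note = physician_qc(
    draft_note("Patient A", ["CHF", "COPD"], {"Na": 138, "Cr": 1.1}),
    signer="Dr. Example",
)
print(note)
```

The point of the sketch is the shape of the workflow, not the template: the generation step is cheap and parallel, so the only human bottleneck left is the QC/sign step, which is how one doctor ends up "rounding" on 50 drafts.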
 
I've never been called a champion of anything, but I'll take any accolades I can get. And I prefer 'realist' over pessimist. […]
And the privacy/security concerns will also be a non-issue then too? Patients will be ok seeing computers instead of physicians?
 
  • Like
Reactions: 1 user
And the privacy/security concerns will also be a non-issue then too?

Of course privacy/security concerns are an issue. But with the cost savings afforded by AI, the corporate machine will be willing to take that risk. In fact, you're probably going to need AI-based systems to fight off the AI-based hackers (the "fight fire with fire" approach). [AI is going to disrupt privacy and security across multiple sectors, medicine included. I'm far more concerned about the financial sector in this regard. Imagine your banking system getting pinged 10^9 times per day instead of the 10^2 it sees today.]

Patients will be ok seeing computers instead of physicians?

Eh, you'd be shocked at what patients will become accepting of, especially the younger generations (those born in this century). If they could present to a vending machine, get their vitals taken, and have a Z-pack dispensed to them, they'd be just as satisfied and content. Of course this isn't good medicine, but nobody cares anymore. The corporate machine certainly doesn't, especially if the cost savings are substantial.
 
  • Like
Reactions: 1 user
I'm not even saying I disagree with you… but in this future world what jobs would still exist in your mind? […]

I am not even sure what my kid's college will cost in 18 years (free vs. $100 trillion?); I have no idea about the job market. The 70+ patient rounding scenario might seem a little unhinged, but I really don't think it is too far off in a dystopian, end-stage scenario. We are already seeing shots across the bow in the job market: the writers' strike, the music industry and AI singles (Drake/The Weeknd), etc.

If this AI scenario comes to fruition what would happen is that mid levels would cease to exist. […]

No argument here.

Look at all the ****ing ransomware **** that happened with COVID--hospitals had their entire EMR disabled because of crappy security. […]

Hospital-level IT is crap, no surprise. 0-day exploits against pacemakers have been suspected since the early 2010s, and critical flaws have been demonstrated since then. I wish I could find the talk I heard, but it mentioned that there is probably already low-level espionage against healthcare. It does not involve your screen flashing red and yellow, demanding 50 bitcoin; it's more like changing the calibration of the lab analyzer so your sodium levels are off. I think the difference in this scenario is that every medical decision is backstopped by someone with MD/DO after their name. The risk probably isn't an issue when you're seeing 20 patients... but will it be when you are seeing 40? 50?


I've never been called a champion of anything, but I'll take any accolades I can get. And I prefer 'realist' over pessimist.

It's certainly not going to happen overnight, but it'll be slowly implemented. @end stage fibro 's example above is a really good basic implementation: we already have auto-generated notes (rads and labs autopopulate, we have dot phrases for CHF, COPD, ESRD, etc.)--sometimes you can hardly tell where the human comes in and writes something. Implement a simple AI chat box that writes more humanly and scripts all of it, and you have an automatic note generator. A human doctor can QC 50 of these, hence 'rounding' on 50 patients. Fewer doctors needed means cost savings for the machine.

I can see this very easily implemented in the next 10 years.
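The auto note generator described above can be reduced to a toy sketch: structured data (labs, problem list) autopopulates a template, and a language model fills in the narrative. Everything here is made up for illustration--`draft_narrative` is a stub standing in for whatever LLM would actually be called, not a real API.

```python
# Toy sketch of the "automatic note generator" idea: template + structured
# data + a stubbed narrative generator. Not a real EMR integration.

NOTE_TEMPLATE = """Subjective: {narrative}
Labs: {labs}
Assessment/Plan:
{plan}"""

def draft_narrative(problems):
    # Placeholder for an LLM call; here we just stitch the problems together.
    return "Patient seen for " + ", ".join(problems) + ". Stable overnight."

def generate_note(patient):
    # Labs autopopulate from structured data, like dot phrases do today.
    labs = "; ".join(f"{name} {value}" for name, value in patient["labs"].items())
    plan = "\n".join(f"- {p}: continue current management" for p in patient["problems"])
    return NOTE_TEMPLATE.format(
        narrative=draft_narrative(patient["problems"]),
        labs=labs,
        plan=plan,
    )

note = generate_note({
    "problems": ["CHF", "COPD"],
    "labs": {"Na": 138, "Cr": 1.1},
})
print(note)
```

The point of the sketch is the QC model: a human doctor reads 50 of these drafts rather than writing 50 notes from scratch.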

This path really is inexorable to me. Even now, when I teach the residents, their default answer is to click every box in the order set. I don't know if I am becoming a grizzled, old curmudgeon stereotype or they are getting dumber. Either way, if we are going to just check every box, there's no need for MD-level nuance and understanding. It is like the best practice advisories that pop up. I don't even know what they say; I hit Esc or click exit asap. Even the ones that need a response, I type dsjfls and hit enter. No one has said ****. This has been true for years across 3 hospital systems. I see a clear path to this end point.

Come to think of it, it's definitely the youths that are getting dumber.

Eh, you'd be shocked at what patients will become accepting of, especially the younger generations (those born in this century). If they could present to a vending machine, get their vitals taken, and have a Z-Pak dispensed to them, they'd be just as satisfied and content. Of course this isn't good medicine, but nobody cares anymore. The corporate machine doesn't care, especially if the cost savings are substantial.

A friend of mine already does a lot of async online medicine. Do you think his patients are lining up to see him, waiting months on end? No! He is a means to an end. If they could click a button requesting meds, I am sure they would do that instead.



Medicine is in a position where non-MD labor is getting more and more costly, both professional and hospital reimbursement is dropping (by design), and prestige is at an all-time low. Sure, this may apply to a lot of industries, but I think the key difference is that the physician is in the unique spot of being the cornerstone of all this. Before, that was probably an enviable spot to be in. I think now it just makes us a target: do more, with less, for less pay, with higher risk; don't forget the customer is always right, and no more cold pizza. The centrality of our position is what corporations like HCA will leverage against us. AI in my eyes just happens to be a very long, rigid lever.


lakers in 4
lakers in 4


Denver will be tough, but I think we can do it if AD and Lebron play consistently. Definitely no room for mistakes. If we can get Game 1 or 2, we'll be in business.

Every time change comes about in medicine, we seem to take the "that's-never-going-to-happen" mindset: "We're never going to have mid-levels practicing independently, we're never going to carry more than 12 patients on a list, we're never going to do telehealth, remote health, [and now] we'll never be replaced by AI."

And then it happens, and we all stand aghast.

It would be nice if we could come together as a community of physicians and try to stop the freight train coming our way, or at least set down some ground rules. But we don't. We're a passive bunch more concerned about doing 25 questions a month to satisfy some BS MOC requirement.

Watch, in 10 years: we physicians are dumb enough that we'll even create a Fellowship in AI and make a BC out of it. To see a fat patient, you'll have to be BC'd in 'Obesity Medicine'. To use the EMR, you'll have to be BC'd in AI.
 
The counter-argument is that people hype **** up in tech all the time that ends up being a nothingburger. If generative AI actually ends up being a thing, all it will take is one colossal IT ****up for it to be legislated into oblivion, where it belongs. If it ever ends up being a thing, we will see it emerge in less-regulated countries first, so bump this thread when we see that happening.
 
Denver will be tough, but I think we can do it if AD and Lebron play consistently. Definitely no room for mistakes. If we can get Game 1 or 2, we'll be in business.

Every time change comes about in medicine, we seem to take the "that's-never-going-to-happen" mindset: "We're never going to have mid-levels practicing independently, we're never going to carry more than 12 patients on a list, we're never going to do telehealth, remote health, [and now] we'll never be replaced by AI."

And then it happens, and we all stand aghast.

It would be nice if we could come together as a community of physicians and try to stop the freight train coming our way, or at least set down some ground rules. But we don't. We're a passive bunch more concerned about doing 25 questions a month to satisfy some BS MOC requirement.

Watch, in 10 years: we physicians are dumb enough that we'll even create a Fellowship in AI and make a BC out of it. To see a fat patient, you'll have to be BC'd in 'Obesity Medicine'. To use the EMR, you'll have to be BC'd in AI.

Denver is the first team I am really worried about. Joker has just been unreal, and they have played at a high clip all season. AD's consistency is the key weakness. I think Ham's adjustments and LeBron's on-the-floor coaching (like blowing up the zone when GS tried it) will be interesting to watch.

I agree about the never-going-to-happen mindset. Medicine has the double-edged sword of things taking so long to happen that it allows us to rest on our laurels a little too long. Physicians are too balkanized a group to make any headway, which is bad because everyone else sees us as a monolith.

Edit: these note support tools are nearly here--MModal Engage One, and Nuance has one whose name I forget (and they have Microsoft/OpenAI behind them). They are limited now due to the OIG and some HIM guidelines, but the unfettered software gets a majority of the diagnoses from reading the chart. You don't think the revenue cycle folks see that with eyes wide open? This is the software that is getting purchased and developed, not a better version of Epic.


The counter argument is that people hype **** up in tech all the time that ends up being a nothingburger. If generative AI actually ends up being a thing all it will take it one colossal IT ****up for it to be legislated in to oblivion where it belongs. If it ever ends up being a thing we will see it emerge in less regulated countries first so bump this thread when we see that happening.

I agree there is a long list of tech corpses once promised to be the next big thing in healthcare: blockchain, offshore telehealth, Haven Healthcare, etc. AGI, I think, would be immediately actualized. Again, I don't think R. Bot MD is going to be getting an H&P, then discharging a patient, then rounding on its own anytime soon. I for sure believe AI decision and note support will be here sooner rather than later. That alone will be a big shock.

I dunno how I feel about one IT f-up killing AI by suffocating it with red tape. Healthcare is 18% of GDP. There are some big players in there trying to get a lot of the pie and trying to keep the pie big. I am also ambivalent about a less-legislated country being the early adopter; I feel like they don't have the economic pressure, and maybe not the resources, for it either.
 
bump this thread when we see that happening.

By then, I won't be able to bump this thread, b/c SDN and all its users will also be AI-generated. Who knows, maybe I'm an AI bot right now [and they said a computer could never emulate sarcasm or cynicism].
 
Here it is, the beginning of the end . . .

Hopkins study finds ChatGPT answers patient questions better than doctors

This forum is free? If they charged $200 for each answer, I am sure the human providers would perform much better than any machine...
 
Everyone is missing the best use for AI, and we could do it tomorrow with what's currently available: responding to patient portal messages, especially the ones that are over 300 words.

If the message requires just a yes or no answer, I'm happy to do that.

Anything else, let Chat GPT handle it.
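The triage rule proposed above is simple enough to write down literally: anything at or under the word limit goes to the physician, anything longer goes to the bot. The 300-word cutoff comes from the post; the function name and return labels are made up for illustration, and there is no real ChatGPT call here.

```python
# Half-joking sketch of the portal-message triage rule: short messages
# stay with the physician, long ones get handed to the LLM.

WORD_LIMIT = 300  # cutoff suggested in the post above

def route_portal_message(message: str) -> str:
    """Return who should answer: 'physician' for short messages, 'llm' otherwise."""
    return "physician" if len(message.split()) <= WORD_LIMIT else "llm"

print(route_portal_message("Can I take ibuprofen with this?"))  # short -> physician
print(route_portal_message("my symptoms " * 200))               # 400 words -> llm
```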
 
Everyone is missing the best use for AI and we could do it tomorrow with what's currently available: responding to patient portal messages, especially the ones that are over 300 words.

If the message requires just a yes or no answer, I'm happy to do that.

Anything else, let Chat GPT handle it.
Also give it instructions to write 3x the length of whatever the patient wrote
 
I hope it can answer their stupid ****ing questions so they don’t call me every day asking me.
 