This....is....REAL???
Legally speaking in many states, perhaps, but they are vastly better at driving a car than human drivers. The most recent empirical data for Waymo's fleet showed an 88% reduction in property damage claims, a 92% reduction in bodily injury claims, and 90% fewer collision-related claims than human drivers making similar trips. If you were comparing the safety of human and AI drivers de novo, in a world where cars had just been invented, you would be seen as grossly irresponsible for opting for the human. This is simply inertia, quite honestly.
Good Lord.
AI/machine learning algorithms aren't even considered sophisticated enough at present to independently operate a motor vehicle safely.
How can an entity that isn't qualified for a driver's license be considered qualified to be licensed to prescribe medications safely and operate as a responsible 'practitioner?'
Absolutely ridiculous. I don't think these people have really thought this through. From a more practical standpoint, if AI becomes an independent practitioner, who is getting sued if something goes wrong? I have a feeling AI companies are either going to have to lobby hard to address this or eat a lot of lawsuits, potentially major class action suits, at some point.
Current LLMs are far too easy to jailbreak and generally vulnerable to adversarial interactions for this to be plausible. But this is not the last attempt we are going to see to push for this.
Of everyone in mental health, I would predict that these models are going to eat BetterHelp et al.'s lunch first. The current context windows on something like Gemini Pro really do offer the possibility of the model remembering literally every word you have exchanged with it in "therapy" and everything you ever told it about yourself. It wouldn't even have to be text; these models are perfectly capable of having an audio-only conversation. The visual processing is still a work in progress, but this is moving far more quickly than anyone would have imagined three years ago.
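Mechanically, that "memory" is less mysterious than it sounds: with a long context window, the entire transcript can simply be replayed to the model on every turn. A minimal sketch of the idea (the class name and prompt format here are illustrative, not any vendor's actual API):

```python
class ConversationMemory:
    """Toy illustration: long-context 'memory' is just replaying the
    full transcript into the model's context window on every turn."""

    def __init__(self) -> None:
        self.turns: list[tuple[str, str]] = []  # (speaker, text) pairs

    def add(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))

    def render(self) -> str:
        # Everything ever said becomes part of the next prompt.
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)
```

Nothing is "remembered" in the model itself; the transcript just never falls out of a million-token window the way it would have with older, smaller contexts.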
Also, this is getting the cart quite a bit ahead of the horse... I would think a major question is how you are going to give an AI program the ability to do something that currently requires a license to do independently (e.g., practice medicine) without first working out who or what can be licensed to practice medicine.
Idk man Gemini 1.5 pro still calls F90.2 "hyperkinetic disorder" I'm never terribly impressed when I play with these things.
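For what it's worth, part of the confusion is that the WHO's ICD-10 titles the whole F90 block "Hyperkinetic disorders" (with no F90.2 subcode), while the US clinical modification (ICD-10-CM) assigns F90.2 explicitly. A quick lookup sketch, with descriptions abbreviated from the coding manuals:

```python
# WHO ICD-10's F90 block is titled "Hyperkinetic disorders" and has no
# F90.2 subcode; the US ICD-10-CM assigns F90.2 explicitly.
ICD10_CM = {
    "F90.2": "Attention-deficit/hyperactivity disorder, combined type",
    "F41.1": "Generalized anxiety disorder",
}

def describe(code: str, table: dict[str, str]) -> str:
    # Fall back to an explicit marker rather than guessing, which is
    # exactly what an LLM tends not to do.
    return table.get(code, f"unknown code: {code}")
```

A deterministic table refuses to invent an answer for a code it doesn't know; the model instead reached for the closest-sounding WHO category label.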
Why don't they propose AI to run the government?
Probably do a better job than these clowns in power.
Imma write in ChatGPT for the next election. I'm sure we can get enough trolls to go along with this to actually make it happen somewhere...
There is the pesky "democracy" thing . . .
This is the likely workaround I see happening. AI programs are bought by hospitals or clinics, where physicians are required to supervise and sign off on plans made by the AI programs, possibly including AI-written notes. That way physicians can continue to act as liability meat shields while hospitals/systems save a crap ton of money by only having to hire 2-3 physicians to sign off on this stuff instead of having entire teams.
Who knows... maybe AI becomes the ARNP/PA replacement.
Few docs around to supervise the AI delivery.
They'll sell the angle of cheap... and access... and rural... just like they did with midlevels.
These things are extremely sensitive to initial prompts, so I'd be curious as to the specifics of what you are asking and what context you are giving it. I am not having luck replicating this in any version of Gemini I have access to. But yes, clearly this is not technology that is ready for the clinic in a patient-facing way.
I think we're a ways off from this actually becoming commonplace, but I can see things heading this way in a few decades.
Hmmm it's interesting though, because you have to wonder how remembering EVERYTHING a patient told you about themselves and putting it into an algorithm... would be a good thing.
I think the prompt was just "write a letter requesting 504 accommodations for a 9yo with F90.2 and F41.1"
FTFY: Skynet → Stargate incoming
Yea, I have significant doubts that this can actually be implemented in a manner that isn't a total cluster****, especially if politicians and PE firms are the ones driving this. I see companies like Hims/Hers and LemonAid being where this is implemented first. At those companies the patient basically checks a bunch of boxes and a doc writes a prescription. Why bother having a doc that doesn't even talk to the patient and barely reads, when an AI program can do it?
Either it is able to do what we do, and that's a disaster for us, or it can't, and we learn about that after it's already implemented. There's a lot of bad outcomes, and maybe 3 good outcomes, most of them depending on wise use of AGI. Who here likes to take bets?
Already messaged my rep to not support.
This doesn't matter. Eventually Musk and Altman will just make Trump do it through EO.
Glad I see mostly geriatrics, they ain't gonna go for it.
They won't have a choice. It will be like being forced to see a PA/NP.
Idk maybe I'm just getting different outputs or something, because I just put that in and now it's calling F90.2 ODD.
I can replicate that in 1.5. And again, showing the need for prompt engineering: if I zero-shot the same model with "Write a letter requesting 504 accommodations for a 9 year old boy with diagnosis codes F90.2 and F41.1" it gets it right.
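The zero-shot prompt itself is doing a lot of the work here; something as small as "9yo" versus "9 year old boy with diagnosis codes..." can flip the output. A tiny helper that spells everything out (illustrative only; it just builds the prompt string from the exchange above):

```python
def build_504_prompt(age: int, codes: list[str]) -> str:
    # Zero-shot prompting: one fully specified request, no examples.
    # Writing "diagnosis codes" explicitly, rather than bare codes,
    # is the kind of wording change that anchored the model correctly
    # in the exchange above.
    code_list = " and ".join(codes)
    return (
        f"Write a letter requesting 504 accommodations for a {age} "
        f"year old boy with diagnosis codes {code_list}."
    )
```

Templating the prompt like this at least makes the runs comparable, which is half the battle when two people swear they're getting different outputs.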
If they're cognitively intact then they'll never be "forced" to see anyone. This isn't really an issue unless they didn't have access in the first place...
I mean at least at the hospital I'm at, patients can pretty much either see the NP/PA or leave AMA. The doctors just review the notes. Most patients don't even know the difference anyways.
Yes, but they can leave and seek care from a doctor. That's the point J Rod was making.
Actually, are you accessing it via Google AI Studio or the Gemini app? Plausibly relevant difference.
First one was through AI Studio and then the second one was on my phone, but through a browser; I don't have the app.
100%, I bet it would get it eventually, and I'm pretty sure I've run a similar thing through ChatGPT and it got it fine. I was basically just showing that these things still throw random stuff out there that doesn't make sense, but if you don't know enough about the prompt or subject, you wouldn't realize it didn't make sense.
My prediction is that psychiatrists, unlike internists, will still do OK against AI, as I think patients will prefer the human on a level that will be actionable, i.e. many patients will be in a position to act on the preference. Patients admitted to a hospital IM or psych ward have less choice. Perhaps you'll get pushed out of med management, but there is still a niche for psychiatrists who also offer psychotherapy, even though psychologists offer it as well.
Bloody hell lol, guess my specialty decision thread was even more dire than I thought. Stressful time to be in medical school!
Why don't they propose AI to run the government?
Probably do a better job than these clowns in power.
Will never happen, because you can't bribe an AI.
AI/ML algorithms may be the only hope we have to rein in the epidemic of over-/mis-diagnoses of things like PTSD, ADHD, etc. for secondary gain. If they are ever utilized to appropriately (un)diagnose/dismantle faulty PTSD diagnoses (either based on no data/formulation or based on a flimsy Criterion A event (my drill sgt yelled at me in boot camp 60 years ago; I didn't get the promotion I deserved)), veterans will go full 'John Connor' on the machines overnight.
Recently we tried using AI to record one of our peer review meetings, and it came up with a few gems like "Dr. X has entered the chat, and nothing of consequence was said." Locally there have also been a few reports of made up symptoms coming through with AI powered dictation software.
Seriously? Because I would think the algorithm would be easier to game than the human.