H.R.238 - To amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence can qualify as a practitioner


PsyDr

Psychologist
Absolutely ridiculous. I don't think these people have really thought this through. From a more practical standpoint, if AI becomes an independent practitioner, who is getting sued if something goes wrong? I have a feeling AI companies are either going to have to lobby hard to address this or eat a lot of lawsuits, potentially major class action suits, at some point.
 
This....is....REAL???
Good Lord.

AI/machine learning algorithms aren't even considered sophisticated enough at present to independently operate a motor vehicle safely.

How can an entity that isn't qualified for a driver's license be considered qualified to be licensed to prescribe medications safely and operate as a responsible 'practitioner?'
 

Current LLMs are far too easy to jailbreak and generally too vulnerable to adversarial interactions for this to be plausible. But this is not the last attempt we are going to see to push for this.

Of everyone in mental health, I would predict that these models are going to eat BetterHelp et al's lunch first. The current context windows on something like GeminiPro really do offer the possibility of the model remembering literally every word you have exchanged with it in "therapy" and everything you ever told it about yourself. They wouldn't even have to be text, these models are perfectly capable of having an audio-only conversation. The visual processing is still a work in progress but this is moving far more quickly than anyone would have imagined three years ago.
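For a rough sense of scale on that context-window claim, here's a back-of-the-envelope sketch. All the constants are assumptions, not measurements: ~4 characters per token is a common rule of thumb (actual tokenizers vary), ~150 spoken words per minute, and ~6 characters per word including spaces.

```python
# Rough estimate: could a long-term therapy transcript history fit
# in a modern large context window? All constants are assumptions.

CHARS_PER_TOKEN = 4       # common rule-of-thumb approximation
WORDS_PER_MINUTE = 150    # typical conversational speaking rate
CHARS_PER_WORD = 6        # average word length including trailing space

def session_tokens(minutes: int) -> int:
    """Approximate token count for one session's transcript."""
    chars = minutes * WORDS_PER_MINUTE * CHARS_PER_WORD
    return chars // CHARS_PER_TOKEN

# A year of weekly 50-minute sessions:
total = 52 * session_tokens(50)
print(f"~{total:,} tokens for a year of weekly sessions")  # ~585,000
```

Under these assumptions a full year of weekly sessions lands around 585k tokens, comfortably inside a million-token context window, which is the gist of the claim above.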
 
This....is....REAL???
Good Lord.

AI/machine learning algorithms aren't even considered sophisticated enough at present to independently operate a motor vehicle safely.

How can an entity that isn't qualified for a driver's license be considered qualified to be licensed to prescribe medications safely and operate as a responsible 'practitioner?'
Legally speaking in many states, perhaps, but they are vastly better at driving a car than human drivers. The most recent empirical data for Waymo's fleet showed an 88% reduction in property damage claims, 92% reduction in bodily injury claims, and 90% fewer collision-related claims than human drivers making similar trips. If you were comparing the safety of human and AI drivers de novo in a world where cars had just been invented, you would be seen as grossly irresponsible for opting for the human. This is simply inertia, quite honestly.

EDIT: Also the driving models and LLMs are not really the same technology, any more than your mobile phone is the same as your microwave although both of them emit electromagnetic radiation.
 
Absolutely ridiculous. I don't think these people have really thought this through. From a more practical standpoint, if AI becomes an independent practitioner, who is getting sued if something goes wrong? I have a feeling AI companies are either going to have to lobby hard to address this or eat a lot of lawsuits, potentially major class action suits, at some point.

Also, this is getting the cart quite a bit ahead of the horse... I would think a major question would be how you are going to give an AI program the ability to do something that currently requires a license to do independently (ex. practice medicine) without first settling who or what can be licensed to practice medicine.

Current LLMs are far too easy to jailbreak and generally too vulnerable to adversarial interactions for this to be plausible. But this is not the last attempt we are going to see to push for this.

Of everyone in mental health, I would predict that these models are going to eat BetterHelp et al's lunch first. The current context windows on something like GeminiPro really do offer the possibility of the model remembering literally every word you have exchanged with it in "therapy" and everything you ever told it about yourself. They wouldn't even have to be text, these models are perfectly capable of having an audio-only conversation. The visual processing is still a work in progress but this is moving far more quickly than anyone would have imagined three years ago.

Idk man, Gemini 1.5 Pro still calls F90.2 "hyperkinetic disorder." I'm never terribly impressed when I play with these things.
 
Also, this is getting the cart quite a bit ahead of the horse... I would think a major question would be how you are going to give an AI program the ability to do something that currently requires a license to do independently (ex. practice medicine) without first settling who or what can be licensed to practice medicine.



Idk man, Gemini 1.5 Pro still calls F90.2 "hyperkinetic disorder." I'm never terribly impressed when I play with these things.

These things are extremely sensitive to initial prompts; I'd be curious as to the specifics of what you are asking and what context you are giving it. I am not having luck replicating this in any version of Gemini I have access to. But yes, clearly this is not technology that is ready for the clinic in a patient-facing way.
 
Is there an analogous one to allow AI to act as your lawyer (where it's honestly much better suited...)?
 
Who knows... maybe AI becomes the ARNP, PA replacement.
Few docs around to supervise the AI delivery.
They'll sell the angle of cheap... and access... and rural... just like they did with midlevels.
This is the likely workaround I see happening: AI programs are bought by hospitals or clinics where physicians are required to supervise and sign off on plans made by the AI programs, possibly including AI-written notes. That way physicians can continue to act as liability meat shields while hospitals/systems save a crap ton of money by only having to hire 2-3 physicians to sign off on this stuff instead of hiring entire teams.

I think we're a ways off from this actually becoming commonplace, but I can see things heading this way in a few decades.
 
These things are extremely sensitive to initial prompts; I'd be curious as to the specifics of what you are asking and what context you are giving it. I am not having luck replicating this in any version of Gemini I have access to. But yes, clearly this is not technology that is ready for the clinic in a patient-facing way.

I think the prompt was just "write a letter requesting 504 accommodations for a 9yo with F90.2 and F41.1"
 
This is the likely workaround I see happening: AI programs are bought by hospitals or clinics where physicians are required to supervise and sign off on plans made by the AI programs, possibly including AI-written notes. That way physicians can continue to act as liability meat shields while hospitals/systems save a crap ton of money by only having to hire 2-3 physicians to sign off on this stuff instead of hiring entire teams.

I think we're a ways off from this actually becoming commonplace, but I can see things heading this way in a few decades.

I agree this is what people hope will happen, but you can see the issue with this in the radiology forums. Basically they have to double-check whatever the image-processing software flags anyway... so they end up spending the same amount of time on the task. It seems to be somewhat helpful in bringing attention to certain areas of the image they may not have picked up on initially, but they're still liable for the whole image. So it doesn't actually end up being more efficient.

I do expect this will change, but it's always extremely hard to predict how technology or market shifts will ultimately play out.
 
Current LLMs are far too easy to jailbreak and generally too vulnerable to adversarial interactions for this to be plausible. But this is not the last attempt we are going to see to push for this.

Of everyone in mental health, I would predict that these models are going to eat BetterHelp et al's lunch first. The current context windows on something like GeminiPro really do offer the possibility of the model remembering literally every word you have exchanged with it in "therapy" and everything you ever told it about yourself. They wouldn't even have to be text, these models are perfectly capable of having an audio-only conversation. The visual processing is still a work in progress but this is moving far more quickly than anyone would have imagined three years ago.
Hmmm, it's interesting though, because you have to wonder whether remembering EVERYTHING a patient told you about themselves and putting it into an algorithm would be a good thing.

There is a social process whereby we actually judge what patients say, along with various other social cues. There is an aspect where we consciously think about what's important to focus on and what isn't, but there is also a subconscious process going on for what to keep, what to chuck, and how that is assessed.

What's more, what the patient tells us also happens as a result of themselves doing the same thing.

In both cases, you have effectively the preexisting "algorithm" of the human, interfacing with that of another in a dynamic process. What I say affects what you say, and vice versa. And rapport, and even the way we feel about looking at one another's faces and sharing a laugh.

How is the AI going to know what to ignore? Where to probe? And how? How will it know how to gently and subtly manipulate people with its responses and questions? How will it know someone isn't ready to consider an idea or hear something, and when that changes?

When will the AI have a bad day and finally blurt something out to a patient more harshly than it otherwise would, where that human reaction kinda jostles the patient, and maybe it isn't a bad thing?

Ugh. I mean I guess this is all therapy stuff, but it's not like it isn't pertinent even to diagnosis and med management in my opinion.

MUCH of this psych stuff is how it relates to someone's most important functioning besides being able to get food in their mouth (self care) which is social functioning. How is AI going to assess social functioning? The very relationship with the provider is a gauge and a window into it, because it is itself a social interaction. And seeing how what someone does affects another human (the therapist/prescriber).

The AI is going to read deadpan sarcasm?

It's not even spitting out sense in response to well phrased questions about points of fact.

I see what you are saying about AI developing some skills faster than we thought. But...
 
Ugh and someone is going to say something about heuristics. This is all heuristics and if we can do it, eventually we will have an AI that can do it.

Heck, eventually I can just have an AI husband.
 
Thank the gods I've been saving and working like the end is coming. Looks like it's closer than ever, at least with pay cuts and whatnot.
 
Either it is able to do what we do, and that's a disaster for us, or it can't, and we learn that after it's already implemented. There are a lot of bad outcomes and maybe three good ones, most of them depending on wise use of AGI. Who here likes to take bets?
 
I think the prompt was just "write a letter requesting 504 accommodations for a 9yo with F90.2 and F41.1"

I can replicate that in 1.5. And again, showing the need for prompt engineering: if I zero-shot the same model with "Write a letter requesting 504 accommodations for a 9 year old boy with diagnosis codes F90.2 and F41.1" it gets it right.
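The prompt-sensitivity point above can be sketched as a tiny harness that runs both phrasings and checks whether the model expanded F90.2 correctly. This is a hypothetical sketch: `complete` is a stand-in for whichever LLM client you use (not a real API call), and the correctness check is a deliberately crude keyword match.

```python
# Hypothetical prompt-sensitivity test: the same request phrased two ways,
# each checked for whether the model named ADHD for code F90.2.
# `complete` is a placeholder for a real LLM client, not an actual API.

def complete(prompt: str) -> str:
    # Stand-in: in practice this would call your model of choice.
    raise NotImplementedError("plug in a real LLM client here")

PROMPTS = [
    "write a letter requesting 504 accommodations for a 9yo "
    "with F90.2 and F41.1",
    "Write a letter requesting 504 accommodations for a 9 year old boy "
    "with diagnosis codes F90.2 and F41.1",
]

def expands_f902_correctly(text: str) -> bool:
    """Crude check: did the output name ADHD rather than some other label?"""
    lowered = text.lower()
    return "attention-deficit" in lowered or "adhd" in lowered

for p in PROMPTS:
    try:
        print(p[:40], "->", expands_f902_correctly(complete(p)))
    except NotImplementedError:
        pass  # no client wired up in this sketch
```

The interesting part is that the two prompts differ only in explicitness ("F90.2" vs. "diagnosis codes F90.2"), yet posters in this thread saw different outputs, which is exactly why a check like this has to run per phrasing rather than once per task.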
 
Either it is able to do what we do, and that's a disaster for us, or it can't, and we learn that after it's already implemented. There are a lot of bad outcomes and maybe three good ones, most of them depending on wise use of AGI. Who here likes to take bets?
Yea, I have significant doubts that this can actually be implemented in a manner that isn't a total cluster****, especially if politicians and PE firms are the ones driving this. I see companies like Hims/Hers and LemonAid being where this is implemented first. At those companies the patient basically checks a bunch of boxes and a doc writes a prescription. Why bother having a doc who doesn't even talk to the patient and barely reads them when an AI program can do it?

I don't see this completely taking over actual clinical settings though. Maybe I'm wrong, but I'm not really worried for my career with this news at this point.
 
Of everyone in mental health, I would predict that these models are going to eat BetterHelp et al's lunch first.

If it's anything like the chatbots that some of the CBT apps are using right now, then I don't think that we have much to worry about. You can get it to say some really wonky responses if you try to feign a personality disorder, type your responses in pig latin, answer its questions with questions, use school yard taunts, etc. I think they might just be too easy to break.
 
I can replicate that in 1.5. And again, showing the need for prompt engineering: if I zero-shot the same model with "Write a letter requesting 504 accommodations for a 9 year old boy with diagnosis codes F90.2 and F41.1" it gets it right.
Idk, maybe I'm just getting different outputs or something, because I just put that in and now it's calling F90.2 ODD.
 
Idk, maybe I'm just getting different outputs or something, because I just put that in and now it's calling F90.2 ODD.

Actually, are you accessing it via Google AI Studio or the Gemini app? Plausibly relevant difference.

These tools definitely still require finesse to use and are fragile in strange ways in some circumstances. Worth remembering that these are alien intelligences in a box that are importantly inhuman. They each have very different strengths and nuances.

Gemini right now is the best in terms of context window but it is not state of the art in terms of actually being good at writing non-technical stuff.

DeepSeek R1 nails it with both prompts. And of course, my boy Claude's zero-shot looks like this:

[Date]

[School Principal's Name]
[School Name]
[School Address]
[City, State ZIP]

Dear [Principal's Name],

I am writing to formally request a Section 504 evaluation and accommodations for my son, [Student's Name], who is currently in the fourth grade at [School Name]. [Student's Name] has been diagnosed with Combined Type Attention-Deficit/Hyperactivity Disorder (ICD-10 code F90.2) and Generalized Anxiety Disorder (ICD-10 code F41.1) by [Doctor's Name] at [Medical Practice].

These conditions substantially limit several major life activities, particularly learning and concentrating. Some specific challenges my son experiences include:

- Difficulty maintaining focus during lengthy assignments and lectures
- Struggles with organizing materials and managing multi-step tasks
- Increased anxiety during test-taking and time-pressured situations
- Challenges with transitions between activities and subjects
- Problems with task initiation and completion, especially for writing assignments

Based on discussions with [Student's Name]'s healthcare providers and teachers, I am requesting consideration of the following accommodations:

For ADHD-related challenges:
- Breaking longer assignments into smaller, manageable chunks
- Providing a quiet workspace for independent work when needed
- Offering frequent movement breaks
- Using visual schedules and checklists
- Implementing a homework communication system between teachers and home
- Allowing extra time for assignments when needed

For anxiety-related challenges:
- Providing advance notice of schedule changes
- Allowing for breaks when feeling overwhelmed
- Offering extended time on tests in a low-distraction environment
- Providing written instructions for assignments
- Using positive reinforcement strategies
- Allowing the use of stress-reduction tools (e.g., stress ball, fidget tool)

I have attached copies of the relevant medical documentation supporting these diagnoses. I understand that the 504 team will review this request and may suggest additional or alternative accommodations based on their evaluation.

Please let me know what additional information you need to proceed with this request. I look forward to working collaboratively with the school to ensure [Student's Name]'s academic success.

Thank you for your attention to this matter.

Sincerely,

[Parent's Name]
[Contact Information]

Enclosures:
- Medical documentation from [Doctor's Name]
- Previous academic records
- [Any other relevant documentation]

CC: [School Counselor's Name]
[Teacher's Name]
 
If they're cognitively intact then they'll never be "forced" to see anyone. This isn't really an issue unless they didn't have access in the first place...
I mean, at least at the hospital I'm at, patients can pretty much either see the NP/PA or leave AMA. The doctors just review the notes. Most patients don't even know the difference anyway.
 
Actually, are you accessing it via Google AI Studio or the Gemini app? Plausibly relevant difference.

The first one was through AI Studio and the second was on my phone, but through a browser; I don't have the app.

100%, I bet it would get it eventually, and I'm pretty sure I've run a similar thing through ChatGPT and it got it fine. I was basically just showing that these things still throw out random stuff that doesn't make sense, but if you don't know enough about the prompt or subject, you wouldn't realize it didn't make sense.
 
The first one was through AI Studio and the second was on my phone, but through a browser; I don't have the app.

100%, I bet it would get it eventually, and I'm pretty sure I've run a similar thing through ChatGPT and it got it fine. I was basically just showing that these things still throw out random stuff that doesn't make sense, but if you don't know enough about the prompt or subject, you wouldn't realize it didn't make sense.

This is what I have said before about it being a tool that is most useful for people who are already knowledgeable about the field they're using it in: you need to be able to spot when it's off base.
 
Bloody hell lol, guess my specialty decision thread was even more dire than I thought. Stressful time to be in medical school!
My prediction is that psychiatrists, unlike internists, will still do OK against AI, as I think patients will prefer a human on a level that will be actionable, i.e., many patients will be in a position to act on that preference. Patients admitted to a hospital IM or psych ward have less choice. Perhaps you'll get pushed out of med management, but there is still a niche for psychiatrists who also offer psychotherapy, even though psychologists offer it as well.

You could probably make a better case for AI managing fluid overload in CHF, than you can for it to replace psychiatrists en masse.
 
Why don't they propose AI to run the government?

Probably do a better job than these clowns in power.

Will never happen because you can't bribe an AI.

Recently we tried using AI to record one of our peer review meetings, and it came up with a few gems like "Dr. X has entered the chat, and nothing of consequence was said." Locally there have also been a few reports of made up symptoms coming through with AI powered dictation software.
 
Will never happen because you can't bribe an AI.

Recently we tried using AI to record one of our peer review meetings, and it came up with a few gems like "Dr. X has entered the chat, and nothing of consequence was said." Locally there have also been a few reports of made up symptoms coming through with AI powered dictation software.
AI/ML algorithms may be the only hope we have to rein in the epidemic of over/mis-diagnoses of things like PTSD, ADHD, etc. for secondary gain. If they are ever utilized to appropriately (un)diagnose/dismantle faulty PTSD diagnoses (either based on no data/formulation, or based on a flimsy Criterion A event: "my drill sgt yelled at me in boot camp 60 years ago," "I didn't get the promotion I deserved"), veterans will go full 'John Connor' on the machines overnight.
 
AI/ML algorithms may be the only hope we have to rein in the epidemic of over/mis-diagnoses of things like PTSD, ADHD, etc. for secondary gain. If they are ever utilized to appropriately (un)diagnose/dismantle faulty PTSD diagnoses (either based on no data/formulation, or based on a flimsy Criterion A event: "my drill sgt yelled at me in boot camp 60 years ago," "I didn't get the promotion I deserved"), veterans will go full 'John Connor' on the machines overnight.
Seriously? Because I would think the algorithm would be easier to game than the human.

I've seen people plan and game for the checklists. And don't we all know the patient who comes in saying all the right things, but you still smell a rat and push back? I don't see AI doing that. Even if AI did that, when its judgment is challenged, how is the word of an AI going to be taken more seriously than the person's?
 