AI workflow design thread


Nasrudin · Apropos of Nothing · 15+ Year Member
Joined: Jun 8, 2006 · Messages: 3,511 · Reaction score: 2,603
What's up muppets. The desire to collaboratively jam on AI-powered workflow automations and design has awoken me from an SDN slumber. I've read through the seminal private practice threads, which are quite inspiring and useful. I've also scanned recent AI thread topics and found some lively, interesting commentary. But besides @SmallBird's idea about creating a behavioral activation handout for depression, and maybe a few other ideas about clinical uses, I didn't see too many specific use cases, unless I missed them in a forum/title keyword search. So what about these ideas...

  • Combining a customized AI bot to scan transcripts of conversations for semantic diagnostic patterns and then automating entry of that data into a database for global review.
  • Using an AI transcription service to safely and securely store transcripts for search queries, i.e., "...what did Ms. X say about the side effects of Remeron again?" AI search... badaboom.
  • Using it with Zapier or some other supportive automation tool to populate forms and PA applications.
  • How about using it with a CRM to send sx questionnaires that, once returned by email, are automatically stored in a database for trending sx.
  • Maybe automating clinical research questions people have by using Research Rabbit and Zapier to build a reference library specific to that clinical question, making a new specific collection, and then automating generation of a report from a specific lit-review template in your favorite note-taking app for review and improvement.
  • Or something simpler, like a personalized holiday or birthday well-wishing card.
I mean idk. There's gotta be hundreds of things we could share ideas about and maybe collaborate on designing workflows for the better ideas. That's why I started this thread.
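Taking the first bullet as an example, here's a minimal stdlib-only sketch of the scan-and-store loop. The keyword regexes and the patient ID are invented stand-ins; a real version would use an NLP model and a HIPAA-compliant datastore, not this:

```python
import re
import sqlite3

# Toy keyword patterns standing in for real semantic analysis.
PATTERNS = {
    "depression": re.compile(r"\b(hopeless|anhedonia|worthless)\b", re.I),
    "insomnia": re.compile(r"(can't sleep|insomnia|waking at night)", re.I),
}

def scan_transcript(conn, patient_id, text):
    """Record which symptom domains appear in a session transcript."""
    conn.execute("CREATE TABLE IF NOT EXISTS findings "
                 "(patient_id TEXT, domain TEXT, excerpt TEXT)")
    for domain, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            conn.execute("INSERT INTO findings VALUES (?, ?, ?)",
                         (patient_id, domain, match.group(0)))
    conn.commit()

conn = sqlite3.connect(":memory:")
scan_transcript(conn, "pt-001", "I feel worthless lately and I can't sleep.")
rows = conn.execute("SELECT domain FROM findings ORDER BY domain").fetchall()
```

From there, the "global review" part is just SQL queries over the findings table.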

Maybe it will be useful.

I'm at an early stage in learning the skills to automate things and also early in experimenting with AI tools. So far I've been playing around with the following: AI features in Notion; AI-integrated workflows with Zapier; Research Rabbit and Zotero for improving literature searches and capturing them against pertinent clinical questions by integrating them with Notion; and DALL-E, Midjourney, and Firefly on Adobe Creative Cloud for fun little graphics projects, e.g., making a muppet shrink avatar or designing my dream office with prompt engineering (haha, perhaps more amusing than useful). I use Perplexity to prompt specific types of internet search processing, mixing search with prompt engineering; I haven't used a general browser search in months. Of course, I'm using prompt engineering with ChatGPT-4 to synthesize groups of medical experts to debate complex medical-psychiatric questions, or even simpler things. I'm also using a medical ChatGPT derivative called DrGupta that takes lab values as input, likely uses multi-agent medical prompting, and cites references. I'm using more traditional automation tools, TextExpander and Keyboard Maestro, to make note and email writing faster, but I'm sure these processes could be made more slick and robust to errors when AI browser integrations begin to develop. Induced AI seems like a promising AI browser integration tool, but I'm on a waitlist for a consultation as a small-fry solo clinician. This week I'm going to try Fireflies AI's note-scribing software to see if I can make it work in normal clinical meetings; we'll see.

What are your ideas? If you want let's jam together here to see if we can collaboratively problem solve good AI automation ideas.

Thanks muppets.

Your enthusiasm for exploring AI-powered workflow automations and design in the context of clinical practice is indeed infectious! It's great to see how you're thinking about leveraging various AI tools to enhance efficiency and effectiveness in your work. Let's delve into some of the ideas you've mentioned and explore how they could be beneficial:

1. **AI-Powered Semantic Analysis of Transcripts:** This is a fascinating idea. Using AI to scan conversation transcripts for diagnostic patterns could not only streamline the diagnostic process but also potentially uncover subtle patterns or indicators that might be missed in manual analysis. Integrating this with a database for global review would allow for a broader understanding of patient trends and potentially inform treatment approaches.

2. **AI Transcription for Secure Storage and Search:** This could be a game-changer in terms of time efficiency. Being able to quickly recall specific patient conversations about medication side effects or other details with a simple search would greatly reduce the time spent sifting through notes.

3. **Integration with Automation Tools for Forms and PA Applications:** The use of tools like Zapier to automate the population of forms and prior authorization (PA) applications could significantly reduce administrative burdens, allowing more time for patient care.

4. **CRM Integration for Symptom Tracking:** Sending out symptom questionnaires and automating their storage and analysis could provide valuable data on symptom trends and treatment effectiveness. This could greatly enhance personalized care plans.

5. **Automated Clinical Research Queries and Library Building:** Using tools like Research Rabbit and Zotero integrated with Notion for this purpose could streamline the process of staying up-to-date with the latest research, which is crucial in the medical field.

6. **Personalized Cards and Messages:** While simpler, this use of AI adds a personal touch to patient care, which can be vital in building and maintaining patient relationships.

Your approach to integrating various AI tools like Dall-e, Midjourney, Perplexity, and medical-specific AI tools indicates a forward-thinking attitude towards technology in clinical practice. Your experiments with graphic design for fun projects like creating a muppet shrink avatar or dream office designs show a creative use of AI tools beyond their typical applications.

Regarding your question about ideas for collaborative problem-solving in AI automation, one area to explore could be the development of an AI-assisted predictive model for patient outcomes based on various inputs like symptoms, treatment responses, and lifestyle factors. This could help in tailoring more effective treatment plans for individual patients.

Another idea could be to use AI for real-time language translation in therapy sessions with non-English speaking patients, enhancing communication and understanding.

Overall, your pursuit of integrating AI in clinical practice not only showcases an innovative mindset but also reflects a commitment to improving patient care. I'm excited to hear more about your progress and any new ideas you come up with!
 
I just put your original thread into ChatGPT-4 and that's what it came out with.

Instead of thinking about what AI can do, perhaps it would be worthwhile to start with the common problems in psychiatry that take up our time. Things that are annoying and time-consuming for us are probably the best use cases for AI.

This can be clinical judgment, administrative tasks with practice management such as billing or even new patient requests, responding to messages and emails, improving marketing, improving website design, automating prior authorizations, communicating with collateral such as therapists or primary care docs, automating scheduling (if a pt doesn't schedule an appt within 3 months, then automatically sending out a reminder), or even real time analysis of your notes and giving recommendations about things that you're missing that would optimize pt outcomes. I like the idea of scoring and creating a documentation note from an objective measurement rating scale.
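That scheduling example is probably the easiest to prototype. A toy sketch, where the roster and patient IDs are made up and a real version would pull visit dates from the EMR:

```python
from datetime import date, timedelta

# Hypothetical roster of last-visit dates.
last_seen = {
    "pt-001": date(2024, 1, 5),
    "pt-002": date(2024, 3, 20),
}

def patients_needing_reminder(last_seen, today, window_days=90):
    """Return patient IDs whose last visit is older than the follow-up window."""
    cutoff = today - timedelta(days=window_days)
    return sorted(pid for pid, seen in last_seen.items() if seen < cutoff)

# On April 15 a 90-day window reaches back to mid-January, so only
# pt-001 (last seen Jan 5) is due for a reminder.
due = patients_needing_reminder(last_seen, today=date(2024, 4, 15))
```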
 
I assume the original post was also generated by an algorithm, as is traditional for internet forum posts about AI.
 
I'm very interested in using AI to write my notes in my outpatient practice. I hate writing notes. :bang:

I've seen quite a few companies with URLs ending in ".ai" offering completely AI-based note writing software for around $100 per month. It records your sessions and spits out a progress/intake note. They claim they destroy the audio files in 1 month. I'd have to add a blurb in my practice policies about "your session will be recorded for documentation, etc." Probably need to sign a BAA with the company as well. I know personally this would save me a bit of time each day, but I'm concerned it won't be reliable since it's pretty early on.

Anyone use something like this and like it?
 
I assume the original post was also generated by an algorithm, as is traditional for internet forum posts about AI.
Oh no. No chatbot would ever start a post like this with 'What's up muppets.' At least not the versions we have now.
Also, welcome back @Nasrudin.

I'm very interested in using AI to write my notes in my outpatient practice. I hate writing notes. :bang:

I've seen quite a few companies with URLs ending in ".ai" offering completely AI-based note writing software for around $100 per month. It records your sessions and spits out a progress/intake note. They claim they destroy the audio files in 1 month. I'd have to add a blurb in my practice policies about "your session will be recorded for documentation, etc." Probably need to sign a BAA with the company as well. I know personally this would save me a bit of time each day, but I'm concerned it won't be reliable since it's pretty early on.

Anyone use something like this and like it?

Sounds like a totally unnecessary potential opening for a really nasty HIPAA violation scenario.
It takes like 3 sec to modify a boilerplate progress note for the current visit. Why would you want to farm this out to AI?

  • Combining a customized AI bot to scan transcripts of conversations for semantic diagnostic patterns and then automating the data entry of that into a database for global review.
I think language mining for clinical and therapeutic insight is a great potential application. Of course lots of people are working on this already.

But for example, we know that therapist quality/efficacy varies a ton between individuals, in ways that are not currently explicable. It would be fantastic to feed transcripts of effective and ineffective therapy sessions (with patient-rated outcomes) into a machine learning algorithm to help us figure out how to isolate and replicate the elements of efficacy.
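As a toy illustration of the shape of that comparison (the four one-line "transcripts" and outcome scores below are made up; a serious attempt would need thousands of de-identified sessions and a real ML pipeline):

```python
from collections import Counter

# (transcript, patient-rated outcome 0-10) — invented examples.
sessions = [
    ("what do you make of that feeling", 9),
    ("you should just try harder", 2),
    ("tell me more about that feeling", 8),
    ("you should stop worrying", 3),
]

def word_outcome_gap(sessions, threshold=5):
    """How much more often each word appears in high- vs low-outcome sessions."""
    high, low = Counter(), Counter()
    for text, score in sessions:
        (high if score >= threshold else low).update(text.split())
    return {w: high[w] - low[w] for w in set(high) | set(low)}

gap = word_outcome_gap(sessions)
# In this toy corpus, "feeling" skews toward good outcomes and "should"
# toward bad ones.
```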
 
  • I'm not really seeing any low-hanging fruit for integrating AI into most practices.
  • The level of double-checking needed to "supervise" what's done negates its benefits.
  • Too much customization exists with patient responses, billing, scheduling, etc.; we just can't get away from human management.
  • Then there are the costs... let's say something does have a positive benefit; I suspect it will be marginal, and not worth the expense relative to that benefit. I stare at my overhead almost daily, costs in, costs out, and as an insurance-based practice I'm already dropping my memberships in medical societies to reduce costs...
  • Now, the cash-only people can simply use paper charts and a phone. Appointments in a paper binder. Or even scan the paper charts to a portable hard drive and shred the paper note. Basically, they can have almost no overhead and really no need for enhancing anything; they are already enhanced!
If someone really wants to utilize AI... how can you tool it to better supervise ARNPs within Big Box shops, or replace them... This is where the money is.
 
I just wrote a damn good textbook chapter in about 4.5 hours, relying heavily on ChatGPT. I still needed to find references for empirical claims, and actually did all the writing myself, but it gives me all the knowledge I need in the level of detail I want, in the format I want, and with the angle I want. But probably about 1/4 the time it would usually take. Waiting to see how my co-author responds and how many edits will be needed.
 
I just wrote a damn good textbook chapter in about 4.5 hours, relying heavily on ChatGPT. I still needed to find references for empirical claims, and actually did all the writing myself, but it gives me all the knowledge I need in the level of detail I want, in the format I want, and with the angle I want. But probably about 1/4 the time it would usually take. Waiting to see how my co-author responds and how many edits will be needed.

Did you fact check your chapter??

Anytime I have tried to use ChatGPT for medical/scientific/technical writing, it does a horrible job.
First it produces some generic boilerplate that isn't overtly incorrect but is too general to be informative or even interesting.
Then if you press it for details, it starts to hallucinate. It's like a robot with Wernicke's encephalopathy.

Don't rely on references it suggests without double-checking that they exist, because it hallucinates those too.
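One cheap guardrail is to check suggested citations against your own reference library export before trusting them. A sketch, where the CSV columns, title, and DOI are all invented placeholders for something like a Zotero export:

```python
import csv
import io

# Stand-in for a real library export file.
zotero_csv = io.StringIO(
    "Title,DOI\n"
    "Mirtazapine and weight gain,10.1000/example\n"
)

def flag_unverified(suggested_titles, library_file):
    """Return suggested titles that do not appear in the library export."""
    known = {row["Title"].strip().lower() for row in csv.DictReader(library_file)}
    return [t for t in suggested_titles if t.strip().lower() not in known]

# Anything flagged gets manually verified rather than cited on faith.
suspect = flag_unverified(
    ["Mirtazapine and weight gain", "A study that may not exist"], zotero_csv)
```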
 
The note is supposed to be the clinician's opinion of the exam, not a transcription of what was said. I think that is the problem with transcription AI for clinical work.

“Patient says his spouse is unwilling to have sex with him. I believe patient has limited insight into his role in the spouse’s desire, as he is wearing crocs and an anime t-shirt and is noticeably malodorous.”

Transcription won’t do that.
 
The note is supposed to be the clinician's opinion of the exam, not a transcription of what was said. I think that is the problem with transcription AI for clinical work.

“Patient says his spouse is unwilling to have sex with him. I believe patient has limited insight into his role in the spouse’s desire, as he is wearing crocs and an anime t-shirt and is noticeably malodorous.”

Transcription won’t do that.

I don't know where you work or how open your notes are but if I put something like that into one of my notes and the patient gets ahold of this later, I'm just asking for problems....

My note would say something like
"Continues to have marital difficulties, sexual in nature at times, causing distress. Continue to work with patient on insight, processing reactions, developing perspective on role in interactions"
 
I don't know where you work or how open your notes are but if I put something like that into one of my notes and the patient gets ahold of this later, I'm just asking for problems....

My note would say something like
"Continues to have marital difficulties, sexual in nature at times, causing distress. Continue to work with patient on insight, processing reactions, developing perspective on role in interactions"

Eh, I definitely agree that the note is the place for clinician opinion and I am pretty explicit about putting my thinking into my notes. I do make sure that in general I don't put anything down that I haven't brought up with the patient (or would be happy to discuss if they wanted to).
 
I don't know where you work or how open your notes are but if I put something like that into one of my notes and the patient gets ahold of this later, I'm just asking for problems....

My note would say something like
"Continues to have marital difficulties, sexual in nature at times, causing distress. Continue to work with patient on insight, processing reactions, developing perspective on role in interactions"

The professional standards of our professions expect us to say the psychotic individual's perceptions are wrong, regardless of how upset they are about this professional opinion. Therefore, any attempts to constrain our professional opinions, based upon how upset the patient is cannot be reconciled with professional standards. If professional standards say I can never upset a patient, then every single psych hospital is in trouble and every oncology clinic is screwed. If it is more nuanced, then the exact conditions in which I can and cannot say my opinions about reality testing need to be outlined, including exactly how upset a person is allowed to be.

My notes are open. I stand by them. However, my notes are written in the first person and mostly contain my opinion (e.g., "I believe" or "it is my opinion" or "it is unclear to me," etc.).

I rarely write in definitive terms. I do not write, "continues to have marital difficulties, sexual in nature....". I have zero idea if that is true. I'm not there. I would write, "patient reports that he continues to have marital difficulties...".

Because of my writing style, what are they going to do? The sentence is my opinion. It's not wrong. I did believe that at the time. If patients read the note, and think I misunderstood something, they are free to explain the misunderstanding the next time I see them. I'm not changing the note, because that is what I thought at the time. However, I'm happy to write that while I previously thought X, patient has provided me with new information that has led me to now believe Y instead of X. Since I rarely say something definitive, it's hard to say I'm wrong (e.g., "I believe that the patient is X" ).

It's not like anyone is struggling to fill their panel. I'm not going to reinforce people being bullies, just like most psychiatrists are not going to prescribe max doses of adderall and xanax because a patient gets upset if you don't.
 
Eh, I definitely agree that the note is the place for clinician opinion and I am pretty explicit about putting my thinking into my notes. I do make sure that in general I don't put anything down that I haven't brought up with the patient (or would be happy to discuss if they wanted to).

You'd be happy to discuss with the patient that your opinion is nobody wants to have sex with them because they're smelly, wear anime T shirts and crocs?

I mean, at some point, sure, but that likely requires quite a bit of therapeutic relationship before broaching (or if you literally already said it to them at that appointment). But we're in the era of open notes for a lot of clinics, where patients could theoretically access a note immediately after it's signed.

The professional standards of our professions expect us to say the psychotic individual's perceptions are wrong, regardless of how upset they are about this professional opinion. Therefore, any attempts to constrain our professional opinions, based upon how upset the patient is cannot be reconciled with professional standards. If professional standards say I can never upset a patient, then every single psych hospital is in trouble and every oncology clinic is screwed. If it is more nuanced, then the exact conditions in which I can and cannot say my opinions about reality testing need to be outlined, including exactly how upset a person is allowed to be.

My notes are open. I stand by them. However, my notes are written in the first person and mostly contain my opinion (e.g., "I believe" or "it is my opinion" or "it is unclear to me," etc.).

I rarely write in definitive terms. I do not write, "continues to have marital difficulties, sexual in nature....". I have zero idea if that is true. I'm not there. I would write, "patient reports that he continues to have marital difficulties...".

Because of my writing style, what are they going to do? The sentence is my opinion. It's not wrong. I did believe that at the time. If patients read the note, and think I misunderstood something, they are free to explain the misunderstanding the next time I see them. I'm not changing the note, because that is what I thought at the time. However, I'm happy to write that while I previously thought X, patient has provided me with new information that has led me to now believe Y instead of X. Since I rarely say something definitive, it's hard to say I'm wrong (e.g., "I believe that the patient is X" ).

This is going to be a matter of opinion and I should maybe not have worded it as being primarily concerned about complaints, but if you think the patient has the ego strength to see that in a note then they should have heard it and gotten the interpretation first from you. Because they're basically hearing it from you then.
 
This is going to be a matter of opinion and I should maybe not have worded it as being primarily concerned about complaints, but if you think the patient has the ego strength to see that in a note then they should have heard it and gotten the interpretation first from you. Because they're basically hearing it from you then.
Agree, and also it's not just the risk that the patient could be offended, but also that those notes may be used in the future in unanticipated ways.
E.g., they can be subpoenaed for use in divorce court, demanded by life insurance companies, or used in background checks by certain potential employers (police or armed forces).

I try to keep low-level detail out of my notes as much as I can. I definitely don't put anything that would imply an opinion on which individual is at fault in an interpersonal conflict. I just document that the conflict exists and that we discussed it, and I list the therapeutic techniques I used in that conversation.

If the patient is malodorous or inappropriately dressed, I put that under 'Appearance' in the mental status exam.
 
You'd be happy to discuss with the patient that your opinion is nobody wants to have sex with them because they're smelly, wear anime T shirts and crocs?

I mean, at some point, sure, but that likely requires quite a bit of therapeutic relationship before broaching (or if you literally already said it to them at that appointment). But we're in the era of open notes for a lot of clinics, where patients could theoretically access a note immediately after it's signed.



This is going to be a matter of opinion and I should maybe not have worded it as being primarily concerned about complaints, but if you think the patient has the ego strength to see that in a note then they should have heard it and gotten the interpretation first from you. Because they're basically hearing it from you then.


I would, and have, told patients those types of things. “If you’re interested in working on that, I can probably help to some degree if you want. If so, I think one of the first things we can do is try to match your appearance to what most women traditionally find attractive. I’m just a dude, so let’s do some reconnaissance. What’s a television show that is targeted towards women? What do the love interests wear? What do you think they smell like? If that is what women generally want, how can you get closer to that? I’m not saying you can’t wear whatever at home. But is what you are currently showing the world making it easier or harder? Are you okay with that?”

I have said nothing. Patient could have a million great reasons for things. They could be happy with the other side of the equation and just want to complain. Who knows?
 
I would, and have, told patients those types of things. “If you’re interested in working on that, I can probably help to some degree if you want. If so, I think one of the first things we can do is try to match your appearance to what most women traditionally find attractive. I’m just a dude, so let’s do some reconnaissance. What’s a television show that is targeted towards women? What do the love interests wear? What do you think they smell like? If that is what women generally want, how can you get closer to that? I’m not saying you can’t wear whatever at home. But is what you are currently showing the world making it easier or harder? Are you okay with that?”

I have said nothing. Patient could have a million great reasons for things. They could be happy with the other side of the equation and just want to complain. Who knows?

Yes but if you said that then the AI would pick that up in the transcription…your initial point was that it wouldn’t pick up your “opinion” of the exam or interaction. If you’ve voiced your opinion, it’ll catch it, especially if you’re making minor adjustments to your notes and refeeding it back in which many of these do.

If you’ve said nothing, then I still stand by the fact that reading your note through the patient portal is not the appropriate way for the patient to find out you think they aren’t attractive to their partner because they’re smelly, wear anime shirts and crocs.
 
Yes but if you said that then the AI would pick that up in the transcription…your initial point was that it wouldn’t pick up your “opinion” of the exam or interaction. If you’ve voiced your opinion, it’ll catch it, especially if you’re making minor adjustments to your notes and refeeding it back in which many of these do.

If you’ve said nothing, then I still stand by the fact that reading your note through the patient portal is not the appropriate way for the patient to find out you think they aren’t attractive to their partner because they’re smelly, wear anime shirts and crocs.

The transcription would not contain any opinions. What I write in my notes is my opinion. What I say in session is different.

If you just use transcription, you’re gonna end up getting complaints (i.e., “I never said that”), or ultimately losing your entire profession (e.g., “we have 100M sessions correlated with medication choice; we are proposing FDA approval for an AI-based medication dispenser”).
 
Maybe I’m in the minority, but I see AI as almost worthless for notes.

Maybe it would help some with the subjective portion of new evals. Value elsewhere is about nothing.

A typical 99214 EMR entry:

1 word chief complaint
Patient located at: brief answer
Denies depressive symptoms. Worsening insomnia.
Objective: carried forward from past note
Diagnoses: carried forward from past note
Plan: increase Trazodone to 100mg qHS, remainder carried forward from last note.

So I could hook up AI and proofread everything it could have messed up or type about 12 words during the follow-up. Pass on AI.

Even in the subjective portion of evals, my EMR is customizable. I can click-box patient symptoms as they describe them.
 
Seems AI would be very helpful to automate scheduling and send reminders/confirmations to reduce no-shows. It could also be trained to listen to VMs and transcribe prescription requests into an order for MD review in the EMR.

Could reduce admin burden for a solopreneur, especially if a bot doing the above were reasonably priced (e.g., $10 per month).
 
And that is the rub: $10/month likely isn't going to happen. So the marginal gains, if any, won't be worth the financial costs.

But if one exists that finds errors, omissions, discrepancies, etc. to be used on ARNP notes, that would be a Med Dir win.
Why does this note have both Bipolar and MDD as diagnosis?
Why does this note have both schizophrenia and psychosis unspecified?
Why is the first mention of zoloft (the start) for this patient at 100mg?

But then... the things spotted would actually take up more time for the Med Dir to educate, correct, and alert the ARNP to the issues. So not sure that would be a time win at all.
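Those three example checks are actually easy to express as plain rules, no AI needed. A toy sketch, where the diagnosis pairs and starting-dose limit are illustrative only, not clinical guidance:

```python
# Rule tables for the audit — invented for illustration.
CONFLICTING_DX = [
    frozenset({"bipolar disorder", "major depressive disorder"}),
    frozenset({"schizophrenia", "psychosis unspecified"}),
]
MAX_START_DOSE_MG = {"sertraline": 50}

def audit_note(diagnoses, new_starts):
    """Return human-readable flags for a supervising physician to review."""
    flags = []
    dx = {d.lower() for d in diagnoses}
    for pair in CONFLICTING_DX:
        if pair <= dx:
            flags.append("conflicting diagnoses: " + " + ".join(sorted(pair)))
    for drug, dose_mg in new_starts.items():
        limit = MAX_START_DOSE_MG.get(drug.lower())
        if limit is not None and dose_mg > limit:
            flags.append(f"{drug} started at {dose_mg} mg (> {limit} mg)")
    return flags

flags = audit_note(["Bipolar disorder", "Major depressive disorder"],
                   {"sertraline": 100})
```

Which illustrates the catch above: the cheap part is spotting the issue; the expensive part is still a human acting on the flags.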
 
Seems AI would be very helpful to automate scheduling and send reminders/confirmations to reduce no-shows. It could also be trained to listen to VMs and transcribe prescription requests into an order for MD review in the EMR.

Could reduce admin burden for a solopreneur, especially if a bot doing the above were reasonably priced (e.g., $10 per month).

My EMR has online scheduling, confirmation emails, and text/email reminders, and phone companies can already transcribe voicemails. None of this needs AI.
 
Maybe I’m in the minority, but I see AI as almost worthless for notes.

Maybe it would help some with the subjective portion of new evals. Value elsewhere is about nothing.

A typical 99214 EMR entry:

1 word chief complaint
Patient located at: brief answer
Denies depressive symptoms. Worsening insomnia.
Objective: carried forward from past note
Diagnoses: carried forward from past note
Plan: increase Trazodone to 100mg qHS, remainder carried forward from last note.

So I could hook up AI and proofread everything it could have messed up or type about 12 words during the follow-up. Pass on AI.

Even in the subjective portion of evals, my EMR is customizable. I can click-box patient symptoms as they describe them.

If your notes are like that, yeah, you're probably right that LLMs may not have a lot to offer you. I could not function with notes that sparse myself, so I end up typing waaaay more than 12 words during a follow-up; a very different trade-off.
 
And that is the rub: $10/month likely isn't going to happen. So the marginal gains, if any, won't be worth the financial costs.

But if one exists that finds errors, omissions, discrepancies, etc. to be used on ARNP notes, that would be a Med Dir win.
Why does this note have both Bipolar and MDD as diagnosis?
Why does this note have both schizophrenia and psychosis unspecified?
Why is the first mention of zoloft (the start) for this patient at 100mg?

But then... the things spotted would actually take up more time for the Med Dir to educate, correct, and alert the ARNP to the issues. So not sure that would be a time win at all.
I think along these lines, automating billing would be awesome. Using AI to see what elements are in your notes and then generating CPT codes that would maximize billing from insurance while adhering to documentation compliance would be great. Maybe it could even suggest how to increase billing if you are undercoding.
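Even a crude version of the undercoding check is just counting documented elements. A heavily simplified sketch — the point thresholds below are invented and are NOT the actual 2021 E/M guidelines, only the shape of the idea:

```python
def suggest_em_code(problems_addressed, data_reviewed, rx_managed):
    """Suggest an E/M level from crude, made-up 'complexity points'."""
    points = 2 if problems_addressed >= 2 else 1
    points += 1 if data_reviewed else 0
    points += 1 if rx_managed else 0
    if points >= 4:
        return "99214"
    return "99213" if points == 3 else "99212"

# Two problems addressed, records reviewed, and a med change would
# suggest a 99214 under these toy rules.
code = suggest_em_code(problems_addressed=2, data_reviewed=True, rx_managed=True)
```

A real tool would need the actual coding rules plus payer-specific documentation requirements, which is exactly where the compliance risk lives.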
 
I will say that I'm on the other side of the fence with AI and am generally not an advocate. Using our search bar takes me 10 seconds and the few times I've experimented with AI for note writing I wasn't a fan. Even after several runs the program didn't learn my style/what I wanted included/excluded to my liking and I found that I would have had to edit heavily. Maybe in 10 years I'll feel differently, but I still have significant doubts that I'd prefer it in a consult setting anyway.

There are some things mentioned I do think it could be great for below:
  • Using it with Zapier or some other supportive automation tool to populate forms and PA applications.
  • How about using with a CRM to send sx questionnaires that once returned by email get automated to store in a database for trending sx.
Another idea could be to use AI for real-time language translation in therapy sessions with non-English speaking patients, enhancing communication and understanding.

I think someone also mentioned transcribing thousands of therapy sessions and analyzing the language used by therapists with more successful outcomes vs. less successful ones to optimize therapeutic language. I think that could be cool, but I do feel like this misses much of what makes strong rapport for psychotherapy (body language, tone, environment, etc.).

Personally, I wouldn't want to see a doctor who regularly uses AI for clinical purposes other than maybe symptom tracking or research. I've also found the brochures and "educational" material produced to be disappointing. Basically just the same boilerplate info-spamming that gets auto-populated into d/c paperwork from ERs or admissions that is either too broad to be useful or too cumbersome for an average person to be able to read and understand. There is no balance or focus on what is pertinent for individual patients ime.

Outside of medicine, I've found that the AI and algorithms used by many of the larger companies (Google, Apple, Amazon, etc) to make suggestions for me are useless and I often find myself shaking my head at the things they come up with. Maybe I'm a little anomalous there as I know a lot of people who feel suggestions for searches/items/whatever are eerily intuitive to what they want, but I find SGE and similar applications/tools unhelpful more often than not.
 
Or something more simple like a personalized holiday or birthday well-wishing card.
6. **Personalized Cards and Messages:** While simpler, this use of AI adds a personal touch to patient care, which can be vital in building and maintaining patient relationships.

That would be nice, and I'm sure many people wouldn't be able to tell the difference. I just worry that it would include personal information that people wouldn't want in a card, or mention something they might find offensive, like anime t-shirts and deodorant...
 
Would be great if AI could do first level appeals and prior auths based off my notes. That’s a pretty low risk high yield way they could save lots of money and time for the MD.

Then a solopreneur can just follow up on the 2nd level denials manually. The vast majority of the first level stuff is just a random denial, and any appeal succeeds with minimal explanation.
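Since the claim above is that most first-level appeals succeed with minimal explanation, even plain templating gets most of the way there. The field names and letter wording below are placeholders; a real appeal would cite the payer's specific denial code and policy:

```python
from string import Template

# Placeholder letter skeleton; a real appeal should reference the payer's
# denial code and the relevant medical policy.
APPEAL = Template(
    "Re: claim $claim_id for $patient\n\n"
    "I am appealing the denial of $service rendered on $dos. The attached "
    "note documents medical necessity: $justification. "
    "Please reprocess this claim."
)

def draft_appeal(**fields: str) -> str:
    return APPEAL.substitute(fields)

letter = draft_appeal(
    claim_id="12345", patient="J. Doe", service="99214",
    dos="2024-01-05", justification="worsening insomnia requiring dose change",
)
```

An LLM step could then be layered on top to pull `$justification` out of the note, with the clinician reviewing before anything is sent.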
 
I think along these lines, automating billing would be awesome. Using AI to see what elements are in your notes and then generating CPT codes that would maximize billing from insurance while adhering to documentation compliance would be great. Maybe it could even suggest how to increase billing if you are undercoding.

Billing would be the biggest win
 
Maybe I’m in the minority, but I see AI as almost worthless for notes.

Maybe it would help some with the subjective portion of new evals. Value elsewhere is about nothing.

A typical 99214 EMR entry:

1 word chief complaint
Patient located at: brief answer
Denies depressive symptoms. Worsening insomnia.
Objective: carried forward from past note
Diagnoses: carried forward from past note
Plan: increase Trazodone to 100mg qHS, remainder carried forward from last note.

So I could either hook up AI and proofread everything it might have messed up, or just type about 12 words during the follow-up. Pass on AI.

Even in the subjective portion of evals, my EMR is customizable. I can click-box patient symptoms as they describe them.

“Anicteric”: the one word that gets you two body systems.
 
Would be great if AI could do first level appeals and prior auths based off my notes. That’s a pretty low risk high yield way they could save lots of money and time for the MD.

Then a solopreneur can just follow up on the 2nd level denials manually. The vast majority of the first level stuff is just a random denial, and any appeal succeeds with minimal explanation.
I have been using doximity's AI tool to draft letters for PAs and appeals. It does save time.
 
I'm very interested in using AI to write my notes in my outpatient practice. I hate writing notes. :bang:

I've seen quite a few companies with URLs ending in ".ai" offering completely AI-based note writing software for around $100 per month. It records your sessions and spits out a progress/intake note. They claim they destroy the audio files in 1 month. I'd have to add a blurb in my practice policies about "your session will be recorded for documentation, etc." Probably need to sign a BAA with the company as well. I know personally this would save me a bit of time each day, but I'm concerned it won't be reliable since it's pretty early on.

Anyone use something like this and like it?
Yes! These are exactly my concerns. I utilized FirefliesAI for 2 days. I saw the value and the power of applying AI semantic processing to transcribed Zoom patient conversations, and I got approval from each patient beforehand. Then I stopped after 2 days. The issues of who owns the data, how the data is processed, what information security concerns are involved, and how much I could really trust this or that startup's claims of HIPAA compliance or SOC 2 certification just didn't sit well, given that these things are too far beyond my ability to vet properly or even venture a risk opinion on.

So, I'm currently stalled out on what to do and how to use the technology. The ability to shape notes to your writing style and templates from the conversation, have it extract salient logs of medication experience, and even mine it for semantic psychometric patterning is definitely there. These things are clearly possible and becoming more achievable for the non-coding person by the month.

I just can't figure out how I could gain enough credible ability to understand the data and personal information security risks above. So for now, I'm going to work on using it for other aspects of work and personal learning projects, but not patient data specifically, until the space matures and takes enough shape to reach a standard of some sort.

Billing functions are clearly an excellent target for AI automations, and I think they may be stripped enough of protected patient information to take off rapidly. If I were in solo private practice working with insurance, this would be my first project. But since I'm 1099 contracting for a clinic that does the billing, it's currently not my focus.
 