AI transcription services

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

xthine
Full Member · 15+ Year Member
Joined: Feb 27, 2007 · Messages: 79 · Reaction score: 6
Thoughts on traditional documentation versus AI? This was brought to my attention recently, and the clinic is considering purchasing one to integrate with our current EHR. The way it was explained to me: it cuts documentation time by 95%; you press record and it automatically formats a SOAP note that can then be edited. I'm currently skeptical, but I have not tried it yet.
Anyone already using DeepScribe or something similar?

I was actually considering posting a question about this very topic recently. I too am skeptical, and I'm persnickety enough to have exacting standards about exactly what information is captured and how, but it would be nice not to have to play stenographer during some encounters. I write long notes and have gotten efficient at it, but my working life would improve massively if I could outsource the cognitively less demanding parts of the note. Color me super interested in hearing from anyone who has experience with these things.
 
No experience with AI transcription, but I have thoughts. Seems like it would be great for those practices wanting to do the 15 minute med checks or for initial H&Ps in the ER or inpatient if they can format that kind of note. I'd be interested in being able to use this in the ER with some patients to cut down on documentation time for patients requiring longer assessments.

It does not seem like something I would want to use for outpatient follow-ups, especially if you're doing therapy regularly and try to avoid including certain statements in notes. If I have to go back and heavily edit the notes, then why bother in the first place?
 
Agree that the utility is dependent entirely on how trainable and customizable the AI is. If it can't get 90% of the way to what I want with some tinkering, hard pass.
 
I don't feel a need to use AI to write notes at all given that I can use smartphrases.

Now, writing other stuff, such as forensic reports and my D&D party's adventure journals: that's different. I sometimes spend over 10 hours writing a forensic report.
 
I tested one of the options by speaking as if I were in a patient encounter (playing both the patient and the doc). It was pretty impressive overall. It would be extremely helpful for intake appointments and generally helpful for follow-ups. But you have to be comfortable with it not formatting or phrasing things exactly the way you would.

Overall, I'm optimistic it would shorten time spent writing notes by 50-75%, especially some of the more tedious parts. I'd probably still write most of the assessment paragraph myself, and there would potentially be other minor parts that would need to be edited.
 
try to avoid including certain statements into notes. If I have to go back and heavily edit notes, then why bother in the first place?

These are my thoughts 100%. The biggest variation in what we do vs. others is which questions we ask and which answers we accept as a "no."

Patient: "I mean, have I thought about killing myself? Yea, who doesn't? I'm not like planning it or anything. I mean have there been days in my life where I've fantasized about jumping out of a plane to slowly free fall to my death, sure. I think that was like last last year or something, maybe last week. I don't know. Not right now or anything. I'm not going to shoot myself. I'm too much of a chicken to do that. And I don't even have a gun or even know the process of what it takes to buy a gun. Would that be the easiest way? Probably."

Psych NP/PA or ER Physician: "Patient reporting SI with plan to shoot himself or jump out of an airplane. Involuntary admission to inpatient psychiatry"

Me in real-life documentation: "Patient denies current SI. Previously considered jumping from an airplane last year, but never formulated a realistic plan. Discharge to home care."

My guess is the AI bot beep-booping in the background spits out a note more like the ER/NP than how I would word the response to that question.
 
You can try it out yourself. Here's one that offers a free trial: Nabla Copilot · Enjoy care again

I haven't tested it thoroughly. If you give it a shot, I'll be interested to hear what you find out.
 
Trying it out now. I'm already concerned that the first specialties listed to choose from are "addictology", "algology", "allergology", and, my favorite, "Anesthesiology and reanimation". Anyway, I just tried it, and I can see how it would be useful. I used @psych_0 's statement, and it put the following in the note:

"They have experienced thoughts of suicide in the past, such as fantasizing about jumping out of a plane, but they deny any recent or current suicidal ideation. The patient does not have any plans or intentions to harm themselves and does not possess or know how to obtain a firearm. They describe themselves as "too much of a chicken" to attempt suicide."

Then I went back, asked just a basic question, used that statement word for word, and it spit out the following:

"The patient has also expressed having suicidal thoughts, although they clarify that these are not active plans but rather occasional fantasies. The patient recalls having such thoughts as recently as last week and as far back as last year. The patient denies any current intent or plan to commit suicide, citing a lack of courage and access to means (specifically a gun) as deterrents."

So it seems like it has potential. My problem comes from what it does with the rest of the note. Even though I only asked about SI, it still auto-populated a ton of other information into the note that was never explored, just saying "patient does not report", as below:

"The patient does not report any specific triggers, risk factors, or alleviating/aggravating factors for these feelings of sadness and suicidal thoughts. There is no mention of any functional or emotional impairments, challenges or obstacles, significant events or traumas, or specific thoughts, beliefs, feelings, aspirations, or fears. The patient does not report any dreams or nightmares. There is no mention of previous treatments, interventions, therapies, or past and current medications."

It did the same thing with my first attempt, which was a more in-depth simulated conversation, but it even auto-populated information into the MSE that was false (for example, it said "HI: None" when I didn't ask about HI at all). Probably not a huge deal most of the time, but if something went to court, parts of the note would read as if you talked about things that you did not, especially the negatively reported items in the MSE that weren't discussed. The MSE format is also just weird (it included "anxiety, depression, affect" as a single line), and some things appeared in the HPI portion that I generally wouldn't ask about, like "Patient did not discuss aspirations, fears, and dreams". It also seemed to have a hard time discerning which role was saying what at times, though that could just be because I was the only one talking. Again, I can see the potential, but at this point it seems so much easier to just use templates and smart phrases than to go back and edit the note it creates.
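The "HI: None" failure mode described above suggests a simple safety check one could run on the output. As a rough sketch only (entirely hypothetical, not a feature of Nabla or any other vendor; the topic names and keyword lists are invented for illustration), a post-processor could flag negative findings in the generated note whose topic never actually appears in the transcript:

```python
import re

# Phrases that signal a "negative" (pertinent-negative) statement in a note.
NEGATIVE_PATTERNS = re.compile(
    r"\b(denies|does not report|no mention of|none)\b", re.IGNORECASE
)

# Hypothetical topic-to-keyword mapping; a real system would need a far
# richer clinical vocabulary than these toy entries.
TOPIC_KEYWORDS = {
    "HI": ["homicid", "hurt anyone", "harm others"],
    "SI": ["suicid", "kill myself", "killing myself"],
    "nightmares": ["nightmare", "dream"],
}

def flag_unverified_negatives(note_lines, transcript):
    """Return (topic, line) pairs where the note asserts a negative
    for a topic that never came up in the transcript."""
    transcript_lower = transcript.lower()
    flagged = []
    for line in note_lines:
        if not NEGATIVE_PATTERNS.search(line):
            continue  # not a negative statement, nothing to verify
        for topic, keywords in TOPIC_KEYWORDS.items():
            line_lower = line.lower()
            if topic.lower() in line_lower or any(k in line_lower for k in keywords):
                # Topic asserted in the note; was it ever discussed?
                if not any(k in transcript_lower for k in keywords):
                    flagged.append((topic, line))
                break
    return flagged

transcript = ("Have I thought about killing myself? Yea, who doesn't? "
              "Not right now or anything.")
note = [
    "SI: denies current suicidal ideation",
    "HI: None",
    "Patient does not report nightmares",
]
print(flag_unverified_negatives(note, transcript))
# [('HI', 'HI: None'), ('nightmares', 'Patient does not report nightmares')]
```

The substring matching here is crude on purpose; the point is only that a deterministic check layered on top of the generator could surface exactly the kind of documented-but-never-asked negatives that would be hard to defend in court.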
 
That's annoying but not that big of a deal. This happens essentially every day in most of the notes in the country: the EMR just auto-populates a bunch of "normal" findings, especially around the physical exam/MSE. The ROS was notorious for this, especially back in the days when you had to document so many ROS points to reach certain coding levels, and it still carries over in a lot of templates and EMRs.

I bet there'd be some way to customize it and get rid of common parts of the note/exam you never do.
 
Yeah, I'm not saying it's the best of these programs, or perfect. I was actually super skeptical when I heard about these. The one I linked was just the one I'm aware of that has a free trial, and it still did a better job than I was expecting when I tried it out. I've heard even better things about some other companies that don't have such easy free trials (because they do a more thorough job integrating with your EMR, note templates, linking prior patient info, etc.).

I have a hard enough time getting our non-Dragon dictation software to recognize what I'm saying, yet somehow these are able to populate pretty decent notes based on just what's said? I think the tech is really promising.
 
When we use this technology, do we have to get consent from the patient as if it were an audio recording?
 
The few that I've looked at don't save any information on their servers, so part of me wants to say probably not. There may be some technicality about having run the data through their servers, but that's also true of dictation software.
 