Psychology and AI: Friend or foe?


I am curious to know what others think about the increase in jobs in the area of psychology and AI.

I am seeing an increasing number of AI companies that are hiring psychologists either to train AI to do diagnosis, assessment, psychotherapy, and other clinical tasks, or to correct and provide feedback to AI when it is doing such tasks.

I was surprised to see on the last job post that more than 100 psychologists had applied to the role. Are we digging our own graves, or will AI work with us? Will AI be another area of work for psychologists, or will it be our end?
 
As long as diploma mills exist, you'll never have a shortage of people willing to do work like this, or to take those ****ty third party comp and pen exams, or any other ****ty job that pays poorly and incentivizes half-assed work.
 
I am curious to know what others think about the increase in jobs in the area of psychology and AI.

I am seeing an increasing number of AI companies that are hiring psychologists either to train AI to do diagnosis, assessment, psychotherapy, and other clinical tasks, or to correct and provide feedback to AI when it is doing such tasks.

I was surprised to see on the last job post that more than 100 psychologists had applied to the role. Are we digging our own graves, or will AI work with us? Will AI be another area of work for psychologists, or will it be our end?
I think that the future prospects of utilizing AI/machine learning tools in certain areas, such as classification of psychopathology, broadband assessment of personality/psychopathology (e.g., under dimensional approaches like HiTOP/RDoC, and in revising instruments like the MMPI-3 or PAI), and computerized adaptive assessment/interviewing, are boundless. However, I think that applied/clinical psychologists will be among the last professional classes to be "replaced" by AI/machine learning algorithms and techniques. But it's an empirical question in the end.
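To make the computerized adaptive assessment piece concrete, here is a minimal sketch of one step of maximum-information item selection under a two-parameter logistic (2PL) IRT model; the item parameters and helper names are hypothetical, purely illustrative:

```python
import numpy as np

# Minimal sketch of one step of computerized adaptive testing (CAT)
# under a two-parameter logistic (2PL) IRT model. The item bank and
# helper names here are hypothetical, purely for illustration.

def p_endorse(theta, a, b):
    """2PL probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information a 2PL item provides at theta."""
    p = p_endorse(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item most informative at the current estimate."""
    candidates = [
        (item_information(theta_hat, a, b), idx)
        for idx, (a, b) in enumerate(item_bank)
        if idx not in administered
    ]
    return max(candidates)[1]

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print(select_next_item(theta_hat=0.4, item_bank=bank, administered={0}))  # -> 2
```

A full CAT would re-estimate the trait level after each response and stop once the standard error fell below a preset threshold.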

And native human 'intelligence' should not be underestimated. Think of the millions of years of evolutionary pressures that honed the human organism and central nervous system into their present state (as a species), compounded by the many decades of life and learning/reinforcement history that 'pruned' precise neural connections and pathway/circuitry strengths ('weights') within a particular individual. The 'artificial intelligence' approach leaves much to be desired, and facile, pseudo-sophisticated predictions of the demise of the relevance/potency of human intelligence (despite its weaknesses, biases, and follies) are likely premature and ill-founded. Eliminative materialism and reductionism have their utility as philosophical precepts, but they should not be blindly praised and worshiped as new gods that will solve all human problems.

Yes, machine learning will be useful to psychologists going forward just like null hypothesis testing and factor analysis were. The model going forward will be a "both-and" model and not an "either-or" model.

Human intelligence and artificial "intelligence" are complementary--not contradictory--resources to be leveraged against problems.
 
As I see it, there are two options:

1) AI work product still has to be signed by a licensed professional, i.e., a human who accepts liability.

a. Thankfully, lawyers make the rules. And there is 700ish years of common law that says that the adult human is responsible for the actions of their horse, kid, and...(sorry)... wife.
b. Corporations have taken actions that seem to indicate they want to maintain their valuation/investment in corporate real estate (i.e., the only reason to make people return to the office is that valuation is partially defined by real estate holdings; if that value falls, the share price goes down, even if WFH is a lower expense).

2) The courts decide that AI is legally a "human" that can be licensed in and of itself.

a. For this to happen, the legal profession would have to file a suit arguing that an AI is the same thing as an attorney. And a judge would have to agree.
b. If #2 happens, every profession that does "cognitive work" is in danger of being replaced. We might be first, but AI can do most of the jobs of: truck drivers, train conductors, pilots, air traffic controllers, attorneys, judges, insurance adjusters, psychiatrists, radiologists, most of dermatology, most of ophthalmology, most of internal medicine, CPAs, stock traders, etc. Imagine how that would affect society, retail spending, etc.

c. Rest assured, the older psychologists and the bad psychologists are happy to sign their names to something for a cash grab. Pearson has been working on this for decades. Even some people on SDN have said they are working on this. I can't see how they find that ethical.

Society is about to have an interesting time. You can't make everyone's jobs obsolete and expect economic and societal stability.
 
The other day I spent several minutes searching for a Portlandia episode that didn't exist because AI told me it did.

And that's ignoring the environmental impact, which IMO is not insignificant.

There are lots of issues with AI and many bad things you can say about it, but so far it accounts for a trivial fraction of US electricity consumption, and the water involved in manufacturing a single piece of paper is about the amount used by 2,500 chatbot prompts. A pair of jeans uses enough water for 5.4 million chatbot prompts. So if you really feel bad, buy one fewer pair of jeans in your lifetime and you're pretty well net-neutral on water use from AI.
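To put those figures in perspective, here is a quick back-of-the-envelope in Python; the equivalences are the claims above, and the usage numbers are hypothetical assumptions:

```python
# Back-of-the-envelope water math using the equivalences quoted above.
# The equivalences are the poster's claims; the usage figures below are
# hypothetical assumptions, not measurements.
PROMPTS_PER_SHEET_OF_PAPER = 2_500
PROMPTS_PER_PAIR_OF_JEANS = 5_400_000

daily_prompts = 30          # assume a fairly heavy chatbot habit
years_of_use = 40           # assume a full working lifetime
lifetime_prompts = daily_prompts * 365 * years_of_use

print(f"Lifetime prompts:      {lifetime_prompts:,}")
print(f"Sheets-of-paper worth: {lifetime_prompts / PROMPTS_PER_SHEET_OF_PAPER:,.0f}")
print(f"Pairs-of-jeans worth:  {lifetime_prompts / PROMPTS_PER_PAIR_OF_JEANS:.2f}")
```

Under those assumptions, 40 years of heavy use comes to roughly 0.08 pairs of jeans' worth of water, which is the point being made.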
 
I was surprised to see on the last job post that more than 100 psychologists had applied to the role. Are we digging our own graves, or will AI work with us? Will AI be another area of work for psychologists, or will it be our end?
Also, this is 100% made up. The company has access to and often manipulates this number.
 
I welcome AI to try and replace expert work, bc between AI “hallucinations” and the hacks going for a quick $-grab, there are more than enough bad cases going around.

As for clinical work, the larger threat is still mid-level clinicians, bc they don’t know what they don’t know. The “clinical” notes in a trash PI case already read like bad AI notes. Even in a good one, they are still largely poorly documented and argued.

We’ve already seen lawyers get reprimanded for passing off bogus citations hallucinated by AI. Just wait until clinicians get skewered for AI-generated notes that fabricate “facts”. It still blows my mind that AI/tech companies are requiring clinicians to upload ALL of their clinical notes to “feed the beast”. What will they do with that information? The answer is whatever could potentially make them the most money.

AI is half-baked technology, but the greater threat is the humans using it. Tech always outpaces the laws surrounding it, so a lot of damage will occur before adequate guardrails are established. Greed and capitalism are already the downfall of this country; AI is just another tool being used to accelerate the process.
 
I give the same two examples each time I am asked about this. I gave a very sophisticated AI medical tool, one our physicians love to utilize, the following two case scenarios and asked for diagnostic and treatment recommendations:

1) A 15-year-old female patient who underwent an extensive and thorough evaluation for seizures. The seizures were determined to be non-epileptic and were attributed to FNSD (functional neurological symptom disorder). The patient largely denies any relevant stress or trauma history, but the onset of seizures appears to coincide with sudden and prolonged physical illness symptoms.

2) A 65-year-old female patient who presents to the emergency department with sudden onset of confusion, visual hallucinations, disorganized speech and thought patterns, and aggression, and who is afebrile. Family notes no history of psychiatric concerns.

Basic examples, for sure, but that was on purpose. On scenario one, all the AI could recommend was generic CBT stuff, and remember, I gave it explicit information about the denied stress or trauma history, which it kept coming back to. After about five minutes of me pointing out the appropriate steps for addressing the concerns with the most up-to-date evidence-based approaches, the AI, and I am not kidding, apologized to me for continuing to give unhelpful or incorrect approaches.

On scenario two, it began by recommending antipsychotics, benzos, and cholinesterase inhibitors. I asked it one simple question: "Why did you not conduct a urinalysis before beginning any of that?"

AI can be a helpful tool for decision-making when it is used judiciously and driven by expert input. The blind following of its recommendations makes me very leery, especially in our field, where "data" is more nebulous.
 
I think LLM-based AI is dead in the water because it likely won't get much better than it currently is. People can create platforms to refine things, but it's about as solid as it's going to get. Even if it completes a task correctly 95% of the time, that's a dealbreaker for most projects. It requires pulling human staff off of their assigned work to babysit the AI. The errors are weird and unpredictable, which makes things worse.
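One way to see why even 95% per-task accuracy is a dealbreaker: errors compound across chained steps. A quick sketch, with arbitrary step counts:

```python
# Why "correct 95% of the time" breaks down for multi-step work: the
# chance of an error-free end product decays geometrically with steps.
# The step counts here are arbitrary, purely illustrative.
per_step_accuracy = 0.95

for steps in (1, 5, 10, 20, 50):
    p_clean = per_step_accuracy ** steps
    print(f"{steps:>2} steps -> {p_clean:6.1%} chance of a fully correct result")
```

At twenty chained steps you are already below a coin flip, which is why someone has to babysit the output.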

There was a belief that if we threw more data at it, it would keep making huge leaps forward. That has not been the case. GPT-5 was a big letdown, and the cracks are showing. This is also about as user-friendly as it will ever be. The VCs are looking for things to become profitable. We're already seeing the companies eye high-volume/low-margin money-makers like ads and adult content. Not great signs for "revolutionary" technology. I would put it in the realm of spellcheck for how useful it is to me. I would be a little slower without it, but I could still easily do my job.

I support anyone who wants to replace me with a chatbot. If the chatbot is a satisfying experience, I'm probably not the therapist for them.
 
But should psychologists apply for these jobs to train AI? I find AI to be both useful and a threat, but psychologists training AI sounds a bit like turning to the dark side of the Empire. Unless, of course, AI is being trained not to replace psychologists but to assist and develop the field.

Society is about to have an interesting time. You can't make everyone's jobs obsolete and expect economic and societal stability.

Society had the same fear, and the same dream, regarding the Industrial Revolution: that machines would do all the work and people could live an idle, leisurely life. But what ended up happening was that people were bent into working like machines. I believe that the biggest threat of AI is that we will begin to think like machines in order to dialogue with AI. For example, when I am on the phone with a health insurance company and talking to an AI agent, I need to communicate my thoughts in a way the AI can understand, so I am becoming like a machine. When someone applies for a job and their application is triaged by AI, or they interview with AI, they need to meet the criteria of the algorithm to get the next interview, so they are thinking like machines. When someone does "therapy" with an AI bot, the person is internalizing the machine and becoming more machine-like. I believe that this could potentially make us less human.
 
But should psychologists apply for these jobs to train AI? I find AI to be both useful and a threat, but psychologists training AI sounds a bit like turning to the dark side of the Empire. Unless, of course, AI is being trained not to replace psychologists but to assist and develop the field.

Should they, or will they? Same group of people that pushed for mandatory, low-paid postdocs in many states that never needed them. Same folks that were willing to torture people at Gitmo if the government paid them enough money. Are you asking whether those folks would take money to train AI?


Society had the same fear, and the same dream, regarding the Industrial Revolution: that machines would do all the work and people could live an idle, leisurely life. But what ended up happening was that people were bent into working like machines. I believe that the biggest threat of AI is that we will begin to think like machines in order to dialogue with AI. For example, when I am on the phone with a health insurance company and talking to an AI agent, I need to communicate my thoughts in a way the AI can understand, so I am becoming like a machine. When someone applies for a job and their application is triaged by AI, or they interview with AI, they need to meet the criteria of the algorithm to get the next interview, so they are thinking like machines. When someone does "therapy" with an AI bot, the person is internalizing the machine and becoming more machine-like. I believe that this could potentially make us less human.

There are many problems with AI; that is the least of my worries. A better question is what happens to society if people prefer talking to AI? Folks never leave their house now.
 
"AI" is generally disliked. When people think of AI, they're usually talking about LLMs, I think.

There was a Pew study about it recently.

Even if a therapist wanted to train their replacement, it's not happening with LLMs. They're just not sophisticated enough. The GPTs are great at generalities (describing CBT sessions for notes or providing a list of potential differential diagnoses), but they're just not that good at precision. That likely won't get much better. There are other AI systems that might be more usable, but these tend to be very targeted. They're not trying to replace an entire clinician. They're marketed as tools for the clinicians, who will oversee the output and change it to be more accurate. One example would be some of the documentation applications out there. They listen to the session, do a pretty good job of labeling specific therapeutic techniques, and spit out a decent note for editing. After seeing the end product, I appreciate that they capture stuff I forget about, but it's still up to me to adjust the things they get wrong. They're giving me raw data and some basic labels from the session, not reliable interpretation.
 
There are many problems with AI; that is the least of my worries. A better question is what happens to society if people prefer talking to AI? Folks never leave their house now.

Gary V has spoken about this on podcasts. He was asked about humans having AI/robots as girlfriends, and he talked about how younger generations engage with technology much differently than older generations. Basically, he talked about how kids NOW use chatbots to engage, to have an AI girlfriend, to socialize online INSTEAD of in person, etc. I forget the percentage, but greater than 0% disclose that they have online/AI girlfriends NOW. The surprising part (as an Elder Millennial) was how okay the kids were with basically living in an online simulation. He said that they were fine with it being AI vs. human bc it was meeting one or more needs for them.

Many problems with this, but capitalism will continue to prioritize AI because all of the investors are doing a smash-and-grab with the technology. We already see the cracks, so I expect these AI companies to get MORE aggressive with launching products and selling companies on promises, with the hope that they can eventually get it to work, maybe. Look at where the investments are in the stock market, and it quickly becomes apparent that much of the "value" in the market is connected to AI and projections of future advances. It's a super dicey house of cards, not unlike the crypto bubble. No one wants to be left holding the bag.
 
I wonder if AI is uniquely bad or if it's just another flavor of online culture. I have had a lot of cases of young folks being intensely distressed by compulsive behavior around OF. They will pay eye-watering amounts of money just to have their username said aloud by the individual running the stream. AI is cheaper at this stage, for sure.
 
Gary V has spoken about this on podcasts. He was asked about humans having AI/robots as girlfriends, and he talked about how younger generations engage with technology much differently than older generations. Basically, he talked about how kids NOW use chatbots to engage, to have an AI girlfriend, to socialize online INSTEAD of in person, etc. I forget the percentage, but greater than 0% disclose that they have online/AI girlfriends NOW. The surprising part (as an Elder Millennial) was how okay the kids were with basically living in an online simulation. He said that they were fine with it being AI vs. human bc it was meeting one or more needs for them.

Many problems with this, but capitalism will continue to prioritize AI because all of the investors are doing a smash-and-grab with the technology. We already see the cracks, so I expect these AI companies to get MORE aggressive with launching products and selling companies on promises, with the hope that they can eventually get it to work, maybe. Look at where the investments are in the stock market, and it quickly becomes apparent that much of the "value" in the market is connected to AI and projections of future advances. It's a super dicey house of cards, not unlike the crypto bubble. No one wants to be left holding the bag.

Yeah, the stock market is basically propped up on an AI bubble. Otherwise, we would likely be in a recession already, if we ever get the numbers.

That said, I am doubling down on the fact that when all this blows up, I am going to be waist deep in social anxiety cases well into retirement.
 
Society had the same fear, and the same dream, regarding the Industrial Revolution: that machines would do all the work and people could live an idle, leisurely life. But what ended up happening was that people were bent into working like machines

If only there were some discussion of the shift of income during the Industrial Revolution, from those who performed the work to those who controlled the means of production. Because someone definitely developed an idle and leisurely life, while everyone else did not.

Someone could write a short book about the shift in income. We could call it "Mr. Proletariat and Dr. Bourgeois' Excellent Adventure".
 
If only there were some discussion of the shift of income during the Industrial Revolution, from those who performed the work to those who controlled the means of production. Because someone definitely developed an idle and leisurely life, while everyone else did not.

Someone could write a short book about the shift in income. We could call it "Mr. Proletariat and Dr. Bourgeois' Excellent Adventure".

I am pretty sure someone wrote extensively about this in the past.
 
I think LLM-based AI is dead in the water because it likely won't get much better than it currently is. People can create platforms to refine things, but it's about as solid as it's going to get. Even if it completes a task correctly 95% of the time, that's a dealbreaker for most projects. It requires pulling human staff off of their assigned work to babysit the AI. The errors are weird and unpredictable, which makes things worse.

There was a belief that if we threw more data at it, it would keep making huge leaps forward. That has not been the case. GPT-5 was a big letdown, and the cracks are showing. This is also about as user-friendly as it will ever be. The VCs are looking for things to become profitable. We're already seeing the companies eye high-volume/low-margin money-makers like ads and adult content. Not great signs for "revolutionary" technology. I would put it in the realm of spellcheck for how useful it is to me. I would be a little slower without it, but I could still easily do my job.

I support anyone who wants to replace me with a chatbot. If the chatbot is a satisfying experience, I'm probably not the therapist for them.
I think it's enough right now to transform a lot of stuff in the field 10x over.
 
I think it's enough right now to transform a lot of stuff in the field 10x over.
I try to keep an eye on the new stuff just because I find it interesting. Is there something specific that seems like it'll make big changes?
 
"AI" is generally disliked. When people think of AI, they're usually talking about LLMs, I think.

There was a Pew study about it recently.

Even if a therapist wanted to train their replacement, it's not happening with LLMs. They're just not sophisticated enough. The GPTs are great at generalities (describing CBT sessions for notes or providing a list of potential differential diagnoses), but they're just not that good at precision. That likely won't get much better. There are other AI systems that might be more usable, but these tend to be very targeted. They're not trying to replace an entire clinician. They're marketed as tools for the clinicians, who will oversee the output and change it to be more accurate. One example would be some of the documentation applications out there. They listen to the session, do a pretty good job of labeling specific therapeutic techniques, and spit out a decent note for editing. After seeing the end product, I appreciate that they capture stuff I forget about, but it's still up to me to adjust the things they get wrong. They're giving me raw data and some basic labels from the session, not reliable interpretation.
If you think about this like a scientist and research-informed clinician, you’re missing the actual risk.

Consumers of mental health services aren’t able to discern feeling better from getting better at high rates. If they were, incompetent midlevels wouldn’t exist and almost no one would be in therapy for more than a couple weeks at a time. You don’t need marked specificity or precision to make people feel better without caring if they get better.

And why would large-scale AI therapy companies care if people got better? In my PP (coming back online in a few weeks! For like 4 patients a week), why would I care if someone got better and left? I have a waitlist as long as my arm; I can just replace them. But getting better on an AI therapy service means terminating a subscription. Why would a company with a sole profit motive want to treat its patients out of being customers, when its satisfaction rates would be just as high, on average, providing sycophantic slop?

Just to be clear, I think there are ways AI can and will revolutionize health care. But slapping it across any business model isn't one of them.
 
AI has been useful in helping me search for information. It's an improvement over a standard search engine, in my opinion. I also enjoy testing it and finding the limitations and flaws. This second part is pretty much psychotherapy. The main difference is that patients can learn and develop new patterns of responding and interacting; at this point, AI can't. It merely simulates it, and I can tell the difference. I would probably enjoy being involved in developing this tech, and it is interesting to think about the ethics of training my replacement. I also believe that, like any tool, some humans will use it for their benefit at a cost to others. As psychologists, we could potentially be involved in how to minimize and safeguard against that, but given the current state of human thought and our field, AI might do a better job of that itself.
 
I try to keep an eye on the new stuff just because I find it interesting. Is there something specific that seems like it'll make big changes?
I'm not sure that it's any one thing; rather, the onset of its modern capacity has been extremely quick, and what it can already do is still a bit impressive. As the capability of, and access to, these tools grow, so will their influence across our field. Secure AI transcription devices that save all sessions and complete the note in real time seem like just one minor and realistic change. The recent novel scientific breakthroughs are enough to cement my view that the influence on our field will be greater than transcription.
 
AI has been useful in helping me search for information. It's an improvement over a standard search engine, in my opinion. I also enjoy testing it and finding the limitations and flaws. This second part is pretty much psychotherapy. The main difference is that patients can learn and develop new patterns of responding and interacting; at this point, AI can't. It merely simulates it, and I can tell the difference. I would probably enjoy being involved in developing this tech, and it is interesting to think about the ethics of training my replacement. I also believe that, like any tool, some humans will use it for their benefit at a cost to others. As psychologists, we could potentially be involved in how to minimize and safeguard against that, but given the current state of human thought and our field, AI might do a better job of that itself.
For run-of-the-mill Google searches, I've actually found the results to be less trustworthy than what would normally come up (e.g., by overemphasizing info from what are probably popular, but questionably accurate/robust, websites). But I'm sure there are more sophisticated AI search tools than Google's "AI overview."
 
I’ll be a little frank here. If all you’re doing is replacing a Google search with a ChatGPT search, I don’t think you fully understand what AI/LLMs are and can do.

If you dismiss LLMs just because their present responses don’t seem sophisticated enough to you, you sound a bit like the graphic artists who said AI art always messes up hands.
Go ask a graphic artist how their job market is.
 
If you think about this like a scientist and research-informed clinician, you’re missing the actual risk.

Consumers of mental health services aren’t able to discern feeling better from getting better at high rates. If they were, incompetent midlevels wouldn’t exist and almost no one would be in therapy for more than a couple weeks at a time. You don’t need marked specificity or precision to make people feel better without caring if they get better.

And why would large-scale AI therapy companies care if people got better? In my PP (coming back online in a few weeks! For like 4 patients a week), why would I care if someone got better and left? I have a waitlist as long as my arm; I can just replace them. But getting better on an AI therapy service means terminating a subscription. Why would a company with a sole profit motive want to treat its patients out of being customers, when its satisfaction rates would be just as high, on average, providing sycophantic slop?

Just to be clear, I think there are ways AI can and will revolutionize health care. But slapping it across any business model isn't one of them.
Honestly, I am looking at this from a business perspective, and I don't think LLM-based AI will revolutionize healthcare. ChatGPT loses money with every prompt. OpenAI is the most successful of all these companies, and they're still deeply in the hole with no real viable, profitable product.

OpenAI, Anthropic, and even Grok are starting to put guardrails on their platforms, because many signs point to them becoming more advertiser-focused. These systems are incredibly expensive to run and really hard to control. Even minor tweaks have unpredictable and often not easily fixable effects. For example, people strongly dislike GPT-5, and efforts to bring back the old models have been largely mixed or unsuccessful because it's difficult to make those kinds of granular changes in these massive systems. I would hate to build my company on that kind of instability. I would not want to be "Ash" or "Abby" right now in the middle of the turmoil.

We're working from the assumption that LLMs will stabilize and be adopted by companies long-term. I am still skeptical that will be the case, especially as prices go up. Many serious AI-based startups place significant limits on the number of prompts a consumer can use because of the expense; there are limitations even with a subscription. And that's with all the large AI platforms deeply discounting costs to be the winner in the growth game. VCs are starting to lose their appetite for AI proposals without a concrete plan to make money, and I think the pullback will be the next phase. I may be overly focused on the financial side, but the numbers just don't make sense to me. Grifters want money, and there's only money from investors at the moment, not consumers. I bet they're all crossing their fingers that someone buys them out before everything folds.
 
Honestly, I am looking at this from a business perspective, and I don't think LLM-based AI will revolutionize healthcare. ChatGPT loses money with every prompt. OpenAI is the most successful of all these companies, and they're still deeply in the hole with no real viable, profitable product.

OpenAI, Anthropic, and even Grok are starting to put guardrails on their platforms, because many signs point to them becoming more advertiser-focused. These systems are incredibly expensive to run and really hard to control. Even minor tweaks have unpredictable and often not easily fixable effects. For example, people strongly dislike GPT-5, and efforts to bring back the old models have been largely mixed or unsuccessful because it's difficult to make those kinds of granular changes in these massive systems. I would hate to build my company on that kind of instability. I would not want to be "Ash" or "Abby" right now in the middle of the turmoil.

We're working from the assumption that LLMs will stabilize and be adopted by companies long-term. I am still skeptical that will be the case, especially as prices go up. Many serious AI-based startups place significant limits on the number of prompts a consumer can use because of the expense; there are limitations even with a subscription. And that's with all the large AI platforms deeply discounting costs to be the winner in the growth game. VCs are starting to lose their appetite for AI proposals without a concrete plan to make money, and I think the pullback will be the next phase. I may be overly focused on the financial side, but the numbers just don't make sense to me. Grifters want money, and there's only money from investors at the moment, not consumers. I bet they're all crossing their fingers that someone buys them out before everything folds.
Did you ever see the movie Sherlock Holmes (the one with RDJr)? Moriarty isn’t interested in the main plot driver—he steals the wireless transmission technology.

OpenAI doesn’t exist to make money for the company. It exists to enrich and empower Altman, who will then have political leverage to do other things. That’s not a bug, it’s a feature.
 
Did you ever see the movie Sherlock Holmes (the one with RDJr)? Moriarty isn’t interested in the main plot driver—he steals the wireless transmission technology.

OpenAI doesn’t exist to make money for the company. It exists to enrich and empower Altman, who will then have political leverage to do other things. That’s not a bug, it’s a feature.
I do think he is very motivated to enrich and empower himself. I am hesitant to flatten the complexity of what's happening at OpenAI to the desire of Altman to amass power.
 
Sure, and Trump shoes are high quality products. 😉
It's hard to give good-faith responses to quips. We're talking about the difference between a $2 million company and a company helping to prop up the US economy based on hopes and dreams. I understand your broader point. I think there is a richer story to parse out.
 
I do some research in this space (new R01 funded just before the shutdown!), albeit quite a bit more niche and wildly different from the focus here, which seems to be on LLMs and AI chatbots. AI has been around in a general sense for 50 years. The line between AI and the traditional statistics we are all familiar with is actually quite blurry.

I think it's silly to stick our heads in the sand, deny its uses, and let the world move forward without us. As a profession, we've done that in the past on numerous occasions, and it hasn't exactly gone well for us. That said, I agree with the above that right now it's a "new tool" at best. I'm very certain companies will leverage it to reduce costs, but I'm even more certain it won't work quite right a LOT of the time, and human intervention will be necessary. Many developers seem very focused on the idea that more data will somehow solve all problems, but engineers seem to have almost zero understanding of data-quality issues. For example, there are TONS of folks doing EHR analysis right now. Think about the last time you went to the doctor. Did they ask you if you were still taking some medication you haven't taken in five years? That you told them you stopped last time, and the time before that too? They sure do for me. Simplistic example, but I think it's representative of the challenges this will face. Models will certainly continue to improve but will max out well short of perfection. Embedding guardrails is starting to happen, but this is not going to be easy. Hallucinations are also very real, and I'm not convinced this is a readily solvable problem given how these models operate; it's not like a giant series of if-then statements you can manually fix. Just yesterday, ChatGPT referred me to a very specific and reasonable-seeming meta-analysis containing the correlation between two constructs I was interested in... I spent 30 minutes scouring every inch of it, and ChatGPT 100% made that number up.
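A toy illustration of that stale-medication problem, with made-up field names rather than any real EHR schema: a naive query trusts the status flag, while a careful one also checks when the entry was last verified.

```python
from datetime import date

# Toy illustration of the stale-medication problem: an "active" med list
# is only as good as its last verification. Field names are hypothetical,
# not any real EHR schema.
meds = [
    {"drug": "sertraline", "status": "active", "last_verified": date(2025, 9, 1)},
    {"drug": "lisinopril", "status": "active", "last_verified": date(2020, 3, 15)},
]

def plausibly_current(record, today=date(2025, 10, 1), max_age_days=365):
    """Count an 'active' med as current only if it was verified recently."""
    age_days = (today - record["last_verified"]).days
    return record["status"] == "active" and age_days <= max_age_days

# A naive query trusts the status flag; a careful one checks freshness.
naive = [m["drug"] for m in meds if m["status"] == "active"]
careful = [m["drug"] for m in meds if plausibly_current(m)]
print(naive)    # ['sertraline', 'lisinopril']
print(careful)  # ['sertraline']
```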

My approach to this is one of balanced skepticism. I'm very open to new technology and actively look for ways to leverage it. I'm also not worried about being replaced any time soon, because I'm not a one-trick pony. If I drove for Uber, I'd be far more worried about self-driving cars putting me out of work, but even that I think is unlikely to happen within the next 20 years. Will it change how I do my job? Almost certainly. Will I need to pivot? Possibly. Am I worried about starving in the street because AI took my job? Not even a little bit.
 
I’ll be a little frank here. If all you’re doing is replacing a Google search with a ChatGPT search, I don’t think you fully understand what AI/LLMs are and can do.

If you dismiss LLMs just because their present responses don’t seem sophisticated enough to you, you sound a bit like the graphic artists who said AI art always messes up hands.
Go ask a graphic artist how their job market is.
I appreciate the idea of "let's see what this tool can do, has already done, and will continue to do." I too am very interested in its ability to really assist in areas where analytical geometry and infinitesimal calculus can lead to newer discoveries. At the end of the day, it is still algorithmic, no?

To demonstrate, I asked Copilot to generate an "out of the box" thought related to AI's ability to generate its own internalized line of questioning about the ability to question. The thing started to develop an answer and then broke. It literally gave me: "Sorry, it looks like I can't chat about this. Let's try a different topic."

We in psychology can talk about Descartes's posits all day long. Debate to no end. Rebut with witty banter. Muse. Question. For evidence, literally review the multiple other threads in these forums that debate dualism.

AI? See above.
 
This is smaller potatoes with AI currently, but my current center is angling toward buying one of those add-on AI packages for the EMR, where it listens to your sessions and provides a summary/conceptualization.

So, a hypothetical question for you all: if your job mandated that you use it, would you accept?
 
This is smaller potatoes with AI currently, but my current center is angling toward buying one of those add-on AI packages for the EMR, where it listens to your sessions and provides a summary/conceptualization.

So, a hypothetical question for you all: if your job mandated that you use it, would you accept?
This seems so small and predictable (the integration). Good to see an example here already of what I said above would go mainstream.

The answer is yes; new technology will be adopted as normal practice.
 
This is smaller potatoes with AI currently, but my current center is angling toward buying one of those add-on AI packages for the EMR, where it listens to your sessions and provides a summary/conceptualization.

So, a hypothetical question for you all: if your job mandated that you use it, would you accept?

If it's like, you have to use it or you will be fired, then I would begrudgingly use it. Otherwise, I would resist as long as possible.

Did you guys see the AWS outage yesterday? I think that relying on technology to this extent is not the smartest idea.
 
It's hard to give good-faith responses to quips. We're talking about the difference between a $2 million company and a company helping to prop up the US economy based on hopes and dreams. I understand your broader point. I think there is a richer story to parse out.
Surely you’ve heard the word “oligarchy” get used a lot in the last ten months. That’s literally this.
 
This seems so small and predictable (the integration). Good to see an example here already of what I said above would go mainstream.

The answer is yes; new technology will be adopted as normal practice.
Sure, but is there a limit? I certainly would think twice about going to therapy if every single session were recorded and treated as part of the medical record.
 
That's a question of how, in application, not if. The "if" is settled.
I'm not sure what you mean by the "if" already being settled. This has not been rolled out in my region, and my understanding is that it's in very early stages at some medical centers. It seems far from a settled standard of care to me.
 
I'm not sure what you mean by the "if" already being settled. This has not been rolled out in my region, and my understanding is that it's in very early stages at some medical centers. It seems far from a settled standard of care to me.
Our medical clinics all already utilize ambient scribes. There exist ones specific to outpatient services, including therapy sessions:


*Not an endorsement; I just wanted to provide information that such products exist.
 
Our medical clinics all already utilize ambient scribes. There exist ones specific to outpatient services, including therapy sessions:


*Not an endorsement; I just wanted to provide information that such products exist.
Thanks for the info! Do you happen to know if the recordings are considered part of the medical record, or how storage is handled?
 
You lucky dog! May I ask what institute?
I had one that was supposed to be funded after the Council mtg in June, but it then got backtracked due to forward funding eating all the money; it's now in limbo and presumably dead 🙁
NIDA, though we had an NCI (not AI-related) app funded too.

I have no idea how they are prioritizing things now with forward funding (or even determining what gets forward-funded, since it isn't 100% of grants). NCI historically operated exclusively off percentiles, but I do not believe we'd have gotten it if that were the case here, since forward funding dropped paylines.

Don't want to derail discussion, feel free to PM if you want to discuss further.
 
Thanks for the info! Do you happen to know if the recordings are considered part of the medical record, or how storage is handled?
Let me ask one of my colleagues, as I do not use it myself, just to make sure I don't spread misinformation regarding it.

EDIT: After discussing it with my colleagues, they were able to show me that the system records and transcribes the entire conversation overheard in the room and places this in the patient's secure chart. It then appears to use the onboard LLM to interpret the transcript and generate the note from it, suggesting it is all self-contained and therefore, theoretically, secure. I will say the encryption for the transcription appears robust, as it took multiple steps and numerous authentications for them even to show me where it was located.
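For the curious, the flow described above might be sketched roughly like this; every name in the sketch is a hypothetical placeholder, not any vendor's actual API:

```python
from dataclasses import dataclass

# Rough sketch of the ambient-scribe flow described above:
# record -> transcribe -> LLM drafts the note -> both land in the chart.
# Every name here is a hypothetical placeholder, not a vendor API.

@dataclass
class SessionNote:
    patient_id: str
    transcript: str                   # full transcript, stored in the chart
    draft_note: str                   # LLM-generated draft, not the final note
    clinician_approved: bool = False  # a human still has to sign off

def transcribe_audio(audio: bytes) -> str:
    """Stand-in for the vendor's on-device speech-to-text."""
    return "<verbatim transcript of the in-room conversation>"

def generate_note(transcript: str) -> str:
    """Stand-in for the self-contained LLM that drafts the note."""
    return f"Draft note summarizing: {transcript}"

def process_session(patient_id: str, audio: bytes) -> SessionNote:
    transcript = transcribe_audio(audio)
    return SessionNote(patient_id, transcript, generate_note(transcript))

note = process_session("pt-001", b"")
print(note.draft_note)
```

The clinician_approved flag reflects the point made repeatedly in this thread: the draft is raw material for editing, not a finished note.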
 