AI in psychology


calimich

Associate Professor
Lifetime Donor
10+ Year Member
Joined
Nov 22, 2013
Messages
658
Reaction score
1,013
How is AI showing up in your work?

We've just begun considering AI on campus, most concretely by including a detector within the online student paper submission portal. It attempts to detect, and highlight, the percentage of the paper that was AI-generated. For me this semester, out of ~120 student papers (~800 pages), nearly all were marked at 0%. I had a few that were ~5%, and it looked like AI was most commonly used to generate topic sentences. There were also two final papers that had the whole abstract marked as AI. Nothing egregious, and we don't have an official campus policy yet. I've also played around with its ability to aesthetically display data.

In pp (private practice), it's come up with a couple of clients, mostly with concerns about how AI might impact their work and career trajectory. Others are using it as a WebMD of sorts. I've used it, similar to the physicians in the Times article, to suggest edits to my report writing, like replacing jargon and using a more active voice. I see it as a kind of super thesaurus.

Personally, I've used ChatGPT as a travel agent, to generate ideas and help plan complicated trips, and to more deeply explore music by suggesting artists/composers from parts of the world I'm not familiar with.

How is AI showing up in your life?

NY Times - Doctors using ChatGPT

APA - How to use ChatGPT as a learning tool

Al Jazeera - ChatGPT is giving therapy. A mental health revolution may be next

Elyoseph, Hadar-Shoval, Asraf, & Lvovsky (2023). ChatGPT outperforms humans in emotional awareness evaluations

 
  • Like
Reactions: 1 users
How is AI showing up in your work?

We've just begun considering AI on campus, most concretely by including a detector within the online student paper submission portal. It attempts to detect, and highlight, the percentage of the paper that was AI-generated. For me this semester, out of ~120 student papers (~800 pages), nearly all were marked at 0%. I had a few that were ~5%, and it looked like AI was most commonly used to generate topic sentences. There were also two final papers that had the whole abstract marked as AI. Nothing egregious, and we don't have an official campus policy yet. I've also played around with its ability to aesthetically display data.

In pp (private practice), it's come up with a couple of clients, mostly with concerns about how AI might impact their work and career trajectory. Others are using it as a WebMD of sorts. I've used it, similar to the physicians in the Times article, to suggest edits to my report writing, like replacing jargon and using a more active voice. I see it as a kind of super thesaurus.

Personally, I've used ChatGPT as a travel agent, to generate ideas and help plan complicated trips, and to more deeply explore music by suggesting artists/composers from parts of the world I'm not familiar with.

How is AI showing up in your life?

NY Times - Doctors using ChatGPT

APA - How to use ChatGPT as a learning tool

Al Jazeera - ChatGPT is giving therapy. A mental health revolution may be next

Elyoseph, Hadar-Shoval, Asraf, & Lvovsky (2023). ChatGPT outperforms humans in emotional awareness evaluations

I use it for my practice to help with marketing.
 
  • Like
Reactions: 1 user
I don't even know how to use AI at this point, which should probably be a bit embarrassing to admit. I should see if it can help me shorten my reports.


Only YOU can prevent overly long reports.
 
  • Like
  • Haha
Reactions: 8 users
How is AI showing up in your work?

We've just begun considering AI on campus, most concretely by including a detector within the online student paper submission portal. It attempts to detect, and highlight, the percentage of the paper that was AI-generated. For me this semester, out of ~120 student papers (~800 pages), nearly all were marked at 0%. I had a few that were ~5%, and it looked like AI was most commonly used to generate topic sentences. There were also two final papers that had the whole abstract marked as AI. Nothing egregious, and we don't have an official campus policy yet. I've also played around with its ability to aesthetically display data.

In pp (private practice), it's come up with a couple of clients, mostly with concerns about how AI might impact their work and career trajectory. Others are using it as a WebMD of sorts. I've used it, similar to the physicians in the Times article, to suggest edits to my report writing, like replacing jargon and using a more active voice. I see it as a kind of super thesaurus.

Personally, I've used ChatGPT as a travel agent, to generate ideas and help plan complicated trips, and to more deeply explore music by suggesting artists/composers from parts of the world I'm not familiar with.

How is AI showing up in your life?

NY Times - Doctors using ChatGPT

APA - How to use ChatGPT as a learning tool

Al Jazeera - ChatGPT is giving therapy. A mental health revolution may be next

Elyoseph, Hadar-Shoval, Asraf, & Lvovsky (2023). ChatGPT outperforms humans in emotional awareness evaluations

Today, it's AI algorithms to detect and report on the percentage of a paper that was 'authored' by an AI algorithm (however the heck that's determined, and with whatever known error rate).

Next will be the AI algorithm that writes a paper to defeat the AI algorithm designed to detect whether the paper was authored by an AI algorithm...ad infinitum...turtles all the way down...holding everything up.

Think the virus/malware vs. anti-virus wars...

I predict that all future academic evaluations are gonna have to require in-person oral examinations after a bioscan to rule out cybernetic implants
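Worth noting on the error-rate point: even a decent detector produces lots of false alarms when few papers actually use AI. A quick base-rate sketch; all the numbers here are hypothetical, not any vendor's actual specs:

```python
# Base-rate sketch: how often is a "flagged as AI" paper actually AI?
# Sensitivity, false-positive rate, and prevalence are hypothetical.
def detector_ppv(sensitivity, false_positive_rate, prevalence):
    """Positive predictive value of a flag, given the base rate."""
    true_flags = sensitivity * prevalence
    false_flags = false_positive_rate * (1 - prevalence)
    return true_flags / (true_flags + false_flags)

# Suppose 5% of papers use AI, the detector catches 90% of those,
# and it wrongly flags 2% of honest papers:
ppv = detector_ppv(sensitivity=0.90, false_positive_rate=0.02, prevalence=0.05)
```

At those assumed rates the PPV comes out around 0.70, i.e. roughly 3 in 10 flagged papers would be false alarms, which is one reason to treat a detector's percentage as a lead to follow up on, not a verdict.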
 
  • Haha
Reactions: 1 user
I look forward to the enterprising young and precocious VA innovator at some MIRECC who creates an AI program to compose a customized limerick to cure the PTSD out of any veteran reading it. Technology sure is gosh darned amazing.

"There once was a soldier named Bourne
Whose psyche was tattered and torn;
He faced all his fears
And away went his tears
'Til at last there was nothing to mourn."
 
Last edited:
  • Like
  • Haha
Reactions: 4 users

If there are between 10^111 to 10^123 possible move variations in a game of chess...

how many possible move variations are there in a course of evaluation/psychotherapy with a veteran over a 2 year course of treatment?

tic tac toe < checkers < chess < VA psychotherapy practice

good luck, AI overlords

edit: on the other hand, AI-assisted evaluation of relevant clinical constructs like overreporting/underreporting of psychiatric symptoms may finally be allowed to occur within that system, since admins and clinicians can just shrug and point to the computer, right? I'll give you exactly one guess as to who the veteran is going to start arguing with. Not 'the AI.' Which illustrates the truism that algorithms don't 'make decisions'...people do.
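The chess comparison above is just exponent arithmetic: b options per turn over d turns gives b^d paths. A toy sketch, where the therapy numbers are illustrative guesses rather than real estimates of clinical complexity:

```python
# Toy game-tree arithmetic: b options per turn, d turns deep -> b**d paths.
# The therapy branching factor and session count are invented guesses.
def tree_size(branching: int, depth: int) -> int:
    return branching ** depth

chess = tree_size(35, 80)    # ~35 legal moves, ~80 plies (Shannon's estimate)
therapy = tree_size(50, 104) # say 50 plausible responses x 104 weekly sessions

# chess lands on the order of 10**123; the toy therapy figure is larger still
assert therapy > chess
```

Shannon's ~35 moves over ~80 plies reproduces the familiar 10^123-ish figure; if you grant even a few dozen plausible clinician responses per weekly session over two years, the toy count blows past it.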
 
Last edited:
  • Like
Reactions: 1 user
I look forward to the enterprising young and precocious VA innovator at some MIRECC who creates an AI program to compose a customized limerick to cure the PTSD out of any veteran reading it. Technology sure is gosh darned amazing.

I would be more concerned about its future for differential dx and possible uses to replace psychologists doing testing altogether. I do think physicians in non-interventional, non-surgical fields may also have more to worry about long-term. I can see AI being increasingly relied upon for dx. I can also see it being used for C&P evals.
 
  • Like
Reactions: 1 user
I would be more concerned about its future for differential dx and possible uses to replace psychologists doing testing altogether. I do think physicians in non-interventional, non-surgical fields may also have more to worry about long-term. I can see AI being increasingly relied upon for dx. I can also see it being used for C&P evals.
The thing about diagnosis is you always have to have a 'gold-standard' criterion against which to evaluate the accuracy of the AI classification algorithm. Isn't that, ultimately, going to be a human clinician?
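In practice, the 'clinician as criterion' question often gets answered with agreement statistics rather than a single gold standard: check how often the algorithm and the clinician assign the same label, corrected for chance. A minimal Cohen's kappa sketch, with invented toy labels:

```python
# Chance-corrected agreement between an AI labeler and a clinician.
# Diagnoses here are invented toy labels, not real chart data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

clinician = ["ptsd", "mdd", "ptsd", "gad", "ptsd", "mdd"]
algorithm = ["ptsd", "mdd", "mdd", "gad", "ptsd", "ptsd"]
kappa = cohens_kappa(clinician, algorithm)
```

Kappa sidesteps the need for ground truth, but it only measures consistency with the clinician, which is exactly the circularity being pointed out.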
 
The thing about diagnosis is you always have to have a 'gold-standard' criterion against which to evaluate the accuracy of the AI classification algorithm. Isn't that, ultimately, going to be a human clinician?

In the forensic or utilization review realm, sure. In the clinical realm? Look how much garbage dx is in the average chart already. What stops the VA or any community hospital system from significantly reducing the number of doctors on salary and just using AI and mid-levels to save a buck, with a few dept chiefs or clinical leads to provide second opinions? The flipside is maybe less need for mid-levels altogether if the AI can augment a doctor's productivity.
 
Last edited:
  • Like
Reactions: 1 user
Members don't see this ad :)
In the forensic or utilization review realm, sure. In the clinical realm? Look how much garbage dx is in the average chart already. What stops the VA or any community hospital system from significantly reducing the number of doctors on salary and just using AI and mid-levels to save a buck, with a few dept chiefs or clinical leads to provide second opinions? The flipside is maybe less need for mid-levels altogether if the AI can augment a doctor's productivity.
I guess I'm just baffled as to how this would be implemented.

Would veterans literally make 50 min appointments with an AI chatbot that would proceed to (by chat) conduct a facsimile of a diagnostic interview or psychotherapy session and then, at some predetermined point, render a 'finalized' diagnosis that isn't open to debate, questioning, or further input by a veteran??? Or would they have continuous/unlimited ongoing access to chatting with it around the clock? A hovering robot that follows the veteran around, video/audio recording continuously for a week?

Would the AI 'reside' in a lifelike robot that would do the same? A hologram? Would the robot have a mace or peppergas dispenser built into its nostrils? It had better be sturdily constructed.

I could see C&P exams being (possibly) operationalized, discrete episodes of assessment/diagnosis, but not what I do as an outpatient clinician. Not without VA admins and politicians 'growing a pair' (yeah...riiiiiiiight) and telling veterans, 'VA DocBot has spoken; you're done with access to psychotherapy...and, no, you may not reschedule with DocBot to further explain your nightmares, or that they've increased in frequency, or that you have a new symptom, or that you forgot to mention this or that.' Ole DocBot is gonna have all the same headaches and complexities to manage in terms of mental health patients/veterans' presentations, complaints, shucks, and jives that I've learned how to handle over several years of practicing. I'd like to see it try. You think veterans aren't gonna have a problem with it?

Unless...of course....anyone really believes that all the non-clinician types are gonna be fielding all these queries, objections, etc. from disgruntled vets who object to what Ole DocBot has to say about their diagnoses, tx plans, goals for therapy, progress, etc. LOL, give me a break, I will bet my career stability on the fact that there ain't NO admin at VA who wants me NOT to be there in between them and a veteran mental health patient who is disgruntled over his mental health care. Puuuuh-leeeeeeeeze!!!!!
 
Last edited:
Why is it really problematic exactly?

The scale was never designed for ChatGPT to take it, so the scoring criterion (even if objectively applied) was not made to differentiate between human responses and responses that merely sound human. The highest possible scoring level is readily attainable by today's technology, since all you need to do is differentiate your own emotions from others'. I'm not sure it means that it's useful for practice, as the authors imply.

For fun, I took the example scenario from the Lane et al. (1990) paper (it's on ResearchGate) and plugged it into ChatGPT. I changed 'you' to 'human' as the study authors did:

A human and their best friend are in the same line of work. There is a prize given annually to the best performance of the year. Both the human and their best friend work hard to win the prize. One night the winner is announced: the best friend won the prize. How would the best friend feel?

Response:

If the best friend wins the prize while both the human and their best friend have been working hard in the same line of work, the best friend would likely experience a mix of emotions. Here are a few possible reactions:

  1. Joy and Excitement: The best friend would probably feel elated and overjoyed about winning the prize. They may celebrate their achievement, feeling proud of their hard work paying off.
  2. Surprised: Winning the prize might come as a surprise to the best friend, especially if they didn't expect to be chosen as the winner. This surprise could amplify their joy and excitement.
  3. Gratitude: The best friend might feel grateful for the recognition and appreciation of their efforts. They may appreciate the support they received from the human and acknowledge their role in helping them achieve success.
  4. Empathy for the Human: The best friend, being close to the human, might also experience a sense of empathy towards them. They might understand that the human had hoped to win the prize and could feel a mix of happiness for their own success and concern for the emotions the human might be experiencing.
  5. Guilt or Awkwardness: Depending on the nature of the friendship and the competitive dynamics between them, the best friend might also feel a sense of guilt or awkwardness for surpassing the human in this particular instance. They might worry about potentially damaging their relationship or causing jealousy or disappointment in the human.
Ultimately, the best friend's feelings could be a complex blend of joy, surprise, gratitude, empathy, and possibly a touch of guilt or awkwardness. It would be essential for them to communicate openly with the human and navigate their emotions in a considerate and supportive manner to maintain the strength of their friendship.

Author's discussion:

The present findings expand our understanding of the abilities of ChatGPT and shows that, beyond possessing theoretical and semantic knowledge (Kung et al., 2023; Rudolph et al., 2023), ChatGPT can also successfully identify and describe emotions from behavioral descriptions in a scenario. It can reflect and abstract emotional states in deep and multidimensional integrative ways. Interestingly, recent studies that have discussed the potential of AI in the mental health field have mostly emphasized its potential in technical tasks that could reduce the need for clinical encounters. They claim that as the effectiveness of mental health care is heavily reliant on strong clinician–patient relationships, AI technologies present an opportunity to streamline non-personalized tasks, thereby freeing up clinicians’ time to focus on delivering more empathic care and “humanizing” their practice (Topol, 2019). Scholars have suggested the following applications of AI in mental health: assisting clinicians in completing time-consuming tasks such as documenting and updating medical records (Doraiswamy et al. 2020), improving the accuracy of diagnosis and prognosis (Bzdok and Meyer-Lindenberg, 2018), promoting the understanding of mental illnesses mechanisms (Braun et al., 2017), and improving treatment that based on biological feedback (Lee et al., 2021). However, beyond its “technical” contributions, our research highlights AI’s potential to increase interpersonal (i.e., one can describe an interpersonal situation and ask ChatGPT to suggest what emotions the other person probably felt) and intrapersonal (i.e., one can describe a situation and ask ChatGPT to suggest what emotions they probably felt) understanding, which is considered a core skill in clinical psychotherapy.
 
I guess I'm just baffled as to how this would be implemented.

Would veterans literally make 50 min appointments with an AI chatbot that would proceed to (by chat) conduct a facsimile of a diagnostic interview or psychotherapy session and then, at some predetermined point, render a 'finalized' diagnosis that isn't open to debate, questioning, or further input by a veteran??? Or would they have continuous/unlimited ongoing access to chatting with it around the clock?

Would the AI 'reside' in a lifelike robot that would do the same? A hologram? Would the robot have a mace or peppergas dispenser built into its nostrils? It had better be sturdily constructed.

I could see C&P exams being (possibly) operationalized, discrete episodes of assessment/diagnosis, but not what I do as an outpatient clinician. Not without VA admins and politicians 'growing a pair' (yeah...riiiiiiiight) and telling veterans, 'VA DocBot has spoken; you're done with access to psychotherapy...and, no, you may not reschedule with DocBot to further explain your nightmares, or that they've increased in frequency, or that you have a new symptom, or that you forgot to mention this or that.' Ole DocBot is gonna have all the same headaches and complexities to manage in terms of mental health patients/veterans' presentations, complaints, shucks, and jives that I've learned how to handle over several years of practicing. I'd like to see it try. You think veterans aren't gonna have a problem with it?

Unless...of course....anyone really believes that all the non-clinician types are gonna be fielding all these queries, objections, etc. from disgruntled vets who object to what Ole DocBot has to say about their diagnoses, tx plans, goals for therapy, progress, etc. LOL, give me a break, I will bet my career stability on the fact that there ain't NO admin at VA who wants me NOT to be there in between them and a veteran mental health patient who is disgruntled over his mental health care. Puuuuh-leeeeeeeeze!!!!!

AI embedded into the EHR. Templated clinical interview that is entered by the mid-level or patient (similar to a self-report questionnaire). EHR suggests a dx when you complete your note.
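A minimal sketch of what 'EHR suggests a dx' could look like under the hood: templated interview fields scored against simple criterion counts. The field names, diagnoses, and thresholds here are invented for illustration, not taken from any real EHR or the DSM:

```python
# Toy "EHR suggests a dx" sketch: count endorsed criteria per diagnosis.
# Field names, criteria lists, and thresholds are invented for illustration.
RULES = {
    "major depressive disorder": (["depressed_mood", "anhedonia", "sleep_disturbance"], 2),
    "ptsd": (["intrusive_memories", "hypervigilance"], 2),
}

def suggest_dx(interview, min_weeks=2):
    """Return diagnoses whose endorsed-criterion count meets its threshold."""
    if interview.get("duration_weeks", 0) < min_weeks:
        return []
    return [
        dx for dx, (criteria, needed) in RULES.items()
        if sum(bool(interview.get(c)) for c in criteria) >= needed
    ]

interview = {
    "depressed_mood": True, "anhedonia": True, "sleep_disturbance": True,
    "intrusive_memories": False, "hypervigilance": False, "duration_weeks": 6,
}
suggested = suggest_dx(interview)
```

A real system would use a trained model rather than hand-written rules, but the workflow is the same: structured fields in, ranked suggestions out, with the note-signer still owning the final call.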
 
  • Like
Reactions: 1 users
AI embedded into the EHR. Templated clinical interview that is entered by the mid-level or patient (similar to a self-report questionnaire). EHR suggests a dx when you complete your note.
Fascinating. I'm gonna have to reflect on that a bit.
 
AI embedded into the EHR. Templated clinical interview that is entered by the mid-level or patient (similar to a self-report questionnaire). EHR suggests a dx when you complete your note.
Why involve a human at all?
 
Why involve a human at all?

By the time it reaches that level, and it may, I will be retired or dead. It is more about acclimating the population to interacting with the technology than it is about the technology itself. My kid will grow up thinking Alexa and Siri are normal; I still think it is weird. However, I already prefer self-checkout in many instances.
 
Last edited:
  • Like
Reactions: 1 users
By the time it reaches that level, and it may, I will be retired or dead. It is more about acclimating the population to interacting with the technology than it is about the technology itself. My kid will grow up thinking Alexa and Siri are normal; I still think it is weird. However, I already prefer self-checkout in many instances.
You and me both. I HATE 'self-checkout' (and my first job ever was bagging groceries).

If the grocery store expects me to work for them again at my age, I need a paycheck, a 401(k), and a medical/dental plan, dammit!
 
Last edited:
You and me both. I HATE 'self-checkout' (and my first job ever was bagging groceries).

If the grocery store expects me to work for them again at my age, I need a paycheck, a 401(k), and a medical/dental plan, dammit!

I shop a couple times a week, and I love self-checkout. Generally much faster, and you end up bagging your own groceries at most places these days anyway.
 
  • Like
Reactions: 1 user
In pp, it's come up with a couple clients, mostly with concerns of how AI might impact their work and career trajectory.
On this point, I've found the ATMs-and-bank-tellers comparison to be a useful analogy for those with concerns. Hint: there are more bank tellers now than before the advent of ATMs. Certainly there is a difference between automation and AI, though I think there are good parallels between automation's impact on repetitive jobs and AI's on knowledge fields.
 
  • Like
Reactions: 2 users
On this point, I've found the ATMs-and-bank-tellers comparison to be a useful analogy for those with concerns. Hint: there are more bank tellers now than before the advent of ATMs. Certainly there is a difference between automation and AI, though I think there are good parallels between automation's impact on repetitive jobs and AI's on knowledge fields.
MSAs (clerks/secretaries) definitely better look the Hell out, lol

'Let me administer a labyrinthine series of scheduling rules filling up a tome the size of a New York City phone book'.... or...

Provider clicks on a button saying veteran is authorized to self-schedule (reschedule) using their phone app (or web portal) as many weekly appointments (not more frequent) with a +/- 3 day tolerance/ wiggle room for up to the next six months and the veteran can cancel or reschedule on their phone app with a couple of clicks

...wait...we could already implement that without fancy AI algorithms...and probably should have done so at least a decade ago
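The rule set described above really is plain logic, no AI required. A sketch of such a self-scheduling check; the anchor date, 182-day window, and all names here are assumptions for illustration, not actual VA policy:

```python
# Sketch of the scheduling rules in the post: weekly cadence with a
# +/- 3 day wiggle room, up to roughly six months out. Everything
# else (names, defaults, the 182-day constant) is assumed.
from datetime import date, timedelta

def can_self_schedule(today, requested, last_appt, authorized=True):
    """Allow weekly self-scheduled slots, +/- 3 days, up to ~6 months out."""
    if not authorized or requested <= today:
        return False
    if requested > today + timedelta(days=182):   # roughly six months
        return False
    if last_appt is not None:
        gap = (requested - last_appt).days
        if not (7 - 3) <= gap <= (7 + 3):         # weekly, +/- 3 day wiggle
            return False
    return True

today = date(2023, 6, 1)
ok = can_self_schedule(today, date(2023, 6, 9), last_appt=date(2023, 6, 2))
```

The provider's one-time authorization click is the only clinical judgment in the loop; everything after that is date arithmetic a phone app could run.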
 
  • Like
Reactions: 1 users
MSAs (clerks/secretaries) definitely better look the Hell out, lol

'Let me administer a labyrinthine series of scheduling rules filling up a tome the size of a New York City phone book'.... or...

Provider clicks on a button saying veteran is authorized to self-schedule (reschedule) using their phone app (or web portal) as many weekly appointments (not more frequent) with a +/- 3 day tolerance/ wiggle room for up to the next six months and the veteran can cancel or reschedule on their phone app with a couple of clicks

...wait...we could already implement that without fancy AI algorithms...and probably should have done so at least a decade ago

But then who would make the 3 phone calls and send them a letter when they miss a session?... Oh, wait. :rolleyes:
 
  • Haha
  • Like
Reactions: 1 users
But then who would make the 3 phone calls and send them a letter when they miss a session?... Oh, wait. :rolleyes:
The same person who makes all his own photocopies, answers his own phone calls, sweeps his own floor, takes out his own trash, etc. etc.

I have been trying for over a month to correct my name in the HR/TMS database (it got randomly screwed up somehow about a year ago for some unknown reason). I have contacted multiple departments (which all send me to other departments) and entered a total of four separate tickets with different departments over the same issue. It was in all likelihood a data entry error that I just can't get corrected. Somehow I just keep running into 'The Department of That's-Not-Our-Department.'
 
Last edited:
  • Like
Reactions: 1 user
An interesting thing we've learned from the remote work/return to office thing is that companies won't always do what is cheapest/maximizes profits, because remote work has been shown to be a cost-savings with positive (or, at the very least, neutral) impacts on productivity, yet companies are still insisting on return to office. Not sure how that will translate into decisions about AI, but it's an interesting case study. Also, AI "hallucinating" stuff seems to be a significant problem, at least for now.
 
  • Like
Reactions: 1 users
An interesting thing we've learned from the remote work/return to office thing is that companies won't always do what is cheapest/maximizes profits, because remote work has been shown to be a cost-savings with positive (or, at the very least, neutral) impacts on productivity, yet companies are still insisting on return to office. Not sure how that will translate into decisions about AI, but it's an interesting case study. Also, AI "hallucinating" stuff seems to be a significant problem, at least for now.
I'm not necessarily disagreeing with your position, in general...but...

Has anyone who works at the VA found that, since all these positions have gone 'work from home'/ remote (HR, Payroll, IT, etc.) you've actually been having a more difficult time getting them to help you with stuff?

That has been my experience. It has been even more difficult (I didn't think that was even possible) to contact them than ever before. I can't even physically go over to their building and 'lay siege' to them anymore.
 
  • Like
Reactions: 1 user
I'm not necessarily disagreeing with your position, in general...but...

Has anyone who works at the VA found that, since all these positions have gone 'work from home'/ remote (HR, Payroll, IT, etc.) you've actually been having a more difficult time getting them to help you with stuff?

That has been my experience. It has been even more difficult (I didn't think that was even possible) to contact them than ever before. I can't even physically go over to their building and 'lay siege' to them anymore.

I mean, they weren't very helpful before, lol.

I would be all in favor of AI taking over legacy patients. ;)
 
  • Like
Reactions: 1 users
An interesting thing we've learned from the remote work/return to office thing is that companies won't always do what is cheapest/maximizes profits, because remote work has been shown to be a cost-savings with positive (or, at the very least, neutral) impacts on productivity, yet companies are still insisting on return to office. Not sure how that will translate into decisions about AI, but it's an interesting case study. Also, AI "hallucinating" stuff seems to be a significant problem, at least for now.

I don't want to stray too far off topic, but the remote work thing is interesting. While it can save money in the long term, in the short term it will lose a lot of money and that is the problem. Companies cannot easily break long term leases or unload commercial real estate right now without a firesale. Long-term as leases end and commercial real estate prices rebound, we shall see.
 
  • Like
Reactions: 1 user
I've used it to help organize some recommendations.
 
Also useful for having it change pronouns and names in text.
 
The thing about diagnosis is you always have to have a 'gold-standard' criterion against which to evaluate the accuracy of the AI classification algorithm. isn't that, ultimately, going to be a human clinician?
Have you ever met a group of humans that would be that gold standard? lol. I think AI might become the gold standard, and like chess, only the masters will be able to surpass it. Just think how much the VA would be improved if it were all AI in charge of administration.
 
  • Haha
Reactions: 1 user
Have you ever met a group of humans that would be that gold standard? lol. I think AI might become the gold standard, and like chess, only the masters will be able to surpass it. Just think how much the VA would be improved if it were all AI in charge of administration.
I'm still confused how this would work, methodologically, but, I guess that's why I work at VA.

I'm sure, in short order, the papers will start being published on this to alleviate my confusion, lol. It'll be interesting.
 
Last edited:
  • Haha
Reactions: 1 user
I can think of a lot of potential uses for AI in our field. However, I do think we are a very, very long way off from AI being able to demonstrate the flexibility a human can in handling a multitude of tasks. My fear is we land in a two-tier system where only the wealthier/privileged get to work with humans. There is nothing more infuriating than trying to communicate with a chatbot or some other computer interface that simply doesn't get what you want to do and can't friggin' connect you to a person who could solve the issue in 10 seconds or less. I worry we're going to end up with a lot more of that in the near term, even though the technology won't "really" be ready to do what we try to make it do for decades. Everyone and their brother has an AI startup right now. My experience is that >95% of these are trash products that don't work at all, and the entire business model revolves around tricking enough people into buying/investing that google/fb/whoever will eventually pay them to go away.

RE: remote work and productivity: I'm actually not sure the jury is (fully) in yet. There was a rush to study it for obvious reasons, and those studies tended to rely on very coarse indicators of productivity. I suspect we'll eventually land on it being positive for some indices, neutral for some, and negative for some. Anecdotally, that certainly fits with what I'm seeing in my narrow experience. I think eventually the pendulum will land on a more hybrid model than pre-pandemic, but the optimization algorithm we follow to get there is probably not going to be an efficient one...
 
  • Like
  • Love
Reactions: 7 users
I can think of a lot of potential uses for AI in our field. However, I do think we are a very, very long way off from AI being able to demonstrate the flexibility a human can in handling a multitude of tasks. My fear is we land in a two-tier system where only the wealthier/privileged get to work with humans. There is nothing more infuriating than trying to communicate with a chatbot or some other computer interface that simply doesn't get what you want to do and can't friggin' connect you to a person who could solve the issue in 10 seconds or less. I worry we're going to end up with a lot more of that in the near term, even though the technology won't "really" be ready to do what we try to make it do for decades. Everyone and their brother has an AI startup right now. My experience is that >95% of these are trash products that don't work at all, and the entire business model revolves around tricking enough people into buying/investing that google/fb/whoever will eventually pay them to go away.
This is pretty much what one of my clients, who is a tech professional, predicts will happen. Similar to how the internet is everywhere and everything can be connected, the client thinks AI will follow the same path. They also say it's here more than we think; now that we're talking about it, it's already impacting our lives. The help chatbot is a good example.
 
  • Like
Reactions: 1 user
I can think of a lot of potential uses for AI in our field. However, I do think we are a very, very long way off from AI being able to demonstrate the flexibility a human can in handling a multitude of tasks. My fear is we land in a two-tier system where only the wealthier/privileged get to work with humans. There is nothing more infuriating than trying to communicate with a chatbot or some other computer interface that simply doesn't get what you want to do and can't friggin' connect you to a person who could solve the issue in 10 seconds or less. I worry we're going to end up with a lot more of that in the near term, even though the technology won't "really" be ready to do what we try to make it do for decades. Everyone and their brother has an AI startup right now. My experience is that >95% of these are trash products that don't work at all, and the entire business model revolves around tricking enough people into buying/investing that google/fb/whoever will eventually pay them to go away.

RE: remote work and productivity: I'm actually not sure the jury is (fully) in yet. There was a rush to study it for obvious reasons, and those studies tended to rely on very coarse indicators of productivity. I suspect we'll eventually land on it being positive for some indices, neutral for some, and negative for some. Anecdotally, that certainly fits with what I'm seeing in my narrow experience. I think eventually the pendulum will land on a more hybrid model than pre-pandemic, but the optimization algorithm we follow to get there is probably not going to be an efficient one...
Or worse yet (which hasn't actually happened in my experience)--just invents an answer that may or may not be correct. Which a person might also do, but still, you don't expect it from a computer.

And yeah, I don't care what anyone tells me, I know that I'm generally more productive in the office. I can just work more flexibly at home, so by the end of the day, I probably get more done but it takes me longer to do it. Although some of that's offset by the time savings for travel. I agree that it seems to fit better with some people and some occupations than others, and that ultimately, a hybrid format will probably be the norm for many. I shudder to think of a world that's 100% work-from-home.
 
People act like AI is some new revolution, but we're talking about advances in technologies that have been around for 50 years in some cases. Voice recognition when you call your pharmacy is a form of AI, and that started in what, the early 2000s? Even earlier? It's been everywhere for a long time; the only difference is we're in a step function with a rapid acceleration in advances.

Actually, clinical diagnostics is a good way to think about AI as a whole. Much of it is built on prediction accuracy using the same type of ROC plots we use when determining optimal cutoffs on psych measures. For a long time we struggled to do better than chance. Then we got to around 75% accuracy - theoretically interesting but not really useful. Now we can pretty easily push to 90%, likely as much due to advances in computational power as to advances in AI itself. I think that last 10% is going to be the hardest to attain and is also the biggest concern. A person can override and figure out how to handle the edge cases. I think that is attainable in some instances for AI, but I think people will burn entire corporations to the ground out of sheer frustration long before they achieve it if, 10% of the time, their clients end up in an endless loop, unable to resolve the issue they're dealing with and having no way to reach a person.
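For anyone curious what "optimal cutoff from an ROC plot" actually computes: here's a minimal pure-NumPy sketch. The toy data and function name are mine, not from any real measure, and Youden's J is just one common criterion for choosing a cutoff:

```python
import numpy as np

def youden_optimal_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_pos = labels.sum()          # number of true cases in the sample
    n_neg = len(labels) - n_pos   # number of true non-cases
    best_t, best_j = None, -1.0
    for t in np.unique(scores):   # each observed score is a candidate cutoff
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / n_pos   # true positive rate
        spec = (~pred & (labels == 0)).sum() / n_neg  # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy data: "cases" score ~1.5 SD higher than "controls" on average
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
labels = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])
cutoff, j = youden_optimal_cutoff(scores, labels)
```

J runs from 0 (the cutoff performs at chance) to 1 (perfect separation), which maps loosely onto the "chance → 75% → 90%" progression described above.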
Interesting discussion regarding what this looks like for psychologists. I am more productive in the office now (post-kids) but was more productive at home pre-kids. My overall take is that:
1) WFH has pushed even more of the workload onto the "productive" individuals. Slackers gonna slack, and this opens the door for them to do more of it. This is one of the hidden effects of the WFH climate. As someone else mentioned, I now carry an even higher cognitive load from repeatedly following up with all the folks who never respond to emails, since I can't just pop down the hall anymore.
2) Building collaborations is simply harder - especially when transitioning to a new setting - because the "water cooler" conversations don't happen. The bar to set up a Zoom meeting is higher, and you aren't going to schedule random 5-10 minute hello Zoom meetings with everyone in your department. This may bolster short-term productivity (less time spent chatting) but potentially stifles innovation and long-term productivity/impact.
3) I think data quality will suffer. I always had the most respect for the faculty who remained "boots-on-the-ground" with data and data collection efforts. In my eyes, their work is unquestionably higher quality than that of faculty who remove themselves from the process. Too many "invisible errors" get introduced that are very unlikely to be picked up in peer review or any other stage of the process. This mostly applies to experimental work.
4) Recruitment has become even more impossibly-effing-hard for the myriad types of research that can only be done in person. We've been biding our time for now, but we'll see how many trials end up having to terminate early due to recruitment issues.
5) Mentorship will be hit or miss for similar reasons. Meetings are more formalized, and off-the-cuff conversations happen less frequently. I don't interact with my post-docs nearly as much as my post-doc mentor interacted with me. Maybe that means they have more time to write. Time will tell what ultimate impact this has on careers.
 
Excellent points.
There's a lot of promise there but also--I would argue--peril in the prospect of getting swallowed up and enveloped within a persistent, ever-measuring, ever-responding digital landscape populated with virtual AI 'assistants' and 'analyzers' feeding data to various third parties (your boss, your health insurance company, the local government/ police). Just look at the issues folks are having right now adjusting to smartphones (which is a relatively encapsulated form of user interface). What I mean are the impacts on social development, daily habits, etc. There are definitely pros/cons to these things being omnipresent in our lives and we may not fully realize what they are for many decades to come.

I listened to a podcast where a neuroscientist described how much information resides within the minuscule variations/patterns in electrical potentials/activity that play out in real time in, say, the musculature of your arm and hand as you merely think about (or experience an 'action urge' toward) performing a motor act, even if you never actually engage in it. He speculated that tech exists (or will soon exist) that is sensitive enough to detect these activations, even calibrated to the specific person, such that if you have a thought about punching your boss, for example, it would register and be recognized. With enough computational power and constant monitoring, predicting, consequating, and controlling of behavioral responses, one's 'digital aura' (perhaps powered by AI algorithms, perhaps known or unknown to the user) could subtly influence one's behavior over time. It could certainly 'track' your particular signature and engage in a continuous process of monitoring, recording, and analyzing all this activity (which goes on well beneath even your own conscious awareness) and potentially act on it, in a feedback loop, to shape your or others' behavior.

And we're not even talking about the truly invasive cybernetic sort of stuff involving implants directly into the CNS (which I think Elon Musk and company are working on?).
 
I'm not sure this meshes with my experience, FWIW. Regarding recruitment and data collection: because I work with populations that often have transportation issues, doing things virtually when possible opens recruitment to a lot more people than would otherwise be available. Of course, not everything can be done virtually, but it's a lot easier to schedule someone for a Zoom interview than to ask them to come in person and risk their transportation falling through, for example. As someone who has long collaborated with people in different states, I also haven't found collaboration to be an issue--I had 10-ish pubs with one collaborator (including some in high-tier journals) before I ever met him in person, for example. I've done lots of distance mentorship and received lots of distance mentorship, and it's generally been about as successful as in-person mentorship, maybe more so. YMMV, of course, which is why it would be interesting to do research on this. We also saw a good bit of ableism with this: people with disabilities were long told that WFH was not a reasonable accommodation; able-bodied people needed it due to the pandemic (so it suddenly became "reasonable"); disabled people showed they could do their jobs remotely and do them well; and suddenly WFH became "unreasonable" again once able-bodied people no longer needed it, even with person-specific data that it was, indeed, reasonable for many people and didn't affect essential job functions.
 
We also use very different methodologies, though, so I should qualify that my experiences come with that perspective in mind.

I can't readily MRI a brain or do an EKG via Zoom. Injecting drugs or even having people take pills "might" be doable remotely, but it depends on the medication, and in some cases the risk-benefit ratio is thrown off. The likelihood of an IRB signing off on me giving an experimental drug to someone not under our physical observation is....low. And rightfully so, in my eyes. Even administering a computer task or collecting biospecimens comes with a lot of extra caveats if I want anything near the level of precision I'm used to. I worry about degradation if the courier is an hour behind schedule, let alone if FedEx loses the package for days at a time. For the type of work you do, I imagine most of these are non-issues. We've moved everything we can (consent, questionnaires, etc.) online, but that eliminates most of the data I actually care about.

RE: collaborations - I guess that speaks to my concern about what the unit of productivity is :) I would argue pubs are only one index and not necessarily the best one. It is also different working with pre-existing collaborators (unclear if yours was), as I think forming new collaborations is the bigger challenge. Not impossible, just harder. How many additional collaborations would you have established if the pandemic hadn't happened, and would that work be any more innovative/impactful than what you did work on? That's more the question I'm asking, and it's a much harder one to answer. I don't know the answer, but certainly it has "some" impact. How the pros/cons shake out remains to be seen. We know publication rate won't go down and might even go up. We also know the overwhelming majority of scientific publications won't be read by more than a handful of people and will do little to move the field forward. Once we start trying to measure "impact," I think it becomes a much more difficult question.

The ableism piece is also an interesting and very significant one. At a minimum, there is unquestionably now a far wider range of things that "can" be done remotely. What those are, and when remote is optimal vs. sub-optimal, is what I'm wondering. I don't think there will be an easy answer to that question. I do think it is much more difficult to make the case that such accommodations are unreasonable following this experience. I don't worry about individual researchers WFH for whatever reason. If the entire body of academics did it exclusively....personally, I think that would severely hinder advancement as a whole (and effectively shut the doors on entire disciplines that simply couldn't exist anymore). That's an instinct, though, and it's an empirical question.
 
Yeah, it really does depend on the methods, which is why I said that some things for sure can't be done remotely; it varies. A big issue we see with accommodations, though--and we've touched on this some in our training work, and I've long wanted to write an article specifically addressing it, because it's a huge issue--is that people really struggle with distinguishing between "essential function" and "that's how we've always done it." For example, I have a colleague who can't type for long periods of time d/t chronic pain and got feedback from some sites that she couldn't have a placement there d/t the documentation requirements. Really, she just needed speech-to-text software and a reasonably quiet space to dictate in, because the essential function wasn't "typing notes" but rather "getting the notes into the computer." That kind of cognitive flexibility is hard for a lot of people and leads to really good candidates being shut out, not because they can't do the essential function but because the decision makers don't know what the essential function actually is. I think we see the same thing with a lot of "we can't do that remotely" when the reason given is "well, we haven't done it remotely before [and don't want to try]," not an actual reason why. Again, not in all cases--some things really can't be accommodated--but we see this lack of cognitive flexibility in accommodation denials a lot.

As for mentorship, I guess it's been productive on a lot of levels--producing research (that has been high impact in terms of citations, community response, and media coverage) and just building a really solid mentoring relationship overall. How you quantify mentorship outcomes is actually really interesting and would make an interesting research topic. I have one professional mentor (also a distance mentoring relationship, actually) where we have some solid, objective professional outcomes (a large grant [that we met through], some high-impact papers), but the bulk of her mentorship has actually been more personal, helping me through really rough times I went through personally and medically while on the tenure track. She explicitly told me that she cared more about my health and well-being than anything that would actually benefit her (e.g., grants, pubs) as a mentoring outcome. When I talked to her recently to thank her for all the invaluable support she gave me through the tenure process (seriously, I would not have gotten through it without her), to my surprise, she said she felt like she "didn't have to do much at all, because [ I ] was a shoo-in and didn't need any help being an excellent researcher," whereas I was like, "I leaned on you for support and reassurance so hard, and I really owe my tenure to you." So it raises questions of what mentoring inputs and outputs are and what matters in that regard. Really interesting question that I now want to study, heh.

(Side note: I'm beginning to think I have a weird experience with academic impact, because I've had people tell me, for example, that my research has been one of the first times they felt like their experiences were heard and reflected, that it helped them figure out their sexuality after decades of confusion, that they used it to modify their training programs, that it fundamentally altered their clinical practice, etc--I mean, not every article is world-changing, of course, but I've never really felt like my academic work was in some vacuum).
 
I don't want to stray too far off topic, but the remote work thing is interesting. While it can save money in the long term, in the short term it will lose a lot of money, and that is the problem. Companies cannot easily break long-term leases or unload commercial real estate right now without a fire sale. Long term, as leases end and commercial real estate prices rebound, we shall see.
Well, that’s their problem (the companies leasing offices that are no longer needed).
It’ll be like that movie Wall-E!

But seriously, I think most jobs can be done remotely, although there will be a need to get people out and about sometimes. I can’t see it as healthy to work from home and then have little need to leave the home at all. Maybe humans are just conditioned to need to go from point A to point B to do their work, since that’s how it was for decades before March 2020. Now times have changed; maybe psychology can help?
 
I would think that I/O psychology already has some research in that arena. I know that some of the material I studied for the EPPP in that section asked about different types of work schedules and structures, so I’d be willing to bet we have some answers to those questions. We're just not always good at disseminating some of that useful but less appealing info. One reason Freud referred to his stages as psychosexual is that he knew it would help disseminate his ideas--and sex sells.
 
I just discovered ChatGPT makes very decent mindfulness scripts.

I ran across this video as well:
 
I'm really enjoying these tools:

SciSummary

and Search - Consensus - Evidence-Based Answers, Faster

Both of these have been helping me quickly pull data together when I have broad research questions and start to get lost in the weeds.

I'll admit I'm kind of obsessed with AI at the moment.
Thanks for the links; I wasn't aware of these tools. Gotta admit, though, I'm a little skeptical of having my articles summarized for me by an algorithm. I already feel guilty enough when I scan the abstracts and skip to the Discussion sections of articles. I'll have to check them out.

"SciSummary (Short for Science Summary) was founded in 2023 to make digestion of scientific articles easier. Using modern AI technology it summarizes articles, allowing you to understand them without having to read the full text."

Makes me wonder if Frank Herbert (Dune author) was engaging in science fiction or science prediction when he wrote:

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
 