Interesting Ethical Dilemma


DynamicDidactic

Still Kickin'
10+ Year Member
Joined
Jul 27, 2010
Messages
1,814
Reaction score
1,526
Saw this and thought it may be worth sharing:

 
Oh no, people are judging our attractiveness without our knowledge...


....said every human being ever.
 
Last edited:
  • Like
Reactions: 5 users
I'm on the side of the researcher in terms of publicly available data being subject to research without needing to get consent.

Literally everyone who sees these photographs on social media is, explicitly or not, making these judgments. It is a strange position to maintain that having a selected group of people then write them down in a slightly more formalized way is problematic.
 
  • Like
Reactions: 1 user
I think there are some interesting conversations worth having around the exact bounds of privacy in the digital age, when permission is needed to use that data, etc. That said, based on what we know now I do find it hard to find fault with a researcher who exclusively used publicly available data. My interest in the conversations is more theoretical/philosophical than pragmatic at this point.

I think the attractiveness ratings were only part of the issue here - the other piece is combining them with other public data sources. I don't take issue with it as a scientist, but I see how the public might be uncomfortable with it, especially if they think of these as discrete aspects of themselves and aren't aware they can be combined. That said - how does one, at scale, reliably link social media to other sources of information, given that many people don't use their real name? I'm curious about the method (a rough sketch of the kind of matching I have in mind is below).

Also worth mentioning that if this hadn't garnered at least a semi-controversial result, I'm not sure this ever gets noticed, let alone blows up...
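To make that question concrete: here is a minimal sketch of the kind of fuzzy name-matching I'm imagining, not anything from the actual study. Everything in it is hypothetical (the made-up names and handles, the 0.85 threshold, and the assumption that display names resemble roster names at all), and it only uses Python's standard-library difflib.

Python:
# Hypothetical sketch of record linkage via fuzzy name matching.
# Assumes (big assumption) that social-media display names resemble
# the names in an administrative roster; all data here are made up.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, drop non-letters, collapse whitespace."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalpha() or ch.isspace())
    return " ".join(cleaned.split())

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def link(roster: list[str], profiles: list[str], threshold: float = 0.85):
    """Pair each roster name with its best-matching profile name, if any clears the threshold."""
    links = []
    for student in roster:
        best = max(profiles, key=lambda p: similarity(student, p), default=None)
        if best is not None and similarity(student, best) >= threshold:
            links.append((student, best))
    return links

if __name__ == "__main__":
    roster = ["Anna Lindqvist", "Erik Johansson"]      # made-up names
    profiles = ["anna.lindqvist_92", "ej_stockholm"]   # made-up handles
    print(link(roster, profiles))  # only the first pair should clear the threshold

Even this toy version shows why linkage at scale is shaky: a pseudonymous handle like "ej_stockholm" won't clear any sensible threshold, so whatever gets linked is probably skewed toward people who post under their real names.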
 
  • Like
Reactions: 3 users
It does make me think about how hard it may become to reasonably ensure the anonymity of research participants using archived data.
 
Last edited:
  • Like
Reactions: 1 user
Also worth mentioning that if this hadn't garnered at least a semi-controversial result, I'm not sure this ever gets noticed, let alone blows up...
I would estimate that 60% of the upset about this is that the result was one that does not sound very nice or feel good to contemplate.
 
  • Like
Reactions: 4 users
I'm on the side of the researcher in terms of publicly available data being subject to research without needing to get consent.

Agreed. The images were in the public domain. I think the bigger issue here is the charges levied against the researcher and the recommendations. Informing people that their images are in the public domain and seeking permission would be quite onerous and would limit the data available. The idea that researching controversial topics and utilizing social media data will lead to ethics complaints is a problem. It limits how much science can study modern reality.
 
  • Like
Reactions: 1 user
Their images were on social media accounts; these people willingly put their images into the public domain. Why would they need to be informed of that? They already willingly assented.
 
Ask the ethics committee. That was their recommendation:

The document does level some criticism against the researcher, however, noting that he could easily have notified the participants that he was conducting the study. Moreover, the report stated:

It may also reasonably be considered as an invasion of personal integrity to have one’s looks rated the way it was done in the study. The execution of the study might thereby have had unethical consequences even if the letter of the law has been followed. For this, the execution of the study deserves criticism
 
  • Like
Reactions: 1 user
Right but were the academic performance data in the public domain? Where did they get the grades from?

The article states all the info was publicly available.
 
Right but were the academic performance data in the public domain? Where did they get the grades from?

Good question, it says "The ratings were then linked to other publicly available data about the students, including academic performance." Maybe they just do things differently in Sweden
 
I would not be surprised if that was actually available for research. Northern Europe is worlds ahead of everyone else on having all these things centralized, stored and accessible for research purposes. I am only slightly exaggerating when I say that effectively the entire country has their health data in a research registry.
 
  • Like
Reactions: 1 user
Good question, it says "The ratings were then linked to other publicly available data about the students, including academic performance." Maybe they just do things differently in Sweden


From the article:

" The Scandinavian countries also allow researchers to use detailed administrative data on income, debt, mental health, medicine use, and so on. Probably, not everyone is okay with this, but the public use of such data is enshrined in our Constitution (at least in Sweden)."
 
  • Like
Reactions: 1 user
I guess IRBs operate differently in N. Europe.

My main thing (having been an external reviewer for a small IRB) in thinking about this specific study is: does the risk to participants, like what happened here, justify the potential benefits? This is especially important in cases where publicly available data is being used in *ways it was not explicitly intended to be used.* I don't really see the benefits of the study at all, but maybe I need to go read the article.
 
I mean, not the most impactful study, but I can think of many, many, many studies with far less useful and/or applicable findings.
 
  • Like
Reactions: 1 user
This is especially important in cases where publicly available data is being used in *ways it was not explicitly intended to be used.*
Idk--I've done a study where we analyzed information that different institutions posted or didn't post online--I don't think they ever "explicitly intended" people outside of the institutions to analyze their website content for comprehensiveness and correctness, but we did, because the "data" were there. Interestingly, I'm involved in a research project now where we are using publicly available Reddit posts for machine learning, and there's been a lot of interesting discussion over whether and how to make our curated/coded dataset publicly available, especially as it concerns a marginalized identity.
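For anyone curious about one small piece of that kind of discussion: a common first step (not necessarily what our project will do) is to replace usernames with keyed pseudonyms, so coders can still tell which posts came from the same account without the account being recoverable from the released file. A minimal sketch, with the column name, file paths, and secret key all hypothetical, using only the Python standard library:

Python:
# Hypothetical sketch: pseudonymize authors in a coded Reddit dataset
# before sharing it. Field names, paths, and the secret key are made up.
import csv
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-kept-off-the-shared-dataset"  # hypothetical

def pseudonym(username: str) -> str:
    """Stable keyed hash: same account -> same token, but not reversible
    without the key (unlike a plain unsalted hash of the username)."""
    digest = hmac.new(SECRET_KEY, username.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

def release_copy(in_path: str, out_path: str) -> None:
    """Write a copy of the coded dataset with the author column replaced."""
    with open(in_path, newline="", encoding="utf-8") as fin, \
         open(out_path, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["author"] = pseudonym(row["author"])  # assumes an 'author' column
            writer.writerow(row)

# release_copy("coded_posts.csv", "coded_posts_public.csv")  # hypothetical paths

Of course, that only handles the account names; the post text itself can still be searchable, which is a big part of why "whether" to release is as live a question as "how."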
 
I would not be surprised if that was actually available for research. Northern Europe is worlds ahead of everyone else on having all these things centralized, stored and accessible for research purposes. I am only slightly exaggerating when I say that effectively the entire country has their health data in a research registry.
Right but people's academic grades? With identifying information?
I have a collaborator who has access to the Swedish health registry datasets and they are incredible, but everything is supposed to be deidentified.

I'm really surprised that this research group could get individual identities from the academic data that they were able to link to social media posts.
Presumably people aren't tagging their social media posts with their national identification numbers, which means they must have had names on the academic data. That's... unusual.

Edit: I downloaded the supplementary info, which has a statement on the data. It says,

"Academic outcomes. All grade data, and data on the gender of professors, come from LADOK, which is the student administration system used at Lund University."

So that's not national registry data. Did he go ask the Lund University registrar for the grade info, I wonder? You have to wonder if the registrar knew he was going to cross-tabulate the grade information with people's social media posts and their parents' tax returns!
 
Last edited:
  • Like
Reactions: 1 user
Let us remember that psychologists convinced all Ivy League and Seven Sisters schools to take nude photos of ALL freshmen from the 1940s to the 1970s. Sylvia Plath, Diane Sawyer, Hillary Clinton, George Bush, JFK, etc.

And let us remember that Harvard's Grant Study on happiness, which is still ongoing, also took naked photos of students, measured their genitals, asked their parents about how they learned about sex, followed the students until they died, AND IS CURRENTLY FOLLOWING THEIR BOOMER CHILDREN.

IRBs, man.
 
Last edited:
  • Like
Reactions: 1 user
Concluding Remarks: "This paper has shown that students’ facial attractiveness impact academic outcomes when classes are held in-person. As education moved online following the onset of the pandemic, the grades of attractive female students deteriorated. This finding implies that the female beauty premium observed when education is in-person is likely to be chiefly a consequence of discrimination. On the contrary, for male students, there was still a significant beauty premium even after the introduction of online teaching. The latter finding suggests that for males in particular, beauty can be a productivity-enhancing attribute."

educational bias exists? women face discrimination in education? good looking students, especially men, receive social benefits?
 
IRBs, man.
And they give trauma researchers so much undeserved hassle much of the time...
 
So that's not national registry data. Did he go ask the Lund University registrar for the grade info, I wonder? You have to wonder if the registrar knew he was going to cross-tabulate the grade information with people's social media posts and their parents' tax returns!
Yeah, that really changes things, but if he was upfront with the IRB and Lund University, and they cleared it, the complaint is really with whoever at the university let him have the data and not the researcher himself, IMO.
 
Meta-comment not directly germane to the discussion: This is, IMO, one of the biggest issues with academia right now. A bajillion different peripheral entities have inserted themselves into the research process but take absolutely no responsibility for anything. They slow everything down, half-ass their job, and then just fall back on "Well, ultimately it is the PI's responsibility...." when they screw up. So why does your office exist, then? Just shut up and sign the papers I put on your desk.

The importance of this varies by office, but I'm pretty sure that if we had no IRB, no IT, no Post-Award Office, and no Signing Officials, my research would be done more ethically (i.e., my consents would actually inform participants about the research vs. being lengthy legal documents), have more reliable technology, have more efficient financial management with fewer errors, and have better-written contracts that are processed more quickly. These offices largely exist to prevent outliers at the expense of the overwhelming bulk of the distribution.
 
  • Like
Reactions: 1 user
At the end of my academic career I was having insane problems with the IRB. Absurdly long consents for simple surveys; one time they told me that in an experiment I had to list all the possible conditions in the consent (I said absolutely not, that would make experimental work impossible); new comments on things not relevant to ethics across more than a dozen iterations of a simple protocol, pushing approval times out to months; etc. They started getting pushback from the campus centers and I think it started getting better.
 
  • Like
Reactions: 1 user

When I was still doing experimental linguistics, the IRB at one point told me they couldn't approve the project because I had ended sentences in the consent with prepositions. I explained at some length, as a linguist, why this was a perfectly normal and grammatical construction in English and has been for many centuries. They were not impressed and insisted. A small part of me died when I just went ahead and made the consent forms substantially clunkier and harder to read in the name of protecting human subjects.
 
  • Haha
  • Like
Reactions: 1 user
They’re wrong.

I would have died on that hill.

:rofl:
 
  • Like
Reactions: 1 user
I had to explain to a hospital IRB why, in a study of de-identified archival data, their recommendation to go back and consent every patient in the database was actually a greater threat to privacy than simply using the de-identified data.
 
  • Like
Reactions: 2 users
This discussion of IRBs reminds me of a line that a friend of mine in recovery likes to say: the only thing worse than his problems was his solutions.
 
  • Like
Reactions: 1 user
I have plenty of IRB awfulness to share but also have to say that what we had before the Declaration of Helsinki was, like, Tuskegee, naked pictures of undergrads, and Philip Zimbardo's mock prisoner experiments, so I'm not sure we all want to go back there.
 
  • Like
Reactions: 2 users
I feel like we've reached a point where the net harm caused by a handful of unethical studies might be less (in aggregate) than the net harm done by the institutional barriers we're putting in place. It is an interesting philosophical question, especially given that a not-insignificant amount of unethical research is still done and the whole point above is that the IRB would likely just wave their hands, say "Oh, well ultimately it's the PI's responsibility!" and absolve themselves of any responsibility.

I don't actually see IRBs going anywhere, nor do I genuinely think eliminating them is the answer. However, the current model is unsustainable, much of it is downright stupid, and a lot of what IRBs do these days certainly seems to increase risk to participants and the public rather than decrease it.
 
  • Like
Reactions: 1 user

Most of the unethical research behavior (which I do think is occurring more frequently) can likely be easily addressed by changing the academic promotion and salary structure. This country has an unhealthy obsession with productivity that is ruining everything...most of all quality.
 
Last edited:
  • Like
  • Love
Reactions: 4 users
I feel like psychometrics is the answer to your 2nd sentence. We're obsessed with productivity as indexed by metrics we can reliably measure, but often completely ignore validity in doing so.
 
  • Like
Reactions: 1 user
I think it's also an issue of needing to measure productivity and impact at least *somewhat* objectively, because pure subjectivity can often be even worse. I've posted before about my department's merit process, which definitely has its issues, but it's also so much better than the previous process of the department head just going off of vibes or something, especially when you have a department with many different subfields in it, like mine.
 
  • Like
Reactions: 1 user
I feel like we've reached a point where the net harm caused by a handful of unethical studies might be less (in aggregate) than the net harm done by the institutional barriers we're putting in place.
As a counterpoint, I saw an actually dangerous study advertised a few years ago that was approved by a for-profit IRB (they wanted people--online, with no actual clinician involved in the study, and already belonging to a vulnerable population--to think of a friend or relative who had attempted or died by suicide and write out what they imagined that person's suicide note would say, and this was after several questions asking them, in detail, about their personal experiences with suicidality, IIRC). I think IRBs tend to be more conservative with suicidality in studies than they need to be, but that study made me so concerned that I actually contacted the IRB, because it just seemed needlessly dangerous. So, IRBs can be ridiculous, and some people would still do blatantly dangerous things if given the opportunity.
 
  • Wow
Reactions: 1 user
Well, that's precisely it - objectivity should inherently increase reliability (or at least some forms of it). 100% agree that completely subjective processes can be problematic too. The issue is where we draw a line that strikes a balance. Pure objectivity is great for widgets-per-hour type jobs, but for nearly any professional role there needs to be something in between. I'd argue - on average - the balance has swung too far towards objectivity for <most> things (not just talking academia here...banking, real estate, tech, lots of industries). Hell, many industries now (*cough* TECH *cough*) survive almost entirely by gaming short-term metrics for the sole purpose of tricking investors/customers who solely look at those metrics versus eyeballing the product and asking "Is this an utterly *****ic idea?" The exact balance point is a nuanced issue, though, and exactly where that point lies will depend on the question.


RE: your second post - ridiculous. The existence of for-profit IRBs confounds me, though I don't doubt a university IRB could do the same thing. The process is just ineffective...and I'm really, truly, genuinely hard-pressed to believe we couldn't come up with a better one. Or at least dramatically improve the one we have without too much effort.

I also can't think of any possible scientific value in having someone write out an imagined suicide note, but maybe I'm just not thinking creatively enough? Maybe as a really strikingly powerful and downright evil experimental mood induction?
 
  • Like
Reactions: 1 user
This country has an unhealthy obsession with productivity that is ruining everything...most of all quality.
And the way they operationalize 'productivity' can actually be the opposite of productivity.

The VA measures the 'productivity' of its psychotherapists as how many RVUs they earn per unit time. It's 'easy to measure.' Takes no work on the part of supervisors.

Sounds plausible on its face.

But the realities of VA practice are tricky here.

You can have your clinics backed up/locked up for the next three months with de facto 'case management (as opposed to active psychotherapy)' cases, and your RVU-measured 'productivity' looks awesome. But you could go an entire year and only 'clear' (if any) maybe 5-10 patients from your caseload. Hell, I've seen people scheduling patients five months out between appointments. That's like...two sessions per year, even if they perfectly attend.

Meanwhile, you could have a therapist working hard to get people in/out of their caseload by scheduling everyone weekly.

There are many veterans 'in therapy' who aren't actually engaging in the therapy.

The only way to get these people to leave your caseload is to hold them/you accountable for doing active therapy and meeting on a weekly basis and documenting your attempts to engage them and provide therapy.

Most of these patients will eventually passively drop out of therapy but it takes time and it takes (often) a good number of cancelled/no-show appointments. Which hurts your 'productivity.' Even though you're actually being productive as Hell as a psychotherapist.

Which psychotherapist is the more 'productive?'

Psychologist A who clears 100 cases from their caseload over 12 months or

Psychologist B who has every slot filled for the next three months but has only cleared 10 cases over the past 12 months?

Under the current system, Psychologist B looks more 'productive'

I'll go out on a limb and call this a problem.

If you are a manager of a podunk Piggly Wiggly grocery store or a fast food franchise, you're responsible for conducting some basic analyses for coverage (of staff shifts), inflow/outflow of inventory, etc. to ensure everything doesn't 'seize up.' If you don't, you're out of a job.

Meanwhile, we have several layers of GS-14 and GS-15 supervision in 'mental health product line' operations who--to my knowledge--haven't conducted a single analysis of how many FTEs we need to cover the number of patients demanding psychotherapy and our ability to get them in/out/through a 'course of psychotherapy' (or, as they love to say, an 'episode of care').

So many bobbleheads are chanting 'episode of care...episode of care...episode of care' as if it were the next 'Big Thing' leadership has innovated, on par with splitting the atom or cold fusion...

But no one has done a fifth-grader level of arithmetic/analysis of the above.
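For what it's worth, the fifth-grader arithmetic looks something like this. Every number below is invented for illustration (slots per week, weeks worked, sessions per completed course, show rate, referral volume); the point is only that throughput, which the RVU metric never sees, is what falls out of it.

Python:
# Back-of-the-envelope caseload arithmetic. Every number below is made up
# for illustration; plug in a clinic's real figures.
SLOTS_PER_WEEK = 25          # weekly therapy hours one FTE actually has
WEEKS_PER_YEAR = 46          # minus leave, training, holidays
SESSIONS_PER_COURSE = 12     # sessions in a typical completed course of care
SHOW_RATE = 0.80             # fraction of booked slots actually attended

attended_sessions_per_year = SLOTS_PER_WEEK * WEEKS_PER_YEAR * SHOW_RATE
courses_completed_per_fte = attended_sessions_per_year / SESSIONS_PER_COURSE

NEW_REFERRALS_PER_YEAR = 600  # hypothetical demand for one clinic

fte_needed = NEW_REFERRALS_PER_YEAR / courses_completed_per_fte

print(f"One FTE can move ~{courses_completed_per_fte:.0f} patients "
      f"through a course of therapy per year.")
print(f"Covering {NEW_REFERRALS_PER_YEAR} referrals would take ~{fte_needed:.1f} FTE.")

# Weekly scheduling vs. monthly 'check-ins' barely changes the RVU totals,
# but it changes courses_completed_per_fte enormously, which is exactly
# the throughput question the RVU metric never answers.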
 
Last edited:
  • Like
  • Love
Reactions: 3 users
We need psychologists in leadership roles to combat the bobbleheads! “Episode of care.” Omg! Where do they come up with this stuff, and why? What is interesting is that it is always part of a plan to make things better. It is fascinating. Fee-for-service, privately run healthcare is driven by greed, which obviously can lead to harmful practices. Then we have the government-run system, which is run by bureaucrats, which obviously leads to incompetence. Finally, when the two are connected, the greedy take advantage of the incompetent to make even more money. Cynical Monday morning post. Guess I’ll just keep running my little business, try not to be too greedy or incompetent, and keep my head down so they don’t turn their sights on me.
 
  • Like
Reactions: 2 users

I just had my yearly eval and my supervisor gave me "exceeds expectations" even though my RVUs are on the lower side because they think I'm doing great clinical work and the numbers aren't the best measure of the actual work I'm doing. That was so refreshing!
 
  • Like
Reactions: 7 users
We need psychologists in leadership roles to combat the bobbleheads!

I wish that was the solution in most cases. However, I have encountered plenty of both from people with a psychology license. Plenty of greed in the PP space. In the VA, I find a system that refuses to acknowledge its own shortcomings. You have folks who have never left the VA and have a limited understanding of billing and RVUs trying to manage front-line staff in a blind-leading-the-blind situation. It is not hard. Read the codes and stop misinforming others. In both cases, it is because you won't pay management enough. So you get the young and naive, or the just plain incompetent.
 
Last edited:
  • Like
  • Love
Reactions: 3 users

Definitely true. I find myself wondering why I should work so hard. I could fill my caseload with treatment-resistant veterans who want to rant about the world for an hour without interruption while I surf the internet, and I'd have better RVU numbers (which seem to be all anyone has cared about for the past two years). I could probably see the same 10-20 patients until I retire.

The same can be true in academia. Ever notice how a lot of folks who publish frequently do so in the same 1 or 2 journals? Not all of them are quality journals.
 
Last edited:
  • Like
Reactions: 1 user
This seems to be what at least some folks consistently do once they hit about year 20 in the "system." Not that I condone it, but I can understand how they get to that point.
 
  • Like
Reactions: 1 user

My mother always said I was a quick study.
 
  • Haha
Reactions: 2 users
This is what the experts refer to as a 'moral hazard.'

But I'm about there with ya. Why try hard to do effective therapy when the system punishes that?
 
  • Like
Reactions: 1 user

It is the moral hazard created by the moral hazard management put into place. We pay the consequences of their poorly designed rules, and the result is gaming the system.
 
  • Like
Reactions: 1 user
This seems to be what at least some folks consistently do once they hit about year 20 in the "system." Not that I condone it, but I can understand how they get to that point.

Or keeping the patient the full 60 min, which then puts you behind on notes and admin time.
 
  • Like
Reactions: 1 user
Yet another 'damned if you do, damned if you don't' provider trap built into the system with the end result of increasing stress and burnout.
 
But no one has done a fifth-grader level of arithmetic/analysis of the above.

New pre-employment test just dropped:

[attached image: 5.jpg]
 
  • Haha
Reactions: 1 user