VA Mental Health Provider Venting / Problem-solving / Peer Support Thread

Do you guys also have the stupidest system for MHV? Like, they email everyone in the clinic so the majority of the time you're clicking the link for no reason?
I remember a few years ago, our messages were (thankfully) mostly just our own. But we were still divided into small teams that didn't necessarily make a lot of sense, so I would occasionally get messages meant for other providers/teams (e.g., the PCT). If I'd had to deal with getting emails for the entire facility or service, I probably would've thrown my computer out a window.
 
I remember a few years ago, our messages were (thankfully) mostly just our own. But we were still divided into small teams that didn't necessarily make a lot of sense, so I would occasionally get messages meant for other providers/teams (e.g., the PCT). If I'd had to deal with getting emails for the entire facility or service, I probably would've thrown my computer out a window.
Sweet, sweeeeeet revenge...

 
I've been trying to tell people in the VA claims subs that you don't need a diagnosis from VA MH to get SC, but no one believes me.
Omg, I heard that so often when I did C&Ps, or I'd see these Veterans in my assessment clinic just setting the stage (they think) for an examination by telling me they have all the things.
 
Anyone else’s VA on a strict 24-hour rule for notes, where weekends, holidays, and leave DO count toward the 24 hours?
Your VA seems really, really, really bad even by VA standards, based on this and your other posts.

They track our notes against the 24-hour standard, but the only list that gets sent out (the naughty list) is if you don't complete a note within a week, which is super easy to manage. That's except for crisis-related notes or notes where we need to make mandated reports, of course.
 
Anyone else’s VA on a strict 24-hour rule for notes, where weekends, holidays, and leave DO count toward the 24 hours?

Your VA seems really, really, really bad even by VA standards, based on this and your other posts.

They track our notes against the 24-hour standard, but the only list that gets sent out (the naughty list) is if you don't complete a note within a week, which is super easy to manage. That's except for crisis-related notes or notes where we need to make mandated reports, of course.

This. The standard is 24 hours and yes, weekends count. I believe that is actually a national rule. That said, they break down note completion at 24, 48, and 72 hours. Usually, 72 hours is the real naughty list. However, some chiefs are more difficult about such things. I stopped scheduling late Friday sessions because this became an issue one year at the departmental level. That chief eventually quit, though, and back to normal we went.
 
We require notes same day, but they can be placeholders if needed.
 
This. The standard is 24 hours and yes, weekends count. I believe that is actually a national rule. That said, they break down note completion at 24, 48, and 72 hours. Usually, 72 hours is the real naughty list. However, some chiefs are more difficult about such things. I stopped scheduling late Friday sessions because this became an issue one year at the departmental level. That chief eventually quit, though, and back to normal we went.
The current COS is really pushing for the 24-hour list to be the true "naughty list" on notes, which wouldn't be too bad if weekends, leave, holidays, etc., weren't included.
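
For what it's worth, here's a purely illustrative Python sketch of what the two versions of the rule imply for a Friday-afternoon session. The dates and rollover logic are my own assumptions for the example, not any VA system's actual deadline math, and real holiday/leave handling would need an actual facility calendar.

```python
# Illustrative only: strict 24-calendar-hour deadline vs. one that
# rolls weekend-landing deadlines forward. Holidays/leave are ignored.
from datetime import datetime, timedelta

def deadline_strict(seen: datetime) -> datetime:
    """Strict rule: 24 calendar hours; weekends count."""
    return seen + timedelta(hours=24)

def deadline_skip_weekends(seen: datetime) -> datetime:
    """Lenient rule: a deadline landing on a weekend rolls to Monday."""
    due = seen + timedelta(hours=24)
    while due.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        due += timedelta(days=1)
    return due

seen = datetime(2025, 1, 3, 16, 0)   # a 4 PM Friday session
print(deadline_strict(seen))         # 2025-01-04 16:00 (Saturday)
print(deadline_skip_weekends(seen))  # 2025-01-06 16:00 (Monday)
```

Under the strict version, that Friday note is already late before anyone is back in the building.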
 
I'm gonna be super cranky and provocative for a second:

Is there any evidence that the VA's suicide prevention system, specifically the high risk flag and accompanying requirements, actually works?
Honestly, same question.
 
I'm gonna be super cranky and provocative for a second:

Is there any evidence that the VA's suicide prevention system, specifically the high risk flag and accompanying requirements, actually works?

Works as an effective deterrent to suicide attempts or as political CYA?

I feel like this falls into the category of no one has a better idea.
 
Works as an effective deterrent to suicide attempts or as political CYA?

I feel like this falls into the category of no one has a better idea.

Suicide attempts. I get what it was designed for, and there are definitely cases where it'd be super helpful. But there are also a lot of cases where the requirements don't really make sense clinically, or could even be counterproductive.
 
Suicide attempts. I get what it was designed for, and there are definitely cases where it'd be super helpful. But there are also a lot of cases where the requirements don't really make sense clinically, or could even be counterproductive.

I have yet to encounter a truly well-done suicide prevention plan. Not even the inpatient folks do more than the minimum. Too time-intensive.
 
I'm gonna be super cranky and provocative for a second:

Is there any evidence that the VA's suicide prevention system, specifically the high risk flag and accompanying requirements, actually works?
Actually, no. The VA/DoD Clinical Practice Guideline (which reviews the quality of the empirical evidence for/against various practices) essentially says as much.

Except...no one actually reads it. We pretend like it doesn't exist.

I actually read the damned thing when it came out in 2024 (most recent version), and it's beyond demoralizing to realize that all of the cumbersome required "life-saving" rituals/practices have recs that are 'weak for' or 'neither for nor against' their use.


Unfortunately, the Church of Suicide Prevention holds more sway in the modern VA healthcare system than the Catholic Church did in most nations of medieval Europe.

"No one expects The Spanish Inquisition!!!"
 
I need somewhere to complain about this, and this seemed the most relevant thread. We had someone from informatics stop by our clinic to inform us that our patient satisfaction scores had dropped by *gasp* 10%!!!!! from the high 90s to the high 80s, shown to us in a lovely chart that scaled the y-axis in increments of 10. You might all be wondering how patient satisfaction was measured. Was it a validated scale that accounted for multidimensionality? Lol, nope! It was a single item, dichotomized, where the two highest response categories were distilled into a single category and the rest of the data was thrown out. But surely there was careful item analysis to determine how respondents were using the scale? Lol, nope. High number go down bad; high number go up good. Were any inferential statistics done, or did we just infer patterns from trend lines that may or may not exist? Lol, nope. But that's not going to stop us from spending 30 minutes discussing how we can remediate our insufficiencies.
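
For anyone curious, this is roughly the sanity check that got skipped: a two-proportion z-test. The counts below are entirely made up for illustration, since we were never shown the monthly n.

```python
# Hypothetical numbers: is a drop from ~95% to ~88% "top-box"
# satisfaction even distinguishable from sampling noise?
from math import sqrt, erf

def two_prop_z_test(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approx
    return z, p_value

# e.g., 57/60 top-box responses last period vs. 53/60 this period
z, p = two_prop_z_test(57, 60, 53, 60)
print(f"z = {z:.2f}, p = {p:.3f}")  # p ≈ .19 here: the "drop" is noise
```

At sample sizes like these, that scary 10-point chart drop isn't even statistically distinguishable from noise.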
 
I'm gonna be super cranky and provocative for a second:

Is there any evidence that the VA's suicide prevention system, specifically the high risk flag and accompanying requirements, actually works?
My understanding is flags = increased attention and use of services, which may confer reduced suicide risk overall. But flags themselves, independent of the subsequent services they trigger, do not lead to any significant decreases in mortality.
 
I need somewhere to complain about this, and this seemed the most relevant thread. We had someone from informatics stop by our clinic to inform us that our patient satisfaction scores had dropped by *gasp* 10%!!!!! from the high 90s to the high 80s, shown to us in a lovely chart that scaled the y-axis in increments of 10. You might all be wondering how patient satisfaction was measured. Was it a validated scale that accounted for multidimensionality? Lol, nope! It was a single item, dichotomized, where the two highest response categories were distilled into a single category and the rest of the data was thrown out. But surely there was careful item analysis to determine how respondents were using the scale? Lol, nope. High number go down bad; high number go up good. Were any inferential statistics done, or did we just infer patterns from trend lines that may or may not exist? Lol, nope. But that's not going to stop us from spending 30 minutes discussing how we can remediate our insufficiencies.
No statistic (or score) is interpretable in the absence of (proper, relevant) norms.

Variability across time is a feature of the natural world.

I may just be cynical, but I'd put more stock in the quality of a VA clinician whose 'patient satisfaction' scores aren't perfect or insanely high. The only way to achieve 'patient satisfaction' scores at the top of the scale is to bend over backwards to give patients the compensable diagnoses they desire rather than the diagnosis they actually have. The current folks in charge won't rest until ordering up a PTSD diagnosis is as quick and straightforward as ordering up a Big Mac at the McDonald's drive-thru window.

The VA has tons of doctoral-level psychologists who are, by virtue of their training, subject matter experts on statistical hypothesis testing, experimental methods, threats to internal/external validity, etc....

But they leave the "black belt" data analysis to people who don't know the first thing about it.
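
To put a number on the "variability is a feature" point, here's a tiny Monte Carlo sketch. The true rate and the monthly n are invented for illustration.

```python
# Even with a FIXED true top-box rate, observed monthly percentages
# wander on their own; no intervention or "insufficiency" required.
import random

random.seed(1)
TRUE_RATE, N_PER_MONTH = 0.92, 60  # made-up parameters
for month in range(1, 13):
    hits = sum(random.random() < TRUE_RATE for _ in range(N_PER_MONTH))
    print(f"month {month:>2}: {100 * hits / N_PER_MONTH:.0f}% top-box")
```

With n = 60 a month, the standard error alone is about 3.5 points, so swings between the high 80s and high 90s are expected with no change whatsoever in the underlying "satisfaction."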
 
No statistic (or score) is interpretable in the absence of (proper, relevant) norms.

Variability across time is a feature of the natural world.

I may just be cynical, but I'd put more stock in the quality of a VA clinician whose 'patient satisfaction' scores aren't perfect or insanely high. The only way to achieve 'patient satisfaction' scores at the top of the scale is to bend over backwards to give patients the compensable diagnoses they desire rather than the diagnosis they actually have. The current folks in charge won't rest until ordering up a PTSD diagnosis is as quick and straightforward as ordering up a Big Mac at the McDonald's drive-thru window.

The VA has tons of doctoral-level psychologists who are, by virtue of their training, subject matter experts on statistical hypothesis testing, experimental methods, threats to internal/external validity, etc....

But they leave the "black belt" data analysis to people who don't know the first thing about it.

It was especially frustrating because I've rejected papers as a peer reviewer for **** statistics like these, but in these meetings I don't have a ton of political capital to just run the room on this.
 
It was especially frustrating because I've rejected papers as a peer reviewer for **** statistics like these, but in these meetings I don't have a ton of political capital to just run the room on this.
Exactly. You will be hated for knowing far more than they do about it. I've tried.

One time it worked out okay, though. They put me on some random 'professional (review?) committee' or some such crap. The committee was an interdisciplinary one with people from various professions represented (I was the psychology service guy, yay!).

For the first several meetings, I sat through the absolutely mind-numbing task of reviewing ALL (there were a HUNDRED or more) of the trainees' (in our case, interns') chart notes just to check like two to four (can't remember) things, including (a) was the note entered in less than 24 hours from the date/time of the appointment, (b) was the note cosigned by the appropriate licensed supervisor within 24 hours, etc.

So, after three months of this crap and it always being 100% (rarely, I think, they would find a fallout or two, but it was always above 90%, which was the criterion)...

I said...um...you know...we could easily achieve the same level of quality review by just, you know, SAMPLING something like 10% of the cases each month (like, friggin 10-15 cases to review) rather than the full 100% (100-150 notes), and--on the off chance that one or more of those sampled cases is a 'fallout'--we can go back to the masochistic practice of reviewing EVERY SINGLE GD NOTE.

They thought I was Leonardo Da Vinci or Isaac Newton or something.

Sampling theory. Friggin undergrad psychology methods stuff.
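
For anyone who has to sell this to their own committee, the math is one hypergeometric calculation. The counts below mirror the rough 150-notes/15-sampled figures above; the fallout counts are hypothetical.

```python
# P(a random sample catches at least one "fallout" note), via the
# hypergeometric distribution. Population/sample sizes are illustrative.
from math import comb

def p_detect(population: int, sample: int, fallouts: int) -> float:
    """P(sample contains >= 1 of `fallouts` bad notes)."""
    if fallouts == 0:
        return 0.0
    p_miss = comb(population - fallouts, sample) / comb(population, sample)
    return 1 - p_miss

for k in (1, 5, 10, 15):  # hypothetical number of bad notes out of 150
    print(f"{k:>2} fallouts -> P(caught by a 15-note sample) = "
          f"{p_detect(150, 15, k):.2f}")
```

A 10% sample can miss one stray bad note, but it catches a systemic problem (enough fallouts to breach the 90% criterion) with high probability, which is exactly what the review exists to detect.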
 
Exactly. You will be hated for knowing far more than they do about it. I've tried.

One time it worked out okay, though. They put me on some random 'professional (review?) committee' or some such crap. The committee was an interdisciplinary one with people from various professions represented (I was the psychology service guy, yay!).

For the first several meetings, I sat through the absolutely mind-numbing task of reviewing ALL (there were a HUNDRED or more) of the trainees' (in our case, interns') chart notes just to check like two to four (can't remember) things, including (a) was the note entered in less than 24 hours from the date/time of the appointment, (b) was the note cosigned by the appropriate licensed supervisor within 24 hours, etc.

So, after three months of this crap and it always being 100% (rarely, I think, they would find a fallout or two, but it was always above 90%, which was the criterion)...

I said...um...you know...we could easily achieve the same level of quality review by just, you know, SAMPLING something like 10% of the cases each month (like, friggin 10-15 cases to review) rather than the full 100% (100-150 notes), and--on the off chance that one or more of those sampled cases is a 'fallout'--we can go back to the masochistic practice of reviewing EVERY SINGLE GD NOTE.

They thought I was Leonardo Da Vinci or Isaac Newton or something.

Sampling theory. Friggin undergrad psychology methods stuff.

Have they never watched any election coverage in their lives? Do they think those people sample 100% of the population?
 
Have they never watched any election coverage in their lives? Do they think those people sample 100% of the population?
It was bad, man.

The worst part was the fact that we almost NEVER found a 'fallout' case and they'd been doing this for YEARS (wasting time reviewing every single student note for every single discipline at the facility every single month).
 
Did anyone get above a fully successful rating this year? With the new standards, it honestly seems impossible.
Nope.

But I have never harbored any illusions that the actual quality of my clinical work would EVER be formally or explicitly acknowledged by the system here.

See also the book entitled "The Tyranny of Metrics."
 
Did anyone get above a fully successful rating this year? With the new standards, it honestly seems impossible.

I'm being told our whole dept is getting fully successful only, per HR. Goodbye, annual productivity bonus; you were small but better than nothing. Just one more of a thousand small cuts.
 
I'm being told our whole dept is getting fully successful only, per HR. Goodbye, annual productivity bonus; you were small but better than nothing. Just one more of a thousand small cuts.
Seems like it has to be against some sort of regulation for HR to make a blanket statement like that.
 
Seems like it has to be against some sort of regulation for HR to make a blanket statement like that.

HR did not make the statement. They just kicked back all ratings that were higher than fully successful, which was likely most of us. They now get final say on the ratings. Got a heads-up because the final ratings are delayed, and this is the reason.
 
When I was last in the VA, they told neuropsych specifically that, the way it was set up, it was impossible for us to "exceed expectations" or whatever the top category was.
We were told the same. I think part of it related to one of the quality metrics being that the psychologist provided psychotherapy. Never mind that our bosses also consistently told us they would prefer we spent all of our time performing assessments because of our backlog.
 
We were told the same. I think part of it related to one of the quality metrics being that the psychologist provided psychotherapy. Never mind that our bosses also consistently told us they would prefer we spent all of our time performing assessments because of our backlog.

Yeah, a real incentive to go out there and try to overperform...
 
Question for the practicing VA clinical psychologists:

Have any of you been successful (and if so, how?) in getting administrative support to be able to utilize alternative instruments to the PCL-5 in your clinical work? For example, proprietary measures such as the DAPS or TSI-2 (or others)?

If you are in an 'assessment psychologist' or neuropsychologist position where your main duties involve more assessment than therapy, does this make a difference? Surely the neuropsych folks have been able (in some contexts) to get admin support for obtaining/using some proprietary measures.
 
As a DOD psychologist, I've been thinking of jumping ship to the VA because they get paid over 20 percent more for the same GS step in my city. I know the SSR pay increase was passed sometime in 2022, but is it permanent, or will it eventually sunset? I actually like my gig, and my caseload is very reasonable, but a 40k pay increase may be worth the jump.
 
Question for the practicing VA clinical psychologists:

Have any of you been successful (and if so, how?) in getting administrative support to be able to utilize alternative instruments to the PCL-5 in your clinical work? For example, proprietary measures such as the DAPS or TSI-2 (or others)?

If you are in an 'assessment psychologist' or neuropsychologist position where your main duties involve more assessment than therapy, does this make a difference? Surely the neuropsych folks have been able (in some contexts) to get admin support for obtaining/using some proprietary measures.
I’m assessment and research focused. I usually get most instruments I ask for as long as our neuropsychologists have what they need. Are you looking for PCL-5 alternatives for routine clinical outcome monitoring? Or for one-time assessments?
 
I’m assessment and research focused. I usually get most instruments I ask for as long as our neuropsychologists have what they need. Are you looking for PCL-5 alternatives for routine clinical outcome monitoring? Or for one-time assessments?
I'm definitely looking for PCL-5 alternatives (to include in a multi-method psychological assessment/evaluation process for purposes of case formulation, differential diagnosis, and treatment planning). I'd like something that has embedded validity scales. I'm aware of some of the cool initial work folks have done exploring embedded validity scales for the PCL-5, but I don't think that stuff is 'ready for prime time' yet, and everyone gets PCL-5s thrown at them constantly already.

I frequently use the MMPI-2-RF in these assessments, so I have broadband psychopathology (under the HiTOP model) covered in those cases, as well as embedded validity scales for that measure. But I'd like a measure/checklist for PTSD that isn't so face-valid (people who just circle 3's and 4's) and that's a bit more supplemental to the ocean of PCL-5s/PHQ-9s most of these patients have been swimming in. So alternatives such as the DAPS and TSI-2 have piqued my interest, along with alternatives to the PHQ/GAD approach to 'operationalizing' the DSM-5 criterion sets, where people just circle high numbers. Alternatives/additions to the PHQ/GAD in the form of the MASQ or IDAS-II to measure depression/anxiety presentations would also be nice.

I'm really interested in the new Inventory of Problems (IOP-29), but since it is a dedicated/standalone measure of response bias, it would be a non-starter in VA clinical practice. Though I suspect that in the coming years the VA may be forced to admit that symptom overreporting is a HUGE issue compromising the validity/integrity of both its MH research programs/publications and its clinical operations (under 'measurement-based care' failures).

It would be nice to be able to selectively utilize the SIRS or SIMS in cases where there is compelling preliminary evidence of likely symptom overreporting--for example, people who invalidate the MMPI-2-RF protocol due to overreporting psychopathological/cognitive/somatic problems, or the folks who regularly produce PCLs in the 75+ range and PHQ-9s in the 25+ range despite observational, chart review, and collateral (work performance) data clearly discrepant with that, and/or who report all sorts of bizarre/rare standalone 'pseudo-psychotic' symptoms, etc. But that's a whole 'nother topic for another time.
 
As a DOD psychologist I've been thinking of jumping ship to the VA because they get paid over 20 percent more for the same GS step in my city. I know the SSR pay increase was passed sometime in 2022, but is it permanent or will it eventually sunset? I actually like my gig, caseload is very reasonable, but a 40k pay increase may be worth the jump.

I would check if the SSR is available to you. My understanding is the Trump Administration stopped SSRs for new hires, and current folks with SSRs will be transferred to the closest GS level to their current salary.
 
I would check if the SSR is available to you. My understanding is the Trump Administration stopped SSRs for new hires, and current folks with SSRs will be transferred to the closest GS level to their current salary.
So, does this mean that a GS-13 with a current SSR will have it stripped at the beginning of the year (i.e., the additional SSR rate will be gone and they'll be back to the normal GS-13 salary)?
 
So, does this mean that a GS-13 with a current SSR will have it stripped at the beginning of the year (i.e., the additional SSR rate will be gone and they'll be back to the normal GS-13 salary)?

I know it was cancelled for IT and HR for sure. In those cases, the articles mentioned retained salary rates for current employees. I assume this would jump them to step 9 or 10. However, that means no future step raises for them. They also cancelled the critical skills incentives for new hires.

Not sure exactly what that means for psychology as we are not well covered in the news. However, our SSRs were also tied to the PACT Act, which was only funded up to 2027.
 
I'm definitely looking for PCL-5 alternatives (to include in a multi-method psychological assessment/evaluation process for purposes of case formulation, differential diagnosis, and treatment planning). I'd like something that has embedded validity scales. I'm aware of some of the cool initial work folks have done exploring embedded validity scales for the PCL-5, but I don't think that stuff is 'ready for prime time' yet, and everyone gets PCL-5s thrown at them constantly already.

I frequently use the MMPI-2-RF in these assessments, so I have broadband psychopathology (under the HiTOP model) covered in those cases, as well as embedded validity scales for that measure. But I'd like a measure/checklist for PTSD that isn't so face-valid (people who just circle 3's and 4's) and that's a bit more supplemental to the ocean of PCL-5s/PHQ-9s most of these patients have been swimming in. So alternatives such as the DAPS and TSI-2 have piqued my interest, along with alternatives to the PHQ/GAD approach to 'operationalizing' the DSM-5 criterion sets, where people just circle high numbers. Alternatives/additions to the PHQ/GAD in the form of the MASQ or IDAS-II to measure depression/anxiety presentations would also be nice.

I'm really interested in the new Inventory of Problems (IOP-29), but since it is a dedicated/standalone measure of response bias, it would be a non-starter in VA clinical practice. Though I suspect that in the coming years the VA may be forced to admit that symptom overreporting is a HUGE issue compromising the validity/integrity of both its MH research programs/publications and its clinical operations (under 'measurement-based care' failures).

It would be nice to be able to selectively utilize the SIRS or SIMS in cases where there is compelling preliminary evidence of likely symptom overreporting--for example, people who invalidate the MMPI-2-RF protocol due to overreporting psychopathological/cognitive/somatic problems, or the folks who regularly produce PCLs in the 75+ range and PHQ-9s in the 25+ range despite observational, chart review, and collateral (work performance) data clearly discrepant with that, and/or who report all sorts of bizarre/rare standalone 'pseudo-psychotic' symptoms, etc. But that's a whole 'nother topic for another time.
I have the IDAS-II and a scoring tool if you’d like a copy of both. Unfortunately, no embedded validity indices there. I have copies of the IOP-29 and the Memory module, but you have to pay for those interpretations.

I use the SIMS and the IDAS-II when I moonlight in our residential program doing bios once a week (not my typical longer assessment). That works well overall when I can’t get an RF administered.

When I’m interviewing for PTSD, depending on the context, I may use the PSSI-5, MINI, DIAMOND, or CAPS, but those are very face-valid. I like the PSSI-5 because I can capture symptom frequency and not just intensity/severity. I’ve never used the TSI but would like to try it; I have a peer who uses it regularly. I also have the M-FAST, which I’ll use on occasion; it’s easy to administer.

If you want to PM me your VA email, I can reach out on Teams tomorrow and share.
 