It's been tough but at the same time kind of nice to be untethered from it. Everyone on my panel has my number so they can call if they need something.

> I was able to get the one MHV message I wanted to send off today but otherwise yeah couldn't do much on that front.

There are a lot of great things about working inpatient, but not having MHV is definitely up there.
> Do you guys also have the stupidest system for MHV? Like, they email everyone in the clinic so the majority of the time you're clicking the link for no reason?

I remember a few years ago, our messages were (thankfully) mostly just our own. But we were still divided into small teams that didn't necessarily make a lot of sense, so I would occasionally get messages meant for other providers/teams (e.g., the PCT). If I'd had to deal with getting emails for the entire facility or service, I probably would've thrown my computer out a window.
> Do you guys also have the stupidest system for MHV? [...]

There is a setting in MHV to only get alerted to messages assigned to you!
Sweet, sweeeeeet revenge...I remember a few years ago, our messages were (thankfully) mostly just our own. But we were still divided into small teams that didn't necessarily make a lot of sense, so I would occasionally get messages meant for other providers/teams (e.g., the PCT). If I'd had to deal with getting emails for the entire facility or service, I probably would've thrown my computer out a window.
> I've been trying to tell people in the VA claims subs that you don't need a diagnosis from VA MH to get SC, but no one believes me.

Omg, I heard that so often when I did C&Ps, or I'd see these Veterans in my assessment clinic just setting the stage (they think) for an examination by telling me they have all the things.
With the outage, can we go home? Shiori Jr. will be here in a week, and I want to go home.
> Which outage is this?

Everything over here was down. CPRS, VVC, half the websites, etc. It looks like it's back up now.
I got a last minute intake scheduled with someone who's completely inappropriate for outpatient therapy (in terms of acuity and risk) and ughhhhh
Anyone else's VA on a strict 24-hour rule for notes, where weekends, holidays, and leave DO count toward the 24 hours?

That's...dumb

Your VA seems really, really, really bad even for VA standards, based on this and your other posts.
They keep track of our notes within 24 hours, but the only list that gets sent out (the naughty list) is for notes not completed within a week, which is super easy to manage. That's for notes that aren't crisis-related or ones where we need to make mandated reports, of course.
> This. The standard is 24 hours and yes, weekends count. I believe that is actually a national rule. That said, they break down note completion as 24, 48, and 72 hours. Usually, 72 hours is the real naughty list. However, some chiefs are more difficult about such things. I stopped scheduling late Friday sessions because this became an issue one year at the departmental level. That chief eventually quit, though, and back to normal we went.

The current COS is really pushing for the 24-hour list to be the true "naughty list" on notes, which wouldn't be too bad if weekends, leave, holidays, etc., weren't included.
I'm gonna be super cranky and provocative for a second:

Is there any evidence that the VA's suicide prevention system, specifically the high-risk flag and accompanying requirements, actually works?

Honestly, same question.
Works as an effective deterrent to suicide attempts or as political CYA?
I feel like this falls into the category of "no one has a better idea."
Suicide attempts. I get what it was designed for, and there are definitely cases where it'd be super helpful. But there are also a lot of cases where the requirements don't really make sense clinically, or could even be counterproductive.
> Is there any evidence that the VA's suicide prevention system, specifically the high-risk flag and accompanying requirements, actually works?

Actually, no. The VA/DoD Clinical Practice Guidelines (which review the quality of the empirical evidence for/against various practices) essentially say as much.
> Is there any evidence that the VA's suicide prevention system [...] actually works?

My understanding is flags = increased attention and use of services, which may convey reduced suicide risk overall. But flags themselves, independent of the subsequent services they trigger, do not lead to any significant decreases in mortality.
> I need somewhere to complain about this and this seemed the most relevant thread. We had someone from informatics stop by our clinic to inform us that our patient satisfaction scores had dropped by *gasp* 10%!!!!! from the high 90s to the high 80s, shown to us in a lovely chart that scaled the y-axis in increments of 10. You might all be wondering how patient satisfaction was measured. Was it a validated scale that accounted for multidimensionality? Lol, nope! It was a single item, dichotomized, where the two highest response categories were distilled into a single category and the rest of the data was thrown out. But surely there was careful item analysis to determine how respondents were using the scale? Lol, nope. High number go down bad; high number go up good. Were any inferential statistics done, or did we just infer patterns from trend lines that may or may not exist? Lol, nope. But that's not going to stop us from spending 30 minutes discussing how we can remediate our insufficiencies.

No statistic (or score) is interpretable in the absence of (proper, relevant) norms.

Variability across time is a feature of the natural world.
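For what it's worth, the inferential test they skipped is a few lines of code. Here's a minimal sketch with made-up numbers (the ~30 respondents per quarter is my assumption, not anything from the thread) showing that a drop from the high 90s to the high 80s on a dichotomized "top-box" item can be pure sampling noise:

```python
# Hypothetical example: is a "top-box" satisfaction drop from ~97% to ~87%
# distinguishable from noise at a plausible clinic sample size?
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    """Two-proportion z-test (pooled). Returns z and a two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Made-up counts: 29 of 30 top-box last quarter vs. 26 of 30 this quarter
z, p = two_prop_ztest(29, 30, 26, 30)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = 1.40, p = 0.16 -- not significant
```

With a single dichotomized item and a clinic-sized n, a 10-point swing is exactly the kind of quarter-to-quarter variability you'd expect by chance.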
I may just be cynical, but I'd put more stock in the quality of a VA clinician whose 'patient satisfaction' scores aren't perfect or insanely high. The only way to achieve 'patient satisfaction' scores at the top of the scale is to bend over backwards to give patients the compensable diagnoses they desire rather than the diagnosis they actually have. The current folks in charge won't rest until ordering up a PTSD diagnosis is as quick and straightforward as ordering up a Big Mac at the McDonald's drive-thru window.
The VA has tons of doctoral-level psychologists who are, by virtue of their training, subject matter experts on statistical hypothesis testing, experimental methods, threats to internal/external validity, etc....
But they leave the "black belt" data analysis to people who don't know the first thing about it.
Exactly. You will be hated for knowing far more than they do about it. I've tried.
It was especially frustrating because I've rejected papers as a peer reviewer for **** statistics like these, but in these meetings I don't have a ton of political capital to just run the room on this.
One time it worked out okay, though. They put me on some random 'professional (review?) committee' or some such crap. The committee was an interdisciplinary one with people from various professions represented (I was the psychology service guy, yay!).
For the first several meetings, I sat through the absolutely mind-numbing task of reviewing ALL (there were a HUNDRED or more) of the trainees' (in our case, interns') chart notes just to check like two to four (can't remember) things, including (a) was the note entered within 24 hours of the date/time of the appointment, (b) was the note cosigned by the appropriate licensed supervisor within 24 hours, etc.
So, after three months of this crap and it always being 100% (I think rarely they would find a fallout or two, but it was always above 90%, which was the criterion)...
I said... um... you know... we could easily achieve the same level of quality review by just, you know, SAMPLING something like 10% of the cases each month (like, friggin 10-15 cases to review) rather than the full 100% (100-150 notes), and then, on the off chance that one or more of those sampled cases is a 'fallout,' we can go back to the masochistic practice of reviewing EVERY SINGLE GD NOTE.
They thought I was Leonardo Da Vinci or Isaac Newton or something.
Sampling theory. Friggin undergrad psychology methods stuff.
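The arithmetic behind that proposal is basic acceptance sampling. A rough sketch (the 150-note volume comes from the story above; the fallout counts are hypothetical):

```python
# If d of N notes are "fallouts," the chance a simple random sample of n
# notes catches at least one is hypergeometric: 1 - C(N-d, n) / C(N, n).
from math import comb

def p_detect(N: int, d: int, n: int) -> float:
    """Probability a random sample of n notes from N flags >= 1 of d fallouts."""
    return 1 - comb(N - d, n) / comb(N, n)

# 150 notes per month, a 10% sample (15 notes), varying true fallout counts
for d in (1, 5, 10, 15):
    print(f"{d:>2} fallouts in 150 -> P(caught by sample of 15) = {p_detect(150, d, 15):.2f}")
```

A single stray bad note only gets caught about 10% of the time, but a fallout rate anyone should actually worry about (say, 10 bad notes out of 150) gets flagged roughly two months in three, and the "go back to reviewing everything" trigger covers the rest.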
> Have they never watched any election coverage in their lives? Do they think those people sample 100% of the population?

It was bad, man.
Did anyone get above a fully successful rating this year? With the new standards, it honestly seems impossible.

Nope.
> I'm being told our whole dept is getting fully successful only based on HR. Goodbye annual productivity bonus, you were small but better than nothing. Just one more of a thousand small cuts.

Seems like it has to be against some sort of regulation for HR to make a blanket statement like that.
> When I was last in the VA, they told neuropsych specifically that the way it was set up, it was impossible for us to "exceed expectations" or whatever the top category was.

We were told the same. I think part of it related to one of the quality metrics being that the psychologist provided psychotherapy. Never mind that our bosses also consistently told us they would prefer we spent all of our time performing assessments because of our backlog.
> Question for the practicing VA clinical psychologists:
> Have any of you been successful (and if so, how?) in getting administrative support to be able to utilize alternative instruments to the PCL-5 in your clinical work? For example, proprietary measures such as the DAPS or TSI-2 (or others)?
> If you are in an 'assessment psychologist' or neuropsychologist position where your main duties involve more assessment than therapy, does this make a difference? Surely the neuropsych folks have been able (in some contexts) to get admin support for obtaining/using some proprietary measures.

I'm assessment and research focused. I usually get most instruments I ask for as long as our neuropsychologists have what they need. Are you looking for PCL-5 alternatives for routine clinical outcome monitoring? Or for one-time assessments?
> I'm assessment and research focused. I usually get most instruments I ask for as long as our neuropsychologists have what they need. Are you looking for PCL-5 alternatives for routine clinical outcome monitoring? Or for one-time assessments?

I'm definitely looking for PCL-5 alternatives (to include in a multi-method psychological assessment/evaluation process for purposes of case formulation, differential diagnosis, and treatment planning). I'd like something that has some embedded validity scales. I'm aware of some of the cool initial work folks have done trying to explore embedded validity scales for the PCL-5, but I don't think that stuff is 'ready for prime time' yet, and everyone gets PCL-5s thrown at them constantly already.

I frequently use the MMPI-2-RF in these assessments, so I have broadband psychopathology (under the HiTOP model) covered in those cases, as well as embedded validity scales for that measure. But I'd like a measure/checklist for PTSD that isn't so face-valid (people who just circle 3's and 4's), and something a bit more supplemental to the ocean of PCL-5s/PHQ-9s that most of these patients have been swimming in. So, alternatives such as the DAPS and TSI-2 have piqued my interest, along with alternatives to the PHQ/GAD approach to 'operationalizing' the DSM-5 criterion sets, where people just circle high numbers. Alternatives/additions to the PHQ/GAD in the form of the MASQ or IDAS-II to measure depression/anxiety presentations would also be nice.

I'm really interested in the new Inventory of Problems (IOP-29), but since it is a dedicated/standalone measure of response bias, it would be a 'non-starter' in VA clinical practice, though I have suspicions that in the coming years the VA may be forced to admit that symptom overreporting is a HUGE issue compromising the validity/integrity of both its MH research programs/publications and its clinical operations (under 'measurement-based care' failures). It would be nice to be able to selectively utilize the SIRS or SIMS in cases where there is compelling preliminary evidence of likely symptom overreporting--for example, people who invalidate the MMPI-2-RF protocol by overreporting psychopathological/cognitive/somatic problems, folks who regularly produce PCLs in the 75+ range and PHQ-9s in the 25+ range despite observational, chart review, and collateral (work performance) data clearly discrepant with that, and/or folks reporting all sorts of bizarre/rare standalone 'pseudo-psychotic' symptoms. But that's a whole 'nother topic for another time.
As a DoD psychologist, I've been thinking of jumping ship to the VA because they get paid over 20 percent more for the same GS step in my city. I know the SSR pay increase was passed sometime in 2022, but is it permanent, or will it eventually sunset? I actually like my gig; the caseload is very reasonable. But a 40k pay increase may be worth the jump.
> I would check if the SSR is available to you. My understanding is the Trump Administration stopped SSRs for new hires, and current folks with SSRs will be transferred to the closest GS level to their current salary.

So, does this mean that a GS-13 with a current SSR will have it stripped at the beginning of the year (i.e., the additional SSR rate will be gone from their salary and they'll be back to the normal GS-13 salary)?
> I'm definitely looking for PCL-5 alternatives (to include in a multi-method psychological assessment/evaluation process for purposes of case formulation, differential diagnosis, and treatment planning). [...]

I have the IDAS-II and a scoring tool if you'd like a copy of both. Unfortunately, no embedded validity indices there. I have copies of the IOP-29 and the Memory module, but you have to pay for those interpretations.