Suicide risk assessment


Merely

I hear this term being used a lot. What exactly does this mean? I’m in the ED being consulted to see a pt. I talk to the pt, find out exactly what’s going on, do a comprehensive psych evaluation exploring suicidal ideation, past psych history, etc. I don’t really understand what a suicide risk assessment is. Is it a questionnaire I’m supposed to read off of? Or is it simply talking to the pt about his suicidal ideation and all that that entails, like past thoughts, plans, etc.? In other words, is there a standardized thing we should be asking, or is generally asking about suicide what the risk assessment entails? Thx

 
Suicide is a complex, multi-faceted issue with societal, psychological, biological, genetic and cultural influences and correlates. A number of studies demonstrate experts are marginally, if at all, superior to laypersons in predicting a given individual's suicide risk. The prevailing public health-focused trope at the moment is some variant of 'suicide is preventable' - this is not always true. Certain risk factors are potentially amenable to mitigation. That mitigation is the real goal of a good suicide risk assessment.

Your job is to do a complete psychiatric evaluation, then frame your assessment in terms of protective factors and risk factors. A truly comprehensive suicide risk assessment would include explanations of how you may have bolstered protective factors and mitigated risk factors to the extent possible (some risk factors are not modifiable). This should lead you to a postulation of whether someone is at chronically or acutely elevated risk of lethality - this distinction informs your plan. To wit - when someone says they are suicidal, that is always a risk factor -- whether it is acute or not is the more important distinction.

Examples of protective factors - religion, no prior suicide attempts, lack of access to firearms, demonstrated willingness to reach out in crisis, strong family/interpersonal supports, gainful employment

Examples of risk factors you can mitigate - acute alcohol intoxication (by observing until sobriety), lack of access to care (by plugging into care), recent traumatic event (by inpatient hospitalization for stabilization, potentially)

Examples of risk factors you cannot mitigate - male gender, certain personality disorders, prior suicide attempts, chronic pattern of substance misuse without evidence of desire to change (sometimes)

A great many of our patients are at chronically elevated risk of lethality - fewer are at acutely elevated risk of lethality. Current best practice in most communities favors hospitalization of the latter, and harm reduction approaches for the former, but there are certainly many exceptions to that.

Finally, note that current case law, in general, doesn't expect you to predict suicide (despite what some in the public think). Rather, you're expected to address risk as above and state clearly your professional opinion as to the acuity and level of risk. If whatever you've chosen to do with the patient matches up with your own risk assessment (if thorough), you tend to be in somewhat safer legal waters.
 
I hear this term being used a lot. What exactly does this mean? I’m in the ED being consulted to see a pt. I talk to the pt, find out exactly what’s going on, do a comprehensive psych evaluation exploring suicidal ideation, past psych history, etc. I don’t really understand what a suicide risk assessment is. Is it a questionnaire I’m supposed to read off of? Or is it simply talking to the pt about his suicidal ideation and all that that entails, like past thoughts, plans, etc.? In other words, is there a standardized thing we should be asking, or is generally asking about suicide what the risk assessment entails? Thx

I think there are different things 'suicide risk assessment' means, depending on the context. The system I work in (Veterans Affairs) has currently operationalized it as a 'three-tiered' process (level 1 = PHQ-9 item about thoughts of suicide --> level 2 = C-SSRS (Columbia screener) --> level 3 = full 'comprehensive suicide risk evaluation' (CSRE) interview).

For me, in terms of my actual clinical encounter (and in terms of what has actual treatment utility), one of the most critical things I focus on is 'is this patient currently presenting suicide risk sufficient to defend a decision to hospitalize (and is it necessary in this instance)?' If the answer to that question is 'no' then my next priority is to utilize the remainder of the session to intervene in a manner that--given time constraints--would be most likely to reduce risk of suicide between now and next session. I would probably strive to exercise my judgment (I know, a heresy in hospital systems nowadays) as an individual clinician to determine how best to maximize my time remaining with the patient and adjust accordingly.

Unfortunately, the hospital system I work for has apparently adopted the primitive notion that the more 'comprehensive,' the more fixed, the more concrete and the more (in terms of volume and specificity of questions) the assessment process is, the better. I don't think that this is necessarily the case and I don't think the literature necessarily supports the specific extensive required procedures (in their entirety).

For instance, at present, if I see someone for an initial intake, I have to do the C-SSRS (2nd-level eval); if that is 'positive', then I have to do a full comprehensive suicide risk evaluation interview and (most likely) a separate process/document involving a comprehensive suicide safety plan. I'm also likely to have to do an additional interview/note called an 'overdose suicide behavior report' if there was an attempt/overdose within the past year. This could easily lead to the scenario where one faithfully adheres to all the requirements, documents in all the required templates, and is left with little/no time to actually intervene meaningfully with the patient to reduce risk.

The hospital systems are continually adding to the task of 'suicide risk assessment' without necessarily, in my opinion, adding a whole lot of treatment utility to the process. There was a fairly recent article published in Professional Psychology: Research and Practice ('Suicide Risk Assessment: What Psychologists Need to Know') which debunked some of the assumptions inherent in these types of comprehensive approaches (e.g., the classification into 'Low/Moderate/High' risk isn't really all that useful in predicting suicide--which we can't do anyway, due to low base rates and the uniqueness of each case), but 'suicide prevention' within organizations has taken on a quasi-religious tone where adherence to 'ritual' and professions of faith (e.g., 'we are going to ELIMINATE suicide,' or 'one suicide is too many,' or 'every suicide is preventable') carry the day over rational thinking and attention to the complexities of the task and the scientific literature. Interesting times.
 
To my eyes it sounds like you're doing a suicide risk assessment as part of your general evaluation, but to simplify things in the eyes of administrators, or for medicolegal reasons, I imagine it is helpful to draw explicit attention in your A/P to the fact that you assessed this.

Things like past attempts/substance use will be in my past psych hx. Then, to avoid having to write a 2-page assessment with every single +/- of their risk repeated, in my assessment I tend to write something like the below (obviously different for each pt).

“Given past psychiatric hx, patient is at some chronically elevated risk of suicide, however patient is not suicidal and does not appear to be at imminent acute risk for suicide. Patient is agreeable to decreasing future risk by following up for psychotherapy and removing access to firearms”

I’m not a lawyer or forensic, so not sure if they would suggest something else, but to me that seems to demonstrate that I both evaluated and modified risk, without having to write an encyclopedia.
 
Suicide Risk Assessment (fill in all the below, and done)
Acute Factors: xyz
Chronic Factors: mno
Protective Factors: pqrs
Risk Assessment: Low, low/moderate, moderate/high, high, etc
Modifiable Factors: remove guns, substance sobriety, etc, etc

*Discuss things you will actually do in the Plan section
 
Suicide Risk Assessment (fill in all the below, and done)
Acute Factors: xyz
Chronic Factors: mno
Protective Factors: pqrs
Risk Assessment: Low, low/moderate, moderate/high, high, etc
Modifiable Factors: remove guns, substance sobriety, etc, etc

*Discuss things you will actually do in the Plan section

That seems very simple and reasonable, thank u
 
And every bit as (or even more) 'evidence-based,' responsible, and defensible as the entirety of the VA-mandated multi-level, multi-document extravaganza of documentation.

What's your take on CAMS as a therapeutic/assessment approach? There has at least been an attempt to validate a brief intervention form that I think would fit nicely into the workflow of our ED and the frank honesty/non-adversarial approach appeals to me. But I am not a suicidologist and cannot evaluate the entire literature, so opinions of those who do more thorough assessments more often would be helpful.
 
I have to admit that I'm not specifically trained in the CAMS approach (but I have purchased a recent manual and have perused it--the Managing Suicidal Risk workbook/manual). I think it's based on solid cognitive-behavioral principles but--like any particularly 'structured' protocol--there's a natural limit to how 'effective' it can be that is established by the limit to which your clients (including the veterans I work with in a post-deployment clinic setting) will respond positively to the structure vs. negatively. What frustrates me about most administrators (and even some clinicians) sometimes is their tendency to view 'suicide management' as some univariate clinical process/algorithm when it's anything but. Hopelessness/suicidality can be a 'final common pathway' endpoint with innumerable client-specific causes. In the VA system, we can be limited to monthly sessions with the vast majority of our caseloads. Therapy time is precious, especially when veterans no-show/cancel at a reasonably high rate, come late, etc.

My experience working with patients is that handling suicide risk is very much a 'multivariate' type equation which involves monitoring/intervention around not only 'suicidality' but also other primary elements of their case formulation (homelessness? major depression? psychosis? personality disorder? alcohol use? Likely to lose job and wife leave him if he is involuntarily hospitalized?). There are some things that seem 'straightforward' (in the abstract) that we 'should do' when addressing any clinical issue (including suicidality) with a generic hypothetical 'patient' but the decision of what to do isn't always straightforward with particular patient scenarios. I think there's an upper limit to how much can be 'manualized' in this area. Good practices (e.g., solid documentation, safety planning, collaboration of care), certainly. But I think it's more about 'principles' of care than specifics of implementation (which a good clinician should determine in context in collaboration with the patient).

The most ridiculous thing our VA does is go 'hog wild' with respect to all the suicide risk management formal initiatives (esp. with respect to requiring multiple separate in-depth note templates for clinicians to fill out) while simultaneously ignoring our pleas for adequate staffing that would allow us to provide weekly cognitive-behavioral therapy to our clients (which actually WOULD most likely lower the risk of suicide for them). Right now, outside of a handful of veterans I see for cognitive processing therapy (CPT) for PTSD and a few 'high risk suicide flag' cases, I am forced to see my patients monthly (or less frequently) because we don't have adequate staffing.
 
One of JCAHO's new favorite things to force us to do is to document an "evidence-based" suicide assessment (the Columbia C-SSRS was chosen for this facility, and something worse for the one I work at much less often).

And I sort-of get why they would do that. Standardization and checklists are good things for risk managers.

But, at the end of the day, all of the "validated" suicide screening tools still conclude with "use your clinical judgement to choose one of 3/4/5 risk categories and then use your clinical judgement to manage based on that category." Also, we do a terrible job predicting/preventing suicide, so the "evidence" is probably not that the "validated" scales actually do much, anyway.

I'll let others address the rest of the topic more thoroughly, we spend so much time doing suicide risk assessments in emergency departments that I'm not interested in the topic as a whole.
 
What's your take on CAMS as a therapeutic/assessment approach? There has at least been an attempt to validate a brief intervention form that I think would fit nicely into the workflow of our ED and the frank honesty/non-adversarial approach appeals to me. But I am not a suicidologist and cannot evaluate the entire literature, so opinions of those who do more thorough assessments more often would be helpful.
I am trained in CAMS and I think it can be a very helpful approach for the right patients. The setup in our ED is not conducive to psychiatrists using this approach, however. You need two chairs, you need to sit next to the patient, and, more importantly, you need time. If you are able to spend 45 min-1 hr with a patient then you can do it. Unfortunately we get about 25 min per patient in that setting. It is a much better fit for an outpatient setting, inpatient, or a crisis clinic/urgent care-type setting. It could definitely be used in the ED if you have the right setting and the ability to spend time with the patient. I like the combined assessment-as-intervention approach and collaborating with patients to identify non-suicidal ways of coping. It is also not good for decompensated cluster B patients because of the inability to form an alliance.
 
The great lol of "evidence based" suicide risk assessment is that the evidence consistently says that it has no adequate clinical predictive value. Trying to predict acute suicide risk is about as valid as palm reading.
 
The great lol of "evidence based" suicide risk assessment is that the evidence consistently says that it has no adequate clinical predictive value. Trying to predict acute suicide risk is about as valid as palm reading.

But as with other things in our profession u still gotta do it lol
 
I am trained in CAMS and I think it can be a very helpful approach for the right patients. The setup in our ED is not conducive to psychiatrists using this approach, however. You need two chairs, you need to sit next to the patient, and, more importantly, you need time. If you are able to spend 45 min-1 hr with a patient then you can do it. Unfortunately we get about 25 min per patient in that setting. It is a much better fit for an outpatient setting, inpatient, or a crisis clinic/urgent care-type setting. It could definitely be used in the ED if you have the right setting and the ability to spend time with the patient. I like the combined assessment-as-intervention approach and collaborating with patients to identify non-suicidal ways of coping. It is also not good for decompensated cluster B patients because of the inability to form an alliance.

Our psych ED is something of an odd beast and bills like an outpatient clinic so it might actually be something that could be incorporated into our work flow. In some places David Jobes suggests that even just doing the SSF with someone side-by-side for 20 minutes might be a helpful intervention in this kind of setting.

Honestly the enormous Flintstone chairs we have in our interview rooms that you aren't meant to be able to throw might be a bigger obstacle as they are a pain in the *** to move. While we have a lot of chronically emotionally dysregulated/suicidal people come through, by the time a physician gets to them they've often spent hours sitting in a boring waiting room and are usually not acutely decompensated in that very moment and have some mentalizing capacities intact.
 
But as with other things in our profession u still gotta do it lol


Quite so. An old psych ED hand in our shop used to say that if he just immediately discharged everyone who presented complaining of SI, it might not make any difference empirically speaking in the local suicide completion rate. Obviously tongue in cheek but not entirely.

The literature is also pretty consistent in saying that our clinical interviews are inferior to using a structured instrument, to the extent we have any predictive capabilities at all, so I am glad we are having a discussion about what tools are actually feasible to use in typical practice settings.
 
What’s the point of a suicide risk assessment if it doesn’t predict suicide? Isn’t that odd? In what way does it help us?
 
What’s the point of a suicide risk assessment if it doesn’t predict suicide? Isn’t that odd? In what way does it help us?

The only real reason to document a risk assessment well is for CYA when it comes to liability.
 
Very sad situation

If I had to estimate, I'd say that at least 10-20% of my time is spent on CYA activities that serve no empirical clinical benefit to my patients. That estimate was higher when I was in the VA due to the many useless initiatives. Just the world we live and practice in, it seems.
 
The literature is also pretty consistent in saying that our clinical interviews are inferior to using a structured instrument, to the extent we have any predictive capabilities at all, so I am glad we are having a discussion about what tools are actually feasible to use in typical practice settings.
Do you happen to have a good article in mind?
 
Do you happen to have a good article in mind?
Sorry, looking for a paper to answer your question; more has been done on suicide attempts than on suicide completion. However, it is much of a piece with the superiority of statistical methods for prediction in essentially every domain of mental health.

a meta-analysis of mechanical v. clinical prediction methods in mental health: SAGE Journals: Your gateway to world-class research journals

Meehl classics: "Clinical versus Actuarial Judgment" http://meehl.umn.edu/sites/meehl.dl.umn.edu/files/138cstixdawesfaustmeehl.pdf


While there have not been an overwhelming number of studies examining suicide specifically, the burden is really on people defending expert clinical judgment at this point to show that they can beat a reasonable formula. And this is coming from someone who is adamantly opposed to the deprofessionalization of medicine.

At the end of the day we still aren't great at suicide prediction; it's just that someone with an adequate database churning the formula will do as well as or better than your gut instinct.
 
I have to admit that I'm not specifically trained in the CAMS approach (but I have purchased a recent manual and have perused it--the Managing Suicidal Risk workbook/manual). I think it's based on solid cognitive-behavioral principles but--like any particularly 'structured' protocol--there's a natural limit to how 'effective' it can be that is established by the limit to which your clients (including the veterans I work with in a post-deployment clinic setting) will respond positively to the structure vs. negatively. What frustrates me about most administrators (and even some clinicians) sometimes is their tendency to view 'suicide management' as some univariate clinical process/algorithm when it's anything but. Hopelessness/suicidality can be a 'final common pathway' endpoint with innumerable client-specific causes. In the VA system, we can be limited to monthly sessions with the vast majority of our caseloads. Therapy time is precious, especially when veterans no-show/cancel at a reasonably high rate, come late, etc.

My experience working with patients is that handling suicide risk is very much a 'multivariate' type equation which involves monitoring/intervention around not only 'suicidality' but also other primary elements of their case formulation (homelessness? major depression? psychosis? personality disorder? alcohol use? Likely to lose job and wife leave him if he is involuntarily hospitalized?). There are some things that seem 'straightforward' (in the abstract) that we 'should do' when addressing any clinical issue (including suicidality) with a generic hypothetical 'patient' but the decision of what to do isn't always straightforward with particular patient scenarios. I think there's an upper limit to how much can be 'manualized' in this area. Good practices (e.g., solid documentation, safety planning, collaboration of care), certainly. But I think it's more about 'principles' of care than specifics of implementation (which a good clinician should determine in context in collaboration with the patient).

The most ridiculous thing our VA does is go 'hog wild' with respect to all the suicide risk management formal initiatives (esp. with respect to requiring multiple separate in-depth note templates for clinicians to fill out) while simultaneously ignoring our pleas for adequate staffing that would allow us to provide weekly cognitive-behavioral therapy to our clients (which actually WOULD most likely lower the risk of suicide for them). Right now, outside of a handful of veterans I see for cognitive processing therapy (CPT) for PTSD and a few 'high risk suicide flag' cases, I am forced to see my patients monthly (or less frequently) because we don't have adequate staffing.

ME TOO. This is so validating, thank you. Not to mention the obsession with access numbers at (IMO) the cost of actual clinical care.

A lot of the VA's suicide prevention policies--although well-intended--actually reinforce suicidal behavior, too.
 
ME TOO. This is so validating, thank you. Not to mention the obsession with access numbers at (IMO) the cost of actual clinical care.

A lot of the VA's suicide prevention policies--although well-intended--actually reinforce suicidal behavior, too.


Agreed. We may need to start a separate thread to discuss. I'll leave it at this for now: with all the exasperated and faux-furious clucking and fussing in front of TV cameras by 'the big dogs' in Congress, the press, and VA administration about the problem of veteran suicide, the single most neglected issue is lack of veteran access to weekly (vs. monthly) psychotherapy. 'Evidence-based' care, indeed.
 
Suicide Risk Assessment (fill in all the below, and done)
Acute Factors: xyz
Chronic Factors: mno
Protective Factors: pqrs
Risk Assessment: Low, low/moderate, moderate/high, high, etc
Modifiable Factors: remove guns, substance sobriety, etc, etc

*Discuss things you will actually do in the Plan section

So if I document this simple and straightforward “assessment” above I will be able to say I conducted an adequate suicide risk assessment and will be safe from a liability standpoint? I guess what I’m afraid of is like an expert witness asking me why I didn’t do one of the 50 different standardized questionnaires or whatever so I just want to be extra safe to know that the above is adequate from a lawsuit perspective.
 
No one can tell you you will be safe, just like we can't tell the patients we can prevent suicide. That's just a basic format that shows you actually thought about it.

We document for three reasons: us clinicians, lawyers, and insurance company billers. Each one of these wants different things from a note.

Good luck.
 
What’s the point of a suicide risk assessment if it doesn’t predict suicide? Isn’t that odd? In what way does it help us?
In my mind, it's not just about prediction. Suicide is a big deal, and even if we don't know that we can accurately predict it, or even that our interventions will work, I think it warrants reasonable attempts to identify risk factors and try to modify the ones that are, in theory, modifiable.

Patients and their families actually don't always think of ways to get the guns or excess opioid pills out of the house. They don't think about it, or they don't want to totally chuck them, and from there don't come up with ways of retaining ownership while making access more difficult. Advising them to leave these with a trusted friend, or some such, can actually be helpful. I know of several people who felt that was a life-saving move when, in their next darkest moment, those items weren't immediately handy. That can be enough to take the wind out of someone's sails.

That's anecdotal, not predictable, and effectiveness could be argued. But if even one suicide can be prevented with just mentioning it to a patient, I think it's worth doing.

You do have patients who will at least testify to how some of these assessments and interventions help them. That's good enough for me to try to do them and see.
 
So if I document this simple and straightforward “assessment” above I will be able to say I conducted an adequate suicide risk assessment and will be safe from a liability standpoint? I guess what I’m afraid of is like an expert witness asking me why I didn’t do one of the 50 different standardized questionnaires or whatever so I just want to be extra safe to know that the above is adequate from a lawsuit perspective.

Depends on your setting. If you work in a hospital, there is probably a policy on what needs to be done depending on whether someone has stated active thoughts or not. Here, they require the Columbia. As for the documentation for other patients, as long as you detail it enough that your decision making is clear, you're probably fine. You pretty much have to follow the standard of care where you are, and have a clear rationale for why you did or did not hospitalize someone if they do report active thoughts. I used to have a bunch of templates back in the days when I did DBT work, but they have been lost to several moves and lost thumb drives.
 
So if I document this simple and straightforward “assessment” above I will be able to say I conducted an adequate suicide risk assessment and will be safe from a liability standpoint? I guess what I’m afraid of is like an expert witness asking me why I didn’t do one of the 50 different standardized questionnaires or whatever so I just want to be extra safe to know that the above is adequate from a lawsuit perspective.
I'm confused, are you a med student or, as someone said, a PGY-1? A lot of these questions should be asked of your residency supervisors who will hopefully both give you an educational answer and also one that includes the standard of practice at that facility, which none of us can do.
 
I'm confused, are you a med student or, as someone said, a PGY-1? A lot of these questions should be asked of your residency supervisors who will hopefully both give you an educational answer and also one that includes the standard of practice at that facility, which none of us can do.
It never hurts to get many opinions
 
IMHO you really can't do a good evaluation of someone's suicide risk in an ED unless there are several collateral sources of data. E.g., I've kicked likely over 5,000 patients out of the ED and none of them committed suicide, but for most of them we kept them for several hours, had several sources of data, and I had enough information to feel confident in the decision. I'd never kick someone out where suicide was on the table if we didn't have enough information.

Some EDs are set up for this. E.g. by the time the psychiatrist sees the patient the social worker or someone else like a resident has already called all the collateral sources of data that could be obtained. In other EDs they just drop this on the psychiatrist's lap. (If this is the case and you're the psychiatrist either get out of the job or have a social worker get the information. It's not laziness. It's cost-effective treatment that'll allow you to see/evaluate/treat more patients while letting others who can do that aspect of the job fine get it out of the way for you).
 
Sorry, looking for a paper to answer your question; more has been done on suicide attempts than on suicide completion. However, it is much of a piece with the superiority of statistical methods for prediction in essentially every domain of mental health.

a meta-analysis of mechanical v. clinical prediction methods in mental health: SAGE Journals: Your gateway to world-class research journals

Meehl classics: "Clinical versus Actuarial Judgment" http://meehl.umn.edu/sites/meehl.dl.umn.edu/files/138cstixdawesfaustmeehl.pdf


While there have not been an overwhelming number of studies examining suicide specifically, the burden is really on people defending expert clinical judgment at this point to show that they can beat a reasonable formula. And this is coming from someone who is adamantly opposed to the deprofessionalization of medicine.

At the end of the day we still aren't great at suicide prediction; it's just that someone with an adequate database churning the formula will do as well as or better than your gut instinct.

If only we could do an MMPI on every psych patient that came into the ED. Honestly though, I'm not against comprehensive screens to predict behaviors, as an MMPI is also going to help me uncover malingering. It's these short C-SSRS-type things that are JCAHO's new flavor of 'look, we're going to stop suicide' which begin to cloud things. It is also where I think clinical judgement/common sense gets thrown out the window, as people want to continue to utilize scales with poor PPV/NPV but that do a great job of CYA, and then they wonder why some physicians can't think their way out of a box when a presentation doesn't perfectly match their beloved scale.
 
If only we could do an MMPI on every psych patient that came into the ED. Honestly though, I'm not against comprehensive screens to predict behaviors, as an MMPI is also going to help me uncover malingering. It's these short C-SSRS-type things that are JCAHO's new flavor of 'look, we're going to stop suicide' which begin to cloud things. It is also where I think clinical judgement/common sense gets thrown out the window, as people want to continue to utilize scales with poor PPV/NPV but that do a great job of CYA, and then they wonder why some physicians can't think their way out of a box when a presentation doesn't perfectly match their beloved scale.

I'll defer to the forensic psychologist who I know routinely reads these threads but my impression from that literature is that using the MMPI as a gold standard to "detect" malingering is...fraught, at best. Have definitely seen recommendations against using the validity scales that way.

I could not agree with your feelings on short screeners more, across all domains. Whenever someone cites a MoCA score as evidence that someone is demented, or that an improvement in the PHQ-9 in a patient in a psychiatric clinic is evidence of treatment success, I die a little.
 
I'll defer to the forensic psychologist who I know routinely reads these threads but my impression from that literature is that using the MMPI as a gold standard to "detect" malingering is...fraught, at best. Have definitely seen recommendations against using the validity scales that way.

I could not agree with your feelings on short screeners more, across all domains. Whenever someone cites a MoCA score as evidence that someone is demented, or that an improvement in the PHQ-9 in a patient in a psychiatric clinic is evidence of treatment success, I die a little.

If you don't want to use the PHQ-9 as evidence for improvement, what else is there? How do we measure what we're doing?
 
I was mostly being flippant about utilizing it as a gold standard for malingering, though there are times in my current line of work when I'm bummed I didn't get more forensics in residency.

The problem with things like the PHQ-9 is they work....

In a vacuum that is not based in reality, one in which our patients don't have personality disorder traits, transitory life stressors that happened 5 minutes before filling out the PHQ, or simply a bad nightmare the night before. It's something you can use to monitor and help guide care, but the problem is that once you put these things in a hospital chart they take on a life of their own. As long as you recognize the limits of the tools we have available, there's certainly usefulness to be gained from them. Unfortunately they get treated as Gospel instead of "tools to assist."
 
In the VA system, anyone who 'maxes out' the PCL-5 questionnaire (all or almost all of the 20 PTSD sxs endorsed as 'extremely') is almost always overreporting. The most impaired folks will elevate it less than those who are 'tryin' ta git [their] 100% disability rating.'
 
PHQ-9 is fine as a screening instrument in primary care clinics. That is what it was developed for. It is not that useful for monitoring response in a psychiatric clinic. It is not clear that it does all that well screening in a psychiatric clinic either: Utility and limitations of PHQ-9 in a clinic specializing in psychiatric care
 
Ok, so what do you want to do?

Well, there's two basic ways to go from this:

1. Continue to rely on a psychometrically validated instrument to figure out if what you are doing is helpful. There are lots of these. The QIDS is pretty good. You could argue that if you are prescribing psychotropic drugs maybe use one of the skinnier Hamiltons or MADRS, since most of the medications you will be using are known to affect these scores or they wouldn't be legally sold in this country. Nice quantitative-seeming data. Nothing subjective here, look, this little graph I made proves you're getting better.

Let us call this the scientism option.

2. Attempt to inhabit some of the phenomenological space of the patient as you get to know them over several appointments where you end up talking about incredibly intimate stuff. Carefully monitor and describe to yourself changes you notice in the intersubjective field with changes in medication. Pay attention to how their life seems to be going outside of the office. Elicit their concerns about the meds and listen for known adverse effects, being aware of more unusual but not strictly idiosyncratic reactions and what they would sound like described in normal human English. Ask what they think it is doing for them, and watch for discrepancies between this and the other things they are saying and for changes that are not temporally related to changes in meds.

Let's call this the phronesis option.

One you can put in a textbook or professional society guidelines. Or test about on a board exam. The other is why we have an apprenticeship model and is operator-dependent and not especially replicable.

How you go is really up to you. In terms of quality variation, the floor and ceiling of the first approach are close together, a bungalow if you will. The second is more like a cathedral.
 
Wow very well put thx
 
The PHQ-9 was created for primary care clinics, but I believe the UW AIMS model relies fairly heavily on the PHQ-9 to assess response in their psychiatric population. One could still argue that it is a primary care clinic they are being seen in, but there's evidence it can work in certain settings to help assess response when utilized as a "piece of a whole." Once again, not a huge fan of any "quick screens," but I don't think I'd single out the PHQ-9.
 
Getting back to the original points: the best and most comprehensive analysis of the literature on the question "can we predict suicide in a clinically meaningful way?" says "definitely not," and even argues that attempting to do so may lead to more harm than benefit and recommends against the practice.


Two side notes:
1) Using SA as a proxy for suicide is terrible methodology, because SAs gleaned from medical records are often not deliberate attempts to end one's life with actual potential for harm beyond typical daily activities, and also likely include reported "attempts" by malingerers. Research where suicide is not the primary outcome is not research on suicide prediction.

2) A factor should only be described as modifiable if there is direct evidence that modifying that factor actually reduces the rate of completed suicide. For example, there is evidence that lithium reduces risk of suicide in the long term in BP1. Therefore, BP1 without lithium tx could be considered a modifiable long-term factor. Owning a gun? Unclear. I imagine most homes have several easy means of suicide (belt, Tylenol/NSAIDs/iron, knife and internet for where to stab, lethal-height buildings are available in most cities). Is owning a gun a risk factor for suicide because people who own guns have a lower threshold to kill, are more likely to have experienced violence/trauma (military, police), or because of easy access to an easy means? I suspect all three have some role, but there needs to be a direct link between removing a gun and decreasing suicide rates, driven by a randomized trial, to say it's "modifiable"; until then it's an associated factor. Curiously, gun ownership has been decreasing steadily in the US while suicides increase, and the implementation of the SAFE Act in NY has seen a leveling and very slight decrease in rates of suicide, but certainly not close to a return to the heyday of mental well-being in the early-mid 00s. I'd love to see a high quality study on these interventions if anyone has one to suggest.

Of course, I'm not suggesting that you won't lose a malpractice case when you argue that all of the science says suicide is not predictable in a meaningful way... because a dozen psychiatrists will certainly testify that all of this is "standard of care" even though it's snake oil, and juries will probably buy it.
 
Very interesting thank u for ur insight
 
The most ridiculous thing our VA does is go 'hog wild' with respect to all the suicide risk management formal initiatives (esp. with respect to requiring multiple separate in-depth note templates for clinicians to fill out) while simultaneously ignoring our pleas for adequate staffing that would allow us to provide weekly cognitive-behavioral therapy to our clients (which actually WOULD most likely lower the risk of suicide for them). Right now, outside of a handful of veterans I see for cognitive processing therapy (CPT) for PTSD and a few 'high risk suicide flag' cases, I am forced to see my patients monthly (or less frequently) because we don't have adequate staffing.
ME TOO. This is so validating, thank you. Not to mention the obsession with access numbers at (IMO) the cost of actual clinical care.

A lot of the VA's suicide prevention policies--although well-intended--actually reinforce suicidal behavior, too.

Now that the VA is a year out from implementing the new protocol, I'm looking into doing a QI project on it at some point in the next year or so. I'm guessing it'll probably show the same results that most other suicide screening tools have...
 
There are studies looking at suicide rates across states with varying levels of gun control, as well as the Israeli Army's study that showed limiting access reduced suicide rates. Granted, it's not as well controlled research as you're looking for.
 
TJC says to use an evidence-based screen, then, if positive, an evidence-based assessment. You have to do what they say. Most likely an institution will find a way for someone other than the psychiatrist to do it. What you need to do is take the data you have and document an assessment that supports your decision-making process. The mistake I see most is not really saying what you make of the evidence, e.g. "Because of XYZ, I do not believe the patient is at imminent risk of suicide warranting inpatient hospitalization. Because of risks XYZ, I have referred the patient to an intensive outpatient program." People will often list the evidence but forget some version of the "because."
 
This would be like TJC recommending evidence-based pancreatic cancer screening on everyone who comes to a hospital, and, if positive, an evidence-based pancreatic cancer assessment. The "evidence based" screening is not actually screening.

I am generally immensely proud to be a psychiatrist, but in this particular case, our refusal as a profession to say what every other branch of medicine says (that if a test lacks a clinically meaningful PPV or NPV that has been shown to alter outcomes, we should not recommend the test) is quite embarrassing.
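The base-rate arithmetic behind that complaint is worth making explicit. Here is a minimal sketch of the PPV/NPV calculation via Bayes' rule; the 90%/90% operating characteristics and 0.5% prevalence are illustrative assumptions, not published figures for any particular instrument:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive value via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative: a screen with 90% sensitivity and 90% specificity,
# applied where the outcome's base rate is 0.5%.
ppv, npv = ppv_npv(0.90, 0.90, 0.005)
print(f"PPV = {ppv:.1%}, NPV = {npv:.2%}")  # → PPV = 4.3%, NPV = 99.94%
```

Even with generous accuracy, the PPV collapses at low base rates: the overwhelming majority of positives would be false, which is the pancreatic-cancer-screening point above.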
 
The PHQ-9 is very useful and efficient because it very quickly sums up depression, its severity, and the individual symptoms of depression, right there for you, instead of asking point for point how the patient is doing. It condenses a DSM-style depression evaluation into about 2 minutes that would've taken more than 10 minutes in a more tedious interview where you're not listening open-ended but asking very direct, close-ended questions about each DSM-5 sx.

It's not useful in suicide evaluations.

Why?

When it comes to admitting someone to inpatient, what I call the "rule of opposites" applies. People who really want to commit suicide aren't going to want to go to the hospital, and anyone with an average IQ or better, after reading the PHQ-9, can easily figure out that a higher score (especially in the section where it asks if you think life isn't worth living) is going to be a clincher to keep that person in, while someone really wanting help is less likely to commit suicide and will honestly write how terrible their depression is.

Self-report scales are great in patients who want to get better and want to be honest with you. Patients who are going to be honest with you will usually tell you upfront if they are truly suicidal. The PHQ-9 will not somehow reveal the truth about someone trying to fake their desire to get help.

With some psych screens or tests, the evaluee won't be able to make heads or tails of what the numbers mean. Not true with the PHQ-9. It'd take most people mere seconds to tell that a higher score means more depression, and they may use that score to manipulate the clinician. In such situations expect malingerers to put the maximum score or close to it, with some perfunctory 2s instead of 3s so as not to look like complete BS, while a truly suicidal person who doesn't want to get admitted, so they can commit suicide, will under-report their depression.

The rule of opposites I've found to be very useful in ER evaluations. If a patient has poor insight, do the opposite of what they want. If a guy presents as very psychotic and tells you he's fine and wants to be released, you don't release him. If a guy presents as very psychotic and wants to go into the hospital, suspect malingering.

I've found the PHQ-9 to be very useful in patients with no to moderate depression who come to my outpatient office, but it only leads to the beginning, not the ending, of an evaluation. Patients coming to private practice most of the time want to get better. They're sacrificing time out of their day, and losing time out of work or an otherwise busy day. The ER is very different. Many people there have nothing to lose, whether it comes to suicide or malingering. You don't tend to see well-to-do people malinger, because they'd rather get a hotel room or stay with a friend, and they know a hospital stay will be very expensive, while someone without a roof usually has no friends who will house them, won't end up paying the bill when they get out, and/or really is going to commit suicide, because they've got nothing to lose.
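The transparency problem described above is easy to see from the instrument's mechanics: nine items each scored 0-3, summed, mapped to the standard published severity cut points (5/10/15/20). A minimal scoring sketch; the function name and the example responses are my own, not part of the instrument:

```python
def phq9_score(items):
    """Sum nine 0-3 PHQ-9 item responses and map the total to the
    standard severity band (cut points at 5, 10, 15, 20)."""
    if len(items) != 9 or any(not 0 <= i <= 3 for i in items):
        raise ValueError("PHQ-9 takes nine responses, each scored 0-3")
    total = sum(items)
    bands = [(4, "minimal"), (9, "mild"), (14, "moderate"),
             (19, "moderately severe"), (27, "severe")]
    band = next(label for upper, label in bands if total <= upper)
    return total, band

# Item 9 is the thoughts-of-death/self-harm item discussed above.
print(phq9_score([3, 3, 2, 2, 3, 2, 2, 3, 1]))  # → (21, 'severe')
```

Anyone filling it out can see at a glance which answers drive the total, which is exactly why a motivated respondent can steer the score in either direction.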
 
using the MMPI as a gold standard to "detect" malingering is...fraught, at best.

The MMPI: we psychiatrists aren't trained in using it, and in settings like the ER we won't have the time to have one done. Further, while the MMPI does have some data on detecting malingering, that's not what the test is made to do. There are other and better tests out there for malingering, such as the SIRS or M-FAST, but even with them you cannot form an opinion based on the test alone.
 