fMRI predicting suicidal ideation

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

WisNeuro

Board Certified in Clinical Neuropsychology
15+ Year Member
Joined
Feb 15, 2009
Messages
18,009
Reaction score
23,746
Gonna be a slow day, here's an article for discussion. I'll hold off on my take for now until people get a chance to read.

Brain Patterns May Predict People At Risk Of Suicide

pdf of the source article is also available.

Members don't see this ad.
 
This is sorta like the old 5 scale. Asking directly seems like it would be more useful. But if someone wants to hide things from others, you're probably not going to get it out of them.

I know if I was actively suicidal, getting into an fMRI wouldn't be on my to do list.
 
  • Like
Reactions: 5 users
This is sorta like the old 5 scale. Asking directly seems like it would be more useful. But if someone wants to hide things from others, you're probably not going to get it out of them.

I know if I was actively suicidal, getting into an fMRI wouldn't be on my to do list.

Yeah, my biggest takeaways were 1) I imagine simply asking someone if they're suicidal takes about 5 seconds and doesn't require an fMRI costing thousands of dollars, and 2) they used a kind of super control group; how do we know that these areas won't light up in people with any kind of elevated mood symptomatology? What's the specificity, or are we just picking up general psychopathology/big N?
 
  • Like
Reactions: 4 users
I dunno, distinguishing attempters from non-attempters would be useful. We're good at predicting ideation but not actual attempts.
 
  • Like
Reactions: 1 users
Interesting premise. We can ask, but despite what we like to think... our ability to predict attempts is pretty terrible. We have decent prediction of ideation, but it's also incredibly common and not terribly actionable. The state of the art really just seems to be identifying those at > 0% risk and safety planning.

That said, this is the usual media over-hype that the lay public (and unfortunately, some clinicians) abuse out of ignorance, stupidity, or greed. This is an exciting project with reasonably strong results as these sorts of things go. It could lead somewhere 5, 10, 20 years down the line. Not every study is or should be done with the idea that it can be put into practice tomorrow; despite what the public thinks, science is an incremental process.

If we want to talk methodological problems, the control group should be the least of your concerns. This wasn't even prospective prediction.
 
  • Like
Reactions: 1 user
If we want to talk methodological problems, the control group should be the least of your concerns. This wasn't even prospective prediction.

Yes, not the largest methodological hurdle, but the super-control issue is probably the most salient across clinical research utilizing imaging. It's why people think DTI is some magic bullet for mTBI "diagnosis," despite the fact that we can see similar patterns in individuals suffering from TMJ.
 
  • Like
Reactions: 1 user
The potential for imaging to be used clinically holds some serious allure. There are a ton of hurdles, including the practical (why make someone spend that much when you can just ask) and the feasible (imaging just isn't good enough at distinguishing function to trust it for any type of decision yet, even if money weren't an issue). Either way, I like seeing these as proof of concept and look forward to seeing them develop more. I'm not sure they'll ever (or before my flying car) become of reasonable clinical utility, but I do think it promotes some great advances in understanding.
 
This is all academically interesting but I think the importance is more for theory than for clinical practice. Once you introduce non-squeaky clean controls and compare against human clinician raters the results may seem less impressive. Anyway, I don't know what to make of the fact that the researchers criticize self-report methods of identifying suicide risk but define their test population by their willingness to self-disclose suicidal ideation. What they're predicting isn't really risk.

In addition to practicality and feasibility there is the cost-effectiveness aspect of using imaging for psych treatment planning purposes. It will take more than a tiny incremental benefit to justify such an expensive test (compared to the alternative possibility of overscreening/overtreating). Will the cost of an MRI be so much less in 20 years that payers will go for it? Especially if the cost of human clinician time continues to stagnate? I realize no one can answer these questions conclusively, but no one even seems to bother asking them.
 
Agree 100% we're at the theory stage here. Nothing wrong with trying these sorts of things out... that's what science is for. Heck, it may turn out it's useless for prediction but informs a targeted TMS intervention or something else we can't currently envision down the line.

Cost effectiveness is an interesting issue. Clearly we aren't there yet, but I also think people believe MRI is more expensive than it really is. Right now, I pay $480/hour for a 3T scanner. I'd ballpark their task at ~30 minutes, even including a structural for registration. It would almost certainly be upcharged in a clinical context, but with tech advances it's not entirely unreasonable that it could drop into a range of cost effectiveness, given we are dealing with rare, hard-to-predict events with catastrophic consequences. It's not clear who could read the scan (psychiatry? neuroradiology? the IT guy who runs the machine learning algorithm? We're in uncharted territory here...). Definitely worth thinking about, but I also think it would be silly to stop research on this topic just because it's not cost effective at this time.

All that said, I think Insel's new company using mHealth to diagnose is going to produce far more yield than fMRI in the long-term.
 
Cost effectiveness is an interesting issue. Clearly we aren't there yet, but I also think people believe MRI is more expensive than it really is. Right now, I pay $480/hour for a 3T scanner. I'd ballpark their task at ~30 minutes, even including a structural for registration. It would almost certainly be upcharged in a clinical context, but with tech advances it's not entirely unreasonable that it could drop into a range of cost effectiveness, given we are dealing with rare, hard-to-predict events with catastrophic consequences. It's not clear who could read the scan (psychiatry? neuroradiology? the IT guy who runs the machine learning algorithm? We're in uncharted territory here...). Definitely worth thinking about, but I also think it would be silly to stop research on this topic just because it's not cost effective at this time.

Who has suggested that the research should be stopped? If one can speculate about clinical applications, one can also speculate about costs.

If the intent is to apply the technology in a clinical setting (e.g., as opposed to developing a basic research methodology), it's totally reasonable to think about costs as a consideration for further research. The objective cost matters less than the cost relative to alternative screening methods. If current cost-effectiveness modeling trends hold, the difference between MRI-based screening ($480/hour plus the cost of reading/interpretation time) and the next most expensive screening method needs to come in at under $50K-100K per life-year saved (and realistically speaking, probably the low end of that range). Maybe it could, who knows.
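To make the arithmetic concrete: the $480/hour rate and ~30-minute task come from the post above, but the interpretation fee, the number screened, and the life-years saved below are invented numbers, purely to show how the per-life-year threshold gets computed.

```python
# Back-of-envelope cost-effectiveness sketch. Scanner rate and task length
# are from the discussion above; everything else is a made-up assumption.

SCAN_RATE_PER_HOUR = 480.0   # 3T scanner research rate, per the thread
SCAN_MINUTES = 30            # ballpark task length incl. structural
READ_COST = 200.0            # hypothetical interpretation fee

def cost_per_screen():
    """Cost of one fMRI-based screening session."""
    return SCAN_RATE_PER_HOUR * SCAN_MINUTES / 60 + READ_COST

def cost_per_life_year(n_screened, life_years_saved):
    """Incremental cost per life-year saved (ignoring the cheaper
    comparator of simply asking). Returns None if nothing is saved."""
    if life_years_saved <= 0:
        return None
    return n_screened * cost_per_screen() / life_years_saved

# Hypothetical scenario: screen 10,000 people, save 40 life-years.
print(cost_per_screen())               # 440.0 per scan
print(cost_per_life_year(10_000, 40))  # 110000.0 -> above the $50-100K bar
```

Under these (invented) numbers the scan itself isn't the problem; the cost per life-year is driven almost entirely by how many people must be screened per event prevented, which is where the low base rate bites.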

Cool study here, btw: http://ps.psychiatryonline.org/doi/abs/10.1176/appi.ps.201600351
 
I am surprised this article is getting so much mainstream media press, since as a clinical tool it seems a step behind the article that came out in April
SAGE Journals: Your gateway to world-class journal research

I also don't really understand how "machine learning" is executed in these studies. As I understand it, the outcomes are provided to the program, which creates a model based on the possible predictors. However, we haven't seen a truly predictive example where researchers actually follow individuals over time to see if the models are accurate. I fear we will never actually see a study like this.
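For what it's worth, the "prediction" these papers report is usually cross-validated classification within a single, already-collected sample, not prospective follow-up. A toy sketch of that pipeline is below; the nearest-centroid classifier and the data are invented for illustration and are not the study's actual features or model.

```python
# Minimal sketch of what "machine learning prediction" typically means in
# these studies: leave-one-out cross-validation inside one existing sample.
# This is retrospective classification of known outcomes, not prospective
# prediction of future attempts.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    hits = 0
    for i, (x, y) in enumerate(zip(features, labels)):
        train = [(f, l) for j, (f, l) in enumerate(zip(features, labels)) if j != i]
        cents = {lab: centroid([f for f, l in train if l == lab])
                 for lab in set(l for _, l in train)}
        guess = min(cents, key=lambda lab: dist2(x, cents[lab]))
        hits += guess == y
    return hits / len(labels)

# Toy "activation" features for two groups (ideators vs. controls, say):
X = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1], [3.0, 3.1], [2.9, 3.0], [3.1, 2.9]]
y = [0, 0, 0, 1, 1, 1]
print(loo_accuracy(X, y))  # 1.0 on these well-separated toy clusters
```

The catch the post identifies is real: high leave-one-out accuracy on a curated sample says nothing about how the model would fare on new individuals followed forward in time.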

Scientifically this is a very interesting idea but clinically it seems unlikely to work.
 
Future scenario: Patient denies suicidal ideation. fMRI indicates risk. Should we involuntarily hospitalize?

This is kind of a tangent, but as someone who has been evaluating patients for emergency detentions and involuntary hospitalizations over the last several years, I have serious concerns about the tendency to take away people's basic civil or legal rights based on future risk, and about the justification (mandated or coerced treatment) for doing so. One reason I feel this way is that resources are so limited that I can't get people who have some motivation to change into treatment, because the system is overloaded with the revolving door of so many people who are still stuck solidly in the first stage of change. More and more, I am in favor of letting people who are obviously at high risk leave against my recommendation as opposed to pursuing involuntary commitment.

A second thought about some of this is that much of what we are really dealing with are greater sociocultural problems, and that looking to technology to predict behavior is more a symptom of our society's unhealthy, fear-based control dynamics than a realistic hope for improvement.
 
  • Like
Reactions: 4 users
Future scenario: Patient denies suicidal ideation. fMRI indicates risk. Should we involuntarily hospitalize?

I had the same thought. Makes a good premise for a dystopian sci-fi story.

A second thought about some of this is that much of what we are really dealing with are greater sociocultural problems, and that looking to technology to predict behavior is more a symptom of our society's unhealthy, fear-based control dynamics than a realistic hope for improvement.

This. Shifting finite resources from modifying risk factors to detecting actuarial risk seems like the wrong way to go.
 
  • Like
Reactions: 1 users
I had the same thought. Makes a good premise for a dystopian sci-fi story.



This. Shifting finite resources from modifying risk factors to detecting actuarial risk seems like the wrong way to go.
My favorite philosopher of science (Philip Kitcher) published one of the most overlooked masterpiece rebuttals of biological reductionism in his essay, "1953 and All That: A Tale of Two Sciences." Also, during the 'Decade of the Brain' (and before), how many gazillions of research dollars were spent in the ultimately futile search for biological markers that would distinguish between so-called 'endogenous' vs. 'reactive' (environmentally-induced) subtypes of major depressive disorder? Read some of the published work of, say, Donald Klein (big-time biological reductionist) from the 1990s and see just how few of the predictions for the biological revolution panned out.
 
My favorite philosopher of science (Philip Kitcher) published one of the most overlooked masterpiece rebuttals of biological reductionism in his essay, "1953 and All That: A Tale of Two Sciences." Also, during the 'Decade of the Brain' (and before), how many gazillions of research dollars were spent in the ultimately futile search for biological markers that would distinguish between so-called 'endogenous' vs. 'reactive' (environmentally-induced) subtypes of major depressive disorder? Read some of the published work of, say, Donald Klein (big-time biological reductionist) from the 1990s and see just how few of the predictions for the biological revolution panned out.
I'll have to check those out, but at the same time there is something to be said for some biological predictors. If the presumption is that medical science can advance to a point where it is useful in discriminating and identifying problems, why wouldn't we make decisions based on that? It seems unreasonable that we should ignore everything but self-report in the future, if technology were to advance. Doctors don't ask 'how do you feel your blood pressure is'... they measure it. Not that I think this is close to happening, but still.

And now back to putting together a table for this paper showing how bad we are at detecting truthful responding....
 
I'll have to check those out, but at the same time there is something to be said for some biological predictors. If the presumption is that medical science can advance to a point where it is useful in discriminating and identifying problems, why wouldn't we make decisions based on that? It seems unreasonable that we should ignore everything but self-report in the future, if technology were to advance. Doctors don't ask 'how do you feel your blood pressure is'... they measure it. Not that I think this is close to happening, but still.

And now back to putting together a table for this paper showing how bad we are at detecting truthful responding....
Of course...I totally agree...IF...we could.
 
I'll have to check those out, but at the same time there is something to be said for some biological predictors. If the presumption is that medical science can advance to a point where it is useful in discriminating and identifying problems, why wouldn't we make decisions based on that? It seems unreasonable that we should ignore everything but self-report in the future, if technology were to advance. Doctors don't ask 'how do you feel your blood pressure is'... they measure it. Not that I think this is close to happening, but still.

And now back to putting together a table for this paper showing how bad we are at detecting truthful responding....
I should clarify that I am not--by any means--a fanatical anti-reductionist. I just think that Kitcher's essay is a phenomenally cogent exploration of the ways that reductionism (in chemistry/biology... but his arguments are certainly applicable to the biology/psychology interface--perhaps more so) can break down when you're trying to reduce Theory A (say, involving phenomena at a higher level of abstraction) to Theory B (at a lower level of abstraction). In the essay, he explores attempts to (fully) reduce classical genetics to molecular genetics, and the difficulties inherent in intertheoretic relations (and the attendant need for 'bridge theories,' among other contortions). There's a big difference between reducing psychological/behavioral phenomena to biology *in principle* vs. *in practice* and, in principle, I am opposed to neither... I just believe that it's very easy to underestimate the difficulties inherent in doing so. Heck... we can't even predict what the weather will be at a particular geographic location, say, 14 days from now.
 
Last edited:
  • Like
Reactions: 1 user
IMO, the public has not only accepted but heavily preferred a weird dichotomy between brain and mind. It creates an escape from personal responsibility to some degree. Psychology has jumped on board, and it's not necessarily a good thing.

If you really wanted to predict who was suicidal, it would probably be faster, smarter, more accurate, etc., to use web resources. How long would it take Google to write a script to scrape public health databases for "suicide", correlate with address/IP address, use available search history to create a predictive algorithm, and repeat a few hundred thousand times?

Actually, there's probably much much more advanced stuff out there considering the Holosonic thing 10 years ago.

Remember how Target was able to predict if a woman was pregnant before she even knew? Through an automated process? Five years ago?

How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did
 
  • Like
Reactions: 4 users
I'll have to check those out, but at the same time there is something to be said for some biological predictors. If the presumption is that medical science can advance to a point where it is useful in discriminating and identifying problems, why wouldn't we make decisions based on that?

It's not an all-or-nothing issue, but a matter of emphasis. And actually, it is a presumption worth examining. I don't think we should take as a given the idea that biological markers could have superior PPV for complex, motivated behaviors. Study it, for sure, but not with blind optimism.

Doctors don't ask 'how do you feel your blood pressure is'... they measure it.

This is an interesting example (a) because of the "white coat effect" and other sources of measurement error; (b) because if your BP is high you get a diagnosis of... high BP, which is mainly a risk factor, a marker, or both for a number of diseases, and actionable only in context (age, symptoms, history, presence of other risk factors, etc.). Don't confuse measurement precision with information.

If you really wanted to predict who was suicidal, it would probably be faster, smarter, more accurate, etc., to use web resources. How long would it take Google to write a script to scrape public health databases for "suicide", correlate with address/IP address, use available search history to create a predictive algorithm, and repeat a few hundred thousand times?

Someone did something like this with Twitter: http://www.jad-journal.com/article/S0165-0327(14)00536-9/abstract
 
  • Like
Reactions: 1 user
It's not an all-or-nothing issue, but a matter of emphasis. And actually, it is a presumption worth examining. I don't think we should take as a given the idea that biological markers could have superior PPV for complex, motivated behaviors. Study it, for sure, but not with blind optimism.
It seems like an empirical question when the time comes, for sure. My point was only that if we are to continue to grow as a health science with a medical basis, there are some underlying assumptions that we need to be able to make as a field. The mind/body connection is one of those, and so too is the idea that if we can advance technology sufficiently to measure one, then we will be able to measure the other. This is far from near, but it is an assumption that should promote study.

Besides, assuming self-report would be intrinsically more accurate seems just as problematic, given what poor reporters we are and how bad/inaccurate our memories are.
 
Interesting.

I heard about a much better assessment option for predicting suicide... and this one was from a writing sample, through the utilization of machine learning. Essentially, you would get a client to write about something, and certain words would be classified as being related to risk... and apparently it is able to predict with something like 95% accuracy.
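A rough sketch of what a word-feature approach like that might look like is below; the lexicon, the scoring rule, and the example text are entirely made up and are not reproduced from any study (and the 95% figure would still face the base-rate problems discussed above).

```python
# Hypothetical sketch of a word-feature risk scorer: count how many tokens
# in a writing sample fall in a risk lexicon. Lexicon is illustrative only;
# real systems use validated features and a trained classifier, not this.

RISK_LEXICON = {"hopeless", "burden", "goodbye", "trapped"}  # made-up list

def risk_score(text):
    """Fraction of tokens appearing in the risk lexicon (0.0 to 1.0)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(t.strip(".,!?") in RISK_LEXICON for t in tokens)
    return hits / len(tokens)

print(risk_score("I feel hopeless and trapped"))  # 2 of 5 tokens -> 0.4
print(risk_score("sunny day today"))              # 0.0
```

Even granting the accuracy claim, a score like this is only as good as the lexicon and the sample it was validated on, which is the same self-disclosure circularity raised earlier in the thread.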
 
  • Like
Reactions: 1 user
Let's dust off this old thread...

This article has been retracted

Some background:

I try to be skeptical but this stuff makes me cynical.
 
  • Wow
  • Like
Reactions: 3 users
Let's dust off this old thread...

This article has been retracted

Some background:

I try to be skeptical but this stuff makes me cynical.
Thanks for the update. I wish the “Matters Arising” were open access so we could get a better sense of what the issues were and how they came to light.
 
  • Like
Reactions: 1 user
fMRI is not useful for predicting behavior. Especially something as nebulous as depression or as specific as SI. Doesn't help that the same handful of brain structures subserve pretty much all behaviors.

You'd have better luck speaking frankly about SI and building enough rapport for the patient to reveal such things.

Alternatively, you could hook 'em up to a lie detector. Not that a lie detector works either, but the psychological threat elicits more honest responses.
 
  • Like
Reactions: 1 users
This is sorta like the old 5 scale. Asking directly seems like it would be more useful. But if someone wants to hide things from others, you're probably not going to get it out of them.

I know if I was actively suicidal, getting into an fMRI wouldn't be on my to do list.




EDIT: didn't realize this was a necrobump
 
Last edited:
"Our computer algorithm tells us that after dropping a $600 quarter into our machine it says that your risk of completing suicide in the next year has TRIPLED from 1 in 100,000 to 3 in 100,000."

{patient blinks}

{provider blinks}

"Now let's talk about that safety plan."

Truly revolutionary
 
The funny thing is we are very good at predicting SI, but suck at predicting the thing that actually matters (suicidal behavior).
 
Last edited:
  • Like
Reactions: 3 users
The funny thing is we are very good at predicting SI, but suck at predicting the thing that actually matters (suicidal behavior).
Part of the problem is that religious zealots without clinical caseloads, posing as "Champions/Avatars of Mithra" or whatever, don't understand some crucial points:

(1) clinicians cannot predict the future; they can only practice responsibly and adhere to standards of care/practice around SI
(2) low base rate phenomena (e.g. 1 in 10,000) happen rarely and base rates have to be taken into account
(3) most of the excessive crap that the Church of Suicide Prevention mandates isn't empirically supported (directly) by the literature and may even be iatrogenic/ counterproductive
(4) the Holy Crusade to End Suicide Forever makes about as much sense as a crusade to end hurricanes, traffic accidents or light emanating from that glowing globe in the sky 'forever'; a megalomaniacal delusion of grandeur ("WE WILL END SUICIDE, ZERO SUICIDE FOREVER!!! I WILL DO THIS!!!") cannot be a serious organizational goal that is achievable; nothing we are doing "will ever be enough" and I am hoping to be retired before they start hogtying providers and tossing them into active volcanos.
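Point (2) above can be made concrete with Bayes' rule: even an implausibly good test collapses at a 1-in-10,000 base rate. The 95% sensitivity/specificity figures in this sketch are hypothetical, far better than anything on offer.

```python
# Base-rate illustration: positive predictive value of a hypothetical
# "suicide risk" test with 95% sensitivity and 95% specificity applied
# to an event with a 1-in-10,000 base rate.

def ppv(sensitivity, specificity, base_rate):
    """P(event | test positive), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

p = ppv(0.95, 0.95, 1 / 10_000)
print(f"{p:.4%}")  # ~0.19%: over 99% of flagged patients are false positives
```

In other words, even a near-miraculous instrument would flag hundreds of false positives for every true positive, which is why base rates have to drive any talk of screening or mandates.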
 
Last edited:
  • Like
Reactions: 3 users
Part of the problem is that religious zealots without clinical caseloads, posing as "Champions/Avatars of Mithra" or whatever, don't understand some crucial points:

(1) clinicians cannot predict the future; they can only practice responsibly and adhere to standards of care/practice around SI
(2) low base rate phenomena (e.g. 1 in 10,000) happen rarely and base rates have to be taken into account
(3) most of the excessive crap that the Church of Suicide Prevention mandates isn't empirically supported (directly) by the literature and may even be iatrogenic/counterproductive
(4) the Holy Crusade to End Suicide Forever makes about as much sense as a crusade to end hurricanes, traffic accidents or light emanating from that glowing globe in the sky 'forever'; a delusion cannot be a serious organizational goal that is achievable; nothing we are doing "will ever be enough" and I am hoping to be retired before they start hogtying providers and tossing them into active volcanos.

Just ask them: what differentiates suicide from euthanasia?
 
  • Like
Reactions: 1 user