Reconsolidation of Traumatic Memories (RTM) for PTSD?

Paywalled, but if it's what I'm thinking, I actually have an R01 that touches on this space (albeit not for PTSD). There is scientific merit to the idea, but it is a LONG way off from being ready for prime time (which of course will do nothing to deter the charlatans).

I'm having a tough time getting around the paywall, so I might be completely mistaken about what this one is about. If you have a way to send more details, let me know and I can give a better answer.
 
Not precisely what my colleagues and I are pursuing, and I'd never heard of this particular protocol, but it's certainly related.

It is certainly neurobiologically/cognitively plausible. We've known for decades that memory is a surprisingly malleable process, but the specifics of what drives that and how best to influence it remain hotly contested. Exposure works great for some things (anxiety), terribly for others (substance use). Habituation is the presumed mechanism for a lot of what we do across disorders, but "how" habituation actually operates is still wildly unclear. There is some very interesting computational work that essentially suggests habituation doesn't work the way we think it works - it is far removed from the clinical space, but I "think" a key implication of that work is that traditional exposure may be effective but not necessarily the MOST effective way to approach things.
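For anyone who wants the mechanism language grounded, the textbook first-pass formalism here is Rescorla-Wagner-style prediction-error learning, where repeated unreinforced exposure drives the learned prediction toward zero. A minimal sketch of that classic simplification (emphatically NOT the computational work I'm alluding to, which argues the real story is messier):

```python
# Textbook Rescorla-Wagner update applied to extinction. This is the
# classic simplification only, not the computational work referenced above.
def rescorla_wagner(v0=1.0, alpha=0.3, lam=0.0, trials=10):
    """Associative strength V across extinction trials (lam=0: no US)."""
    v = v0
    history = [v]
    for _ in range(trials):
        v += alpha * (lam - v)  # prediction error drives the update
        history.append(v)
    return history

# The learned fear prediction decays geometrically toward zero:
print([round(v, 3) for v in rescorla_wagner()])
```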

All that said, until a large RCT shows this is more effective than PE, I'm not inclined to make too much of it. Lots of fun basic science questions to ask about mechanisms that may change the essence of how we do therapy 20 years from now, but none of the folks I know doing work in this space would recommend integrating it into practice at this point.
 
I only briefly glanced at the article and very casually glanced at some materials after Googling "Reconsolidation of Traumatic Memories," and it looks like it's just brief imaginal exposure...? Can someone correct me?
 
This was an interesting read over lunch. TL;DR yes, there are some RCTs that show efficacy, but they're GRADEd low to very low quality due to non-randomness, limited populations, failure to document protocols clearly, failure to follow up on treatment effects, etc. So, interesting, maybe, but hey, we do know that it definitely helped at least one person to publish their essay in The Atlantic.

I only briefly glanced at the article and very casually glanced at some materials after Googling "Reconsolidation of Traumatic Memories," and it looks like it's just brief imaginal exposure...? Can someone correct me?

Seems that way; one mechanism identified thus far is thought to be basically graduated extinction.
 
This was an interesting read over lunch. TL;DR yes, there are some RCTs that show efficacy, but they're GRADEd low to very low quality due to non-randomness, limited populations, failure to document protocols clearly, failure to follow up on treatment effects, etc. So, interesting, maybe, but hey, we do know that it definitely helped at least one person to publish their essay in The Atlantic.

Some of the non-combat A1s in there are...interesting. I don't really trust any combat-related US research after working and researching in the VA, and the other groups have populations that rarely show up for treatment, so it's hard to know what that meta really means given that and the other limitations.
 
I don't follow. Why would that make you doubt the findings of the meta, which anyone who can moderately code in R could easily reproduce?
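For concreteness, the core pooling computation behind a meta like this is only a few lines. Here's a minimal sketch of DerSimonian-Laird random-effects pooling in Python (the published analysis would presumably be in R, and the effect sizes below are hypothetical placeholders, not the paper's data):

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling, the kind of
# computation a meta-analysis like this reports. Inputs are hypothetical
# SMDs and sampling variances, NOT values from the paper.
import numpy as np

def dersimonian_laird(yi, vi):
    """Pool effect sizes yi with sampling variances vi; returns (SMD, SE, tau^2)."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                                  # fixed-effect weights
    y_fe = np.sum(w * yi) / np.sum(w)             # fixed-effect estimate
    Q = np.sum(w * (yi - y_fe) ** 2)              # Cochran's Q (heterogeneity)
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(yi) - 1)) / C)      # between-study variance
    w_re = 1.0 / (vi + tau2)                      # random-effects weights
    smd = np.sum(w_re * yi) / np.sum(w_re)
    return smd, np.sqrt(1.0 / np.sum(w_re)), tau2

# Hypothetical study-level inputs:
smd, se, tau2 = dersimonian_laird([-1.2, -0.9, -0.1, 0.05],
                                  [0.20, 0.25, 0.04, 0.03])
print(f"pooled SMD = {smd:.2f}, SE = {se:.2f}, tau^2 = {tau2:.2f}")
```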
 
Worth noting that a lot of the meta was pharmacologic manipulations, which is a whole other ballgame (and actually more aligned with what I'm doing). For whatever it's worth, I think interfering with reconsolidation has merit but is incredibly complicated. This is especially true with drugs, because the pharmacokinetics/dynamics of many of them are wacky. If you believe the pre-clinical data, it can actually have wildly different effects based on nuanced differences in dose timing. Aside from being wildly impractical in clinical settings ("If you only got raped 15 minutes later we could help you, but I'm afraid it's too late now"...), the variability in metabolism rates across humans poses a massive problem. I'm actually targeting a different memory process for precisely this reason. This literature started with a lot of space-age pseudoscience where "Researchers could erase your memories by giving you a single innocuous dose of a medication in the ER after an accident/injury/assault/etc." and I think the literature - not surprisingly - clearly tilts against that being effective. I do think there is at least "some" potential we can land on a mix of pharmacological and behavioral techniques that allow us to improve outcomes by targeting retrieval/reconsolidation processes directly. Trying not to completely dox myself (even though probably half of you know me by name🤣), but my grant is probing the neural mechanisms underlying some of this using a pharmacologic agent for a different condition. I remain quite convinced there is something to this line of work.

I do want to be clear, though. Even as someone doing work in this space and advocating for its value, I do not for a second delude myself into thinking this is as exciting as the popular press makes it out to be. The most likely successful outcome is that we give you a pill and apply what is really a slightly tweaked version of the current PE protocol for (maybe) a 15% improvement in efficacy.

Absolutely meaningful work I stand behind, but not "We will magically erase your traumas using space lasers" sexy.
 
True, though it seems that partitioning out RTM significantly dropped the SMD of all reconsolidation interventions to nearly negligible (Fig. 3c and the discussion on p. 10). I'm not an expert in this, so I'd be curious to hear your thoughts as to why that might be.
 
I don't follow. Why would that make you doubt the findings of the meta, which anyone who can moderately code in R could easily reproduce?
I can't speak for WisNeuro, but my guess would be concerns regarding the validity of the diagnoses and presentations, confounding influences from secondary gain, etc. Probably not concerns with the actual analyses themselves.
 
Yes, I am not doubting the analyses; I am skeptical of the data that go into the analyses. I definitely think it warrants further study, with better methodology, but I wouldn't call it a success based on this meta alone.
 
The study authors reached the same conclusion about the quality of the studies, hence the low GRADE of many of the RCTs.
 
True, though it seems that partitioning out RTM significantly dropped the SMD of all reconsolidation interventions to nearly negligible (Fig. 3c and the discussion on p. 10). I'm not an expert in this, so I'd be curious to hear your thoughts as to why that might be.
Disclaimer: I'm not remotely an expert in RTM; I'm not even sure I'd consider myself an expert in the general topic of memory-focused interventions, despite having funding in that area.

I didn't dig into the underlying articles, but I think you may be overreading the results. First - just to clarify - did you mean Fig 2E vs 3C? Unless I'm missing something myself, that is the one showing the RTM interventions. It is basically 3 teeny-tiny pilots and one medium-sized pilot that all showed strong effects but come from a single research group and all have substantial risk of bias. Of course dropping them will reduce the SMD when everything else was basically null to begin with. Does it work better than the others? Maybe, but I'm not remotely convinced based on this.
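To make that concrete, here's a toy illustration with hypothetical numbers and simple inverse-variance pooling (not the paper's exact model): a few small trials with large effects pull the pooled SMD well away from zero, and dropping them collapses it.

```python
# Toy illustration: a few small trials with big effects sit alongside
# larger, essentially-null studies; drop the small ones and the pooled
# SMD collapses. Numbers are hypothetical, not from the paper.
import numpy as np

def pooled_smd(effects):
    """Simple inverse-variance pooling of (SMD, variance) pairs."""
    y, v = map(np.asarray, zip(*effects))
    w = 1.0 / v
    return float(np.sum(w * y) / np.sum(w))

pilots = [(-1.3, 0.30), (-1.1, 0.28), (-1.2, 0.35)]    # small, strong effects
others = [(-0.10, 0.05), (0.02, 0.04), (-0.05, 0.06)]  # bigger, ~null

print("all studies:   ", round(pooled_smd(pilots + others), 2))  # ~ -0.20
print("pilots dropped:", round(pooled_smd(others), 2))           # ~ -0.04
```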

This is actually a good example of why I'm generally not a huge proponent of meta-analysis and favor systematic reviews in most cases. It has its place (I've published them myself) as a piece of the puzzle, but it is easy to lose sight of critical characteristics in individual studies. It basically amounts to "reading tea leaves" once we throw all the methodological nuance out the window and distill things to sample size and effect size. Not that I'm saying you were approaching it like that, just a tangential rant on my part.
 
This is one of the big reasons people misunderstand and over-interpret Wampold's work.
 
I didn't dig into the underlying articles, but I think you may be overreading the results. First - just to clarify - did you mean Fig 2E vs 3C? Unless I'm missing something myself, that is the one showing the RTM interventions. It is basically 3 teeny-tiny pilots and one medium-sized pilot that all showed strong effects but come from a single research group and all have substantial risk of bias. Of course dropping them will reduce the SMD when everything else was basically null to begin with. Does it work better than the others? Maybe, but I'm not remotely convinced based on this.

Actually, I meant 2c, which, if I'm understanding it correctly, shows the sensitivity analysis they discuss on p. 10. The RTM interventions alone on PTSD symptoms are 2e. Since the RCTs are small and have a high risk of bias, I was wondering if the effect being partitioned out was largely measurement error.


This is actually a good example of why I'm generally not a huge proponent of meta-analysis and favor systematic reviews in most cases. It has its place (I've published them myself) as a piece of the puzzle, but it is easy to lose sight of critical characteristics in individual studies. It basically amounts to "reading tea leaves" once we throw all the methodological nuance out the window and distill things to sample size and effect size. Not that I'm saying you were approaching it like that, just a tangential rant on my part.

I've also published these and agree with the above sentiment, though they're not a bad way to get familiar with a topic you know nothing about. Maybe half a step up from a popular press article.
 
Actually, I meant 2c, which, if I'm understanding it correctly, shows the sensitivity analysis they discuss on p. 10. The RTM interventions alone on PTSD symptoms are 2e. Since the RCTs are small and have a high risk of bias, I was wondering if the effect being partitioned out was largely measurement error.
Gotcha. Could be measurement error, could be something systematic (but still not real - a function of biased methodology or something like that), or it could be real. TBD?
 