Training for the ~future~ of Psychiatry (Interventions, Ketamine, psychedelics, + many more!)


luckrules · Full Member · 10+ Year Member
Joined: Nov 11, 2011 · Messages: 63 · Reaction score: 17
Perhaps an off-the-wall question. Current M4 applying into psychiatry. I am broadly interested in how one trains in psychiatry for a lifelong career. In my psychiatry rotations we spent a lot of our time doing things basically how they are done now: inpatient medication management, brief psychotherapeutic interventions, etc. Everyone was afraid of starting Clozaril. I imagine a lot of residency will be like this, with more of an emphasis on outpatient experiences and specialty clinics.

Let's say I am, deep in my heart, very optimistic about the state of neuroscience in the next 20-30 years. Let's say I'm really excited by this. Let's say I really think it's cool and want to be a part of it. Let's say I think MDMA for PTSD is interesting, or that the end-of-life psilocybin research is really exciting. Ketamine is always in the newsletters. How do I, as a resident, position myself to be a part of these kinds of interventions and provide them for my patients in the future?

Part of this question is "how to be a lifelong learner." And I get that I can keep reading the journals, going to meetings, doing CME, etc., on my own time once I am an attending.

But there is a difference between being up on the literature and feeling comfortable with clinical practice. Using MDMA for PTSD as an example, I read the MAPS paper for their phase III trial of MDMA for PTSD, and it is a highly formulaic, operationalized kind of approach. If I were a PGY-15 attending and wanted to offer this to patients (ignoring legal restrictions for a moment), how would one gain enough familiarity to be able to do this? Don't you kind of need rotations in residency, supervision, etc.? And if these things are going to have an impact (or pharmacogenomics, to use a less flashy example) but are not currently standard of care, we won't learn them in residency.

Maybe the last part of this question is a broader one. As psychiatrists, we are medical doctors trained to provide mental health care. There is this interesting mix of the psychological and the neurological, science and art, that I think draws many of us to this field and makes it exciting. If these strange new interventions are being done, shouldn't we be able to take the lead in them? Both from a "turf" perspective and from a patient-centered perspective? The ketamine clinics I've seen are being run by anesthesiologists; why is that? Why aren't we more on the front line of integrating neuroscience into the clinic, for our patients' sake?

Please excuse me if I step on any toes, or if I come across as naive. As I said, I love our field and, as always, I appreciate in advance any insight this forum is able to offer.

 
I mean, this is something you should’ve brought up during interviews to see which programs were going to be most in line with your goals.
 
  • Like
Reactions: 1 user
The learning doesn't stop when you graduate from residency. I would argue that the real learning begins as an attending. Residency and fellowship, if one chooses to do that as well, merely prepare you for practice as an attending. I encounter far more things that are unexpected, unusual, unfamiliar, new, etc. as an attending than I ever did during residency. However, residency and fellowship prepared me for this, and the education, training, and experiences during that period of time are what allow me to reason my way through problems and synthesize a new treatment recommendation or intervention that makes sense and is effective, all based on my foundation.

That doesn't answer everything you've asked, but I think it addresses part of your concern. There's also a clear line that separates reasonable non-conventional treatment modalities and reckless quackery.
 
  • Like
Reactions: 4 users
In my psychiatry rotations we spent a lot of our time doing things basically how they are done now: inpatient medication management, brief psychotherapeutic interventions, etc. Everyone was afraid of starting Clozaril. I imagine a lot of residency will be like this, with more of an emphasis on outpatient experiences and specialty clinics.

Remember that in medicine in general, we spend our time doing things how they are done now (in other words, to the standard of care). Once sufficient evidence for a better treatment option comes together, a new standard of care should emerge. There is a difference between research, in which we generate new knowledge, and clinical practice, in which we apply knowledge. If you want to help establish new interventions for psychiatric disorders, you should find a way to get plugged into the research world.

As for providing treatment that has a seemingly solid evidence base but not yet general acceptance, I'm not really sure. I would be interested in hearing if others here have set up practices at the "cutting edge," and what considerations go into that.
 
  • Like
Reactions: 1 users
And as for everyone being "afraid to start Clozaril," there can be plenty of good reasons not to start Clozaril. Sometimes, though, people are hesitant to go to options like clozapine, TCAs, MAOIs, Suboxone (for OUD), ECT, and other treatments that have a good evidence base but may be trickier to manage. That is unfortunate, and you don't want to fall into the trap of sticking only to the easy-to-manage treatment options. Your patients deserve access to the best options we have to offer, whatever those may be.
 
  • Like
Reactions: 1 users
Once sufficient evidence for a better treatment option comes together, a new standard of care should emerge.
"Should" is optimistic, maybe euphemistic.

As you said, standard of care is what people do, not what they should do. Example: high-sensitivity troponin tests have been available in Europe and Australia for a long time. They reduce costs by making hospital stays for non-cardiac chest pain much shorter, as you don't have to wait for serial testing. The sensitivity is orders of magnitude higher, and they can be used to monitor unstable angina as well. They were only approved in the US a year ago and are not in clinical practice here yet. I've asked two doctors about it, and both thought that the troponin tests they used were high-sensitivity troponin tests (they weren't, at least not in the sense that the term has been used to distinguish them from older tests). They weren't even aware of a completely different, higher standard in a large swath of the developed world. Standard of care is geographic, not evidence-based, even within a country.

Edit: It raises the question of why people do change their practices if evidence isn't the primary motivator. Fear of litigation and marketing (losing vs. making money)? Those would be my top two guesses as impetus for change.
 
  • Like
Reactions: 1 user
Edit: It raises the question of why people do change their practices if evidence isn't the primary motivator. Fear of litigation and marketing (losing vs. making money)? Those would be my top two guesses as impetus for change.

No. As always, at the provider level I mostly give doctors the benefit of the doubt that they are doing their best, and that where they fail is not typically as simple as marketing or fear of litigation. I think this is a common misperception and one that I find vaguely insulting. I'm sure that's not your intent, but I felt the need to say as much.
 
  • Like
Reactions: 1 user
There are many complexities in how and why standards of care change, and in many cases much of it is not something the individual provider has a lot of direct control over.

The example of the more sensitive troponin test: it's not like a test is automatically on the menu of what a provider can order just because it exists. There are a lot of things that control that.

For example, having a patent for a new type of soft drink is many, many steps from having it available on grocery store shelves. There are practical hurdles as well as policy hurdles to surmount.

This would be true of a new test becoming more widely available.

Of course one also needs to be educated about these things, and the responsibility is certainly on a provider to stay abreast of them as best they can.

Lastly, medicine tends to be a conservative profession and slow to change, for better or worse.

Changing the standard of care is difficult because to some degree you are moving from the better known to the lesser known.

While it may seem obvious that the best course of action is to move toward something new that is shown to be better, it actually isn't obvious.

There are many examples of enacting a new standard based on promising and good results, only for the test of time to show that it does more harm than before.

An example would be the change in attitude toward pain control and the prescribing of newly developed opioids for chronic pain in non-terminal patients in the '90s. Another would be DES back in the day.

That said, yes, we will never get anywhere if we don't at some point generalize research results to the wider population, where sadly it is often only in broader application that unforeseen consequences become clear. That's the trade-off that must be made for progress.

All of this is what I was taught in medical school as some of the difficulties facing every provider in deciding whether to be an "early" vs. "late" adopter of changes in standards of care.

Personally, my style leans toward being an early adopter. However, that can come at a cost I must be willing to accept, and that cost is not actually one I pay; my patients do. That is part of why this is an informed consent issue.

One issue, which might be part of what you are observing in terms of providers' hesitancy to change, is that in Western culture, particularly in the US, "newer is better" is frequently the prevailing attitude.

So if I am offering something newer to patients, I have to be pretty careful that this doesn't exert too much undue influence on goals of care, and that they are not dismissing risks out of a conflated sense that newer is better.

It's a bit different when a patient experiences a known side effect from a well understood therapy, vs when they experience unknown ones when you use something not as well understood.

"First do no harm," oddly enough, can be easy for everyone to lose sight of, and this is seen in the culture's idea that doing something is better than nothing, when that often isn't the case.

When people suffer iatrogenic effects no one saw coming, because you went outside the current standard of care in order to rush new research into place, well, not much feels more arrogant or crummier.

These are valid concerns that are often at play in why many physicians can be slow to waver from current standards and incorporate what's new. Yes, they need to be able to do so and adapt their care to new advances. But it isn't simple or easy to make these decisions.

Certainly litigation, marketing, need for more education, have their place, and lots of other psychological factors like neophobia (fear of the new) and fear of doing harm. However there are a lot of legitimate clinical concerns, healthy skepticism, etc involved in the decision-making surrounding advances.

Doctors who are slow to change out of concern for "first do no harm" are not any more wrong than early adopters whose patients suffer negative effects.

There is a place in medicine for all of this thought, for moving forward and for wait-and-see. No doc is strictly one type or the other; with some patients I might be an early adopter of a therapy, while elsewhere I hold back where that seems appropriate.

People forget, too, that "fear of harm" and "fear of litigation" are linked in a way that a provider seeking to avoid both is, in most cases, hardly a coward capitulating to defensive medicine.

Long story short, when you move from well known well studied therapies to the new and lesser known, you sorta need a darn good reason. We can't take risks that aren't justified by possible gain while keeping in mind possible harm, and in the case of what is newer, harm that might not even be known.
 
  • Like
Reactions: 3 users
I was thinking a lot more about this post last night, and I think my original question was poorly formed. Broadly, I think I'm asking how one continues to develop as you get further in your career, which I think a lot of people have answered. More specifically I asked about ketamine, psychedelics, etc, just because they are interesting to me now. That doesn't mean I want to necessarily do them right away (see more on that in a second) but just that I don't see them being done now and wonder how one would gain comfort with new things once one is out in practice.

The learning doesn't stop when you graduate from residency. I would argue that the real learning begins as an attending. Residency and fellowship, if one chooses to do that as well, merely prepare you for practice as an attending. I encounter far more things that are unexpected, unusual, unfamiliar, new, etc. as an attending than I ever did during residency. However, residency and fellowship prepared me for this, and the education, training, and experiences during that period of time are what allow me to reason my way through problems and synthesize a new treatment recommendation or intervention that makes sense and is effective, all based on my foundation.

That doesn't answer everything you've asked, but I think it addresses part of your concern. There's also a clear line that separates reasonable non-conventional treatment modalities and reckless quackery.

I think more recently I've been interested in the above mentioned treatments, admittedly because I've seen a lot of headlines about them and done some of my own reading. I agree with the above quote "clear line that separates reasonable non-conventional treatment modalities and reckless quackery." With regards to ketamine, psychedelics, etc., I hope that line gets drawn somewhere based on solid evidence and that we conduct those investigations in a timely manner. When there is evidence available I hope to practice according to the data to get the best possible outcomes for my patients.

Remember that in medicine in general, we spend our time doing things how they are done now (in other words, to the standard of care). Once sufficient evidence for a better treatment option comes together, a new standard of care should emerge. There is a difference between research, in which we generate new knowledge, and clinical practice, in which we apply knowledge. If you want to help establish new interventions for psychiatric disorders, you should find a way to get plugged into the research world.

As for providing treatment that has a seemingly solid evidence base but not yet general acceptance, I'm not really sure. I would be interested in hearing if others here have set up practices at the "cutting edge," and what considerations go into that.

I think Bartelby here really hit the nail on the head, as beyond "when there is evidence" there is a sort of implementation question as to how something becomes general practice. I was talking with a friend in another specialty yesterday about the standard of care in their field. Every time I asked why they performed certain interventions (usually more invasive ones or continuous monitoring) even though there was no evidence to support their use, they cited fear of an IRB or a lawsuit. While those are real and valid concerns, they are not evidence-based or patient-centered.

I asked an older psychiatrist what it was like when SSRIs came onto the scene, and he was unfazed; basically, SSRIs were similar to TCAs and MAOIs but without the risks and with a better side effect profile. But things like neuromodulation, ECT, ketamine, MDMA-assisted psychotherapy, etc. are a different kind of intervention than just a new antidepressant. So if I were a practitioner out in the community, I think it would be more difficult to all of a sudden start offering these services without some support. It sounds like people are saying you get that support through going to conferences or being involved in research. That makes sense to me; I'm wondering if anyone disagrees with that, or if there is something else. Are the barriers high enough that once you are done with residency you won't do that? If so, how do we get needed, evidence-based treatment to our patients?

And as for everyone being "afraid to start Clozaril," there can be plenty of good reasons not to start Clozaril. Sometimes, though, people are hesitant to go to options like clozapine, TCAs, MAOIs, Suboxone (for OUD), ECT, and other treatments that have a good evidence base but may be trickier to manage. That is unfortunate, and you don't want to fall into the trap of sticking only to the easy-to-manage treatment options. Your patients deserve access to the best options we have to offer, whatever those may be.

I really love this quote as well. I think we're all trying to do right by our patients, and shouldn't lose sight of that.
 
  • Like
Reactions: 1 user
There are many complexities in how and why standards of care change, and in many cases much of it is not something the individual provider has a lot of direct control over.
...
Personally, my style leans toward being an early adopter. However, that can come at a cost I must be willing to accept, and that cost is not actually one I pay; my patients do. That is part of why this is an informed consent issue.
...
Long story short, when you move from well known well studied therapies to the new and lesser known, you sorta need a darn good reason. We can't take risks that aren't justified by possible gain while keeping in mind possible harm, and in the case of what is newer, harm that might not even be known.

I think this is a well thought out discussion of the institutional factors that influence rate of implementation. I think most docs operate basically with their patients' well being in mind, but there are a myriad of other forces that influence how we deliver care. I might just ask, if you are an early adopter, what steps do you take to mitigate any of the potential risks (both to your patients and your liability) associated with using new treatment modalities?
 
  • Like
Reactions: 1 user
Every time I asked why they performed certain interventions (usually more invasive ones or continuous monitoring) even though there was no evidence to support their use, they cited fear of an IRB or a lawsuit.
Somebody told you they did a certain intervention out of fear of the IRB? That doesn't make any sense.

It sounds like people are saying you get that support through going to conferences or being involved in research. That makes sense to me; I'm wondering if anyone disagrees with that, or if there is something else. Are the barriers high enough that once you are done with residency you won't do that?
Getting CMEs is required for maintaining board certification (and even licensure in NJ at least). Many jobs will provide some money for attending conferences. If you can be on staff somewhere that has Grand Rounds, you may be exposed to people from outside your area talking on these topics.

I don't think it's too hard to keep at least close to up to date, though I am still semi-fresh out of fellowship so may be wrong.
 
Somebody told you they did a certain intervention out of fear of the IRB? That doesn't make any sense.

Sorry if that was not clear; the conversation itself was kind of winding. Basically, I was asking why a less invasive form of monitoring, or a more conservative form of management, had not been studied. The answer: because "how could you get that through an IRB?"
 
I think this is a well thought out discussion of the institutional factors that influence rate of implementation. I think most docs operate basically with their patients' well being in mind, but there are a myriad of other forces that influence how we deliver care. I might just ask, if you are an early adopter, what steps do you take to mitigate any of the potential risks (both to your patients and your liability) associated with using new treatment modalities?

Early adopter is relative. Besides the fact that you have to concern yourself with whether or not insurance goes for things, for me it's about doing a really good informed consent discussion.

Before you talk about a different approach, it's important to discuss all the relevant treatment options, and the relative risk/benefit, and then what you think might happen down any path. Obviously if you're offering something newer it's because you think there might be an advantage over other modalities. Sometimes it's an alternative to doing nothing when all else fails. So you outline how you're coming to the recommendation.

Essentially you document just that.

For liability, it needs to be clear that there is evidence, not just the research but also in the patient's personal history, that supports why you did what you did, and not something else.

That's really key. Let's use an extreme, facetious example: say there's research suggesting LSD could be useful for EtOHism. You need to show that other, more conservative therapies have been tried, and that the risk of the unsuccessfully treated condition is greater than the risk taken with the more controversial therapy.

In a lawsuit, the question will always be, "but why didn't you....?" So that is what must be clearly supported in your note.
 
  • Like
Reactions: 1 user
This isn't exactly an early adopter thing, but I assisted in a lamotrigine rechallenge after lamotrigine-induced rash.

That's not exactly a risk-free move.

Basically, what studies exist were cited, a compelling case was made for why it was worth taking the risk rather than pursuing a different medication plan, and clear communication with the patient about the risks was documented.
 
  • Like
Reactions: 1 user
To learn a new procedure, something like ECT, TMS, or a form of psychotherapy, you might take some type of training course (e.g., a week-long ECT training course) and do some work with a practitioner who is already experienced.
For TMS, I believe the manufacturer might offer some training.
 
  • Like
Reactions: 1 user
Let's say I am, deep in my heart, very optimistic about the state of neuroscience in the next 20-30 years. Let's say I'm really excited by this. Let's say I really think it's cool and want to be a part of it. Let's say I think MDMA for PTSD is interesting, or that the end-of-life psilocybin research is really exciting. Ketamine is always in the newsletters. How do I, as a resident, position myself to be a part of these kinds of interventions and provide them for my patients in the future?

I think there's a lot to be cynical about too.

A lot of people have this opinion, about psychiatry more than most other fields, that we're on the cusp of a grand neuroscience- and technology-driven revolution in how we practice. I just don't see it beyond the hype, and I see it less and less the more I look for it.

TMS looks to have an insanely high placebo response. Ketamine infusions are trending in that direction too (plus I had an experience with one of my own patients involved in one of the recent ketamine analog trials that made me really question the quality of what they've been doing in their studies). There's no evidence that genetics-based interventions like GeneSight guide a pathway to more precise treatment compared to just picking meds at random (or by cheapest). I've yet to see an fMRI study that I should actually care about at the level of my clinical practice. I see some of the interventions coming through the pipeline for my friends over in oncology and compare it to the pace of change in our field, and we're pretty much primitive, unless you're going to get really excited about the makers of brexpiprazole throwing sh-t at the FDA wall to see what sticks.

I think it's that relatively primitive nature of psychiatry that makes people have wishful thinking about its future ("psychofuturism"?). I suppose that's all well and good, and mostly harmless, but my own personal Luddite philosophy of practice emphasizes the importance of seeing things as they are over seeing them as you want them to be. You'll do a lot of good for people's lives in this field, but even 20 years from now it'll come not from the precision of a genetically tailored chemical infusion, but from a referral to the talented CBT therapist in the office next door.
 
Last edited:
  • Like
Reactions: 6 users
I think there's a lot to be cynical about too.

A lot of people have this opinion, about psychiatry more than most other fields, that we're on the cusp of a grand neuroscience- and technology-driven revolution in how we practice. I just don't see it beyond the hype, and I see it less and less the more I look for it.
...
I think it's that relatively primitive nature of psychiatry that makes people have wishful thinking about its future ("psychofuturism"?). I suppose that's all well and good, and mostly harmless, but my own personal Luddite philosophy of practice emphasizes the importance of seeing things as they are over seeing them as you want them to be. You'll do a lot of good for people's lives in this field, but even 20 years from now it'll come not from the precision of a genetically tailored chemical infusion, but from a referral to the talented CBT therapist in the office next door.

Hey, I was with you until the last bit. Don't discount the good that an offhand pessimistic comment from your grouchy psychiatrist ;) can do as a catalyst for change. No, really.
 
I think there's a lot to be cynical about too.

A lot of people have this opinion, about psychiatry more than most other fields, that we're on the cusp of a grand neuroscience- and technology-driven revolution in how we practice. I just don't see it beyond the hype, and I see it less and less the more I look for it.
...
I think it's that relatively primitive nature of psychiatry that makes people have wishful thinking about its future ("psychofuturism"?). I suppose that's all well and good, and mostly harmless, but my own personal Luddite philosophy of practice emphasizes the importance of seeing things as they are over seeing them as you want them to be. You'll do a lot of good for people's lives in this field, but even 20 years from now it'll come not from the precision of a genetically tailored chemical infusion, but from a referral to the talented CBT therapist in the office next door.

Reminds me of the joke about the molecular biologist (but you can easily substitute neuroscientist) on his honeymoon.
When the bride was asked how the honeymoon had been, she replied "Fine--but he just sat on the side of the bed and kept telling me how great it was going to be in five years."
 
  • Like
Reactions: 5 users
Hey, I was with you until the last bit. Don't discount the good that an offhand pessimistic comment from your grouchy psychiatrist ;) can do as a catalyst for change. No, really.

Yes, but a therapist can make a grouchy comment weekly. I can do q4 weeks at best, and that's if I'm feeling generous...
 
  • Like
Reactions: 1 users
Reminds me of the joke about the molecular biologist (but you can easily substitute neuroscientist) on his honeymoon.
When the bride was asked how the honeymoon had been, she replied "Fine--but he just sat on the side of the bed and kept telling me how great it was going to be in five years."

Two behaviorists get married and go on their honeymoon. After they finally consummate their marriage, one turns to the other and says, "I know it was good for you, but was it good for me?"
 
  • Like
Reactions: 8 users