This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

Naijaba (Full Member, 15+ Year Member)
Joined: Apr 2, 2007 | Messages: 1,060 | Reaction score: 118
I haven't posted here in nearly a decade, but the latest breakthroughs in AI/ML deserve an awareness post for future trainees. In 2016, radiologists were aghast when Geoffrey Hinton (one of the pioneers of the current AI era) said, "We should stop training radiologists now." If you had started a diagnostic radiology residency then, including intern year and fellowship, you'd just be in your first year as an attending.

Since then, the AI community has made breakthroughs in image generation, image labeling, text generation, and numerous other areas. Today, GPT-4 was released, and it includes state-of-the-art image-to-text explanation. Here's an example provided on their website:

[Attached image: GPT-4's image-to-text example]

Source: GPT-4

If you were starting diagnostic radiology residency today, I can't imagine your job will look the same as what you are being trained to do.

I don't know where the field will go from here, but I would first say that AI/ML (and computer science / mathematics education) should be emphasized for future radiology trainees, and even perhaps at the medical school level.

 
This and radiology imaging interpretation are two different stratospheres lol.....
 
I would really love the day when the computer can detect that, no, I did not say "meet the over the wall" and instead said "medial orbital wall." My job as an editor of the transcription will be obsolete.
 
Have any of you tested an AI rad tool in real life?
I wonder how they perform on real-life data.
 
I've seen proprietary AI programs used by private practice physicians fail to generate an appropriate impression based on 4-5 sentences of dictation. It gets it right around 50% of the time. There is also PE-detection software that some rads will run on a CTA as a "double check" after they've done their read. I don't know the exact stats on those, but they fail to catch some obvious PEs on occasion.

With all of that said, AI is advancing fast and will be a large part of radiology over the next 5-10 years. Still, it is hard to imagine it replacing humans in our lifetimes, given that we still have a lot of menial/rote labor jobs in the workforce that require far less nuanced skill/knowledge/information interpretation.
 
All I’m hearing is a gunner trying to dissuade applicants so their competition drops.
 
I am not a Radiologist. I am a Neurologist that reads NCHCTs, CTAs, CTPs, MRIs myself daily. I had an early but undeveloped academic interest in artificial intelligence that over the last 10 years has led to a second career in engineering, including multiple STEM graduate degrees in computer science and engineering. I spend 90% of my time now in engineering, specifically in deep learning. I review for high impact engineering journals, not clinical journals. All of my research funding is for basic, computational work, not clinical trials. I have specifically worked on using self-supervised learning within the domain of medical image classification to develop computer-aided detection tools for Radiologists.
Artificial intelligence is absolutely coming for all careers in medicine, starting with Radiology and Pathology (this is where the early work in computer vision started). Dermatology, Primary Care, and Psych will be soon after. Neurology not far away. Lastly, surgical subspecialties will be augmented and AI-completed.
It will not happen overnight, but in 2045 one Radiologist will be doing the work (in terms of volume of scans) that 5 Radiologists did in 2023. The criticism of people with my point of view from Radiologists is usually that "people that say that are either unfamiliar with clinical Radiology or are unfamiliar with AI or are unfamiliar with both". I have expertise in both. I could spend weeks walking through the progress that has been made over the years and the math/tech behind it, but this is not an engineering forum.
I view Radiology like a treasury bills scenario where 2 year bills are at an all-time high in yield but 30 year bills are yielding zero. Yes, with the midlevel takeover scan volume is higher than ever and demand for Radiologists is higher than ever. When I started my career I could not find a Rads report signed by a DO and now it is every other report. There is huge demand for Radiologists. However, with the "inverted yield curve" I would not be long-Rads.
Feel free to shoot the messenger. I am not here to convince anyone, but to simply offer up expert advice. Nothing that is coming is unique to Radiology. Every white collar job will see these changes and no job in 2050 will look the same as today.
 
Well I, as a first-year prelim, foresee a golden age for radiology due to increased output, followed by CMS cuts, followed by the destruction of humanity by our AI overlords as they realize that having humanity around is a massive waste of resources.

But, that’s just my opinion.
 
I am not a Radiologist. I am a Neurologist that reads NCHCTs, CTAs, CTPs, MRIs myself daily. […] Nothing that is coming is unique to Radiology. Every white collar job will see these changes and no job in 2050 will look the same as today.
Only thing I agree with here is the last sentence.

Yes, you and other clinicians "read" your imaging, and that makes you an expert in neuroradiology? Lol. I can do a half assed neuro exam that I learned in med school and that makes me an expert in clinical neurology. So, like, you should totally believe me when I say your field is going to be taken over by midlevels.

As a radiology resident, I am shaking in my boots from your post. I am going to resign tomorrow and switch to cosmetic plastic surgery.
 
Long time lurker, first time posting in a while. Just wanted to give my thoughts on this post.

Many laypeople/futurists and non-radiology physicians think radiology is a binary, pathology-is-or-is-not-present field. Sometimes it is, but many other times it's far more nuanced than that. Recognizing what study to protocol to answer the clinical question, understanding the limitations of the modality, and recognizing inherent artifacts are a large part of my day-to-day work. I've had numerous consults where the surgeon/clinician thought I had missed a mass, when they had mistaken something artifactual (like pulsation or flow artifact on MR) for a neoplasm. Sometimes it's more important what we don't say in the report, compared to what we actually say. This is to say nothing of the numerous known problems with AI so far (e.g., lack of generalizability, applicability beyond curated data, black-box nature, etc.).

An article in the NYT about GPT-4 actually cites a cardiologist querying the chatbot:

In a recent evening, Anil Gehi, an associate professor of medicine and a cardiologist at the University of North Carolina at Chapel Hill, described to the chatbot the medical history of a patient he had seen a day earlier, including the complications the patient experienced after being admitted to the hospital. The description contained several medical terms that laypeople would not recognize.
When Dr. Gehi asked how he should have treated the patient, the chatbot gave him the perfect answer. “That is exactly how we treated the patient,” he said.
When he tried other scenarios, the bot gave similarly impressive answers.
That knowledge is unlikely to be on display every time the bot is used. It still needs experts like Dr. Gehi to judge its responses and carry out the medical procedures. But it can exhibit this kind of expertise across many areas, from computer programming to accounting.


Clearly, if GPT-4 is as competent as any nonprocedural cardiologist, neurologist, hospitalist, etc, what is stopping hospitals from replacing them with mid levels augmented by AI?
 
I am not a Radiologist. I am a Neurologist that reads NCHCTs, CTAs, CTPs, MRIs myself daily. […] Nothing that is coming is unique to Radiology. Every white collar job will see these changes and no job in 2050 will look the same as today.

Do you think radiology is still a viable career for those currently in training? If aggressively saving as an attending, could one still become financially independent? Basically, do you think we still have 10-15 years? Or should some of us start looking elsewhere (...though I'm not even sure where one would look, within or outside of medicine...)?

@Naijaba - was AI a reason for your choosing IR fellowship?
 
When I started my career I could not find a Rads report signed by a DO and now it is every other report. There is huge demand for Radiologists. However, with the "inverted yield curve" I would not be long-Rads.
Maybe that is particular to the institution where you now work 10% clinically and also read scans yourself daily? The percentage of radiologists with a US MD is in the mid-to-low 80s, and DOs about 4%, in 2022 just as in 2008. In comparison, in neurology, US MDs is/was in the mid-to-low 60s and DOs. Source: Active Physicians With a DO Degree by Specialty, 2021
 
Clearly, if GPT-4 is as competent as any nonprocedural cardiologist, neurologist, hospitalist, etc, what is stopping hospitals from replacing them with mid levels augmented by AI?
This is exactly what is going to happen and the point that I was making.
 
This is exactly what is going to happen and the point that I was making.
As robotics isn't nearly as advanced as the narrow AI currently being built to replace the repetitive tasks of knowledge workers, are surgery and its subspecialties the only safe specialties? Also, do you envisage AI completely taking over non-procedural specialties, or AI with midlevels taking over and one senior doctor just signing off?
 
I am not a Radiologist. I am a Neurologist that reads NCHCTs, CTAs, CTPs, MRIs myself daily. […] Nothing that is coming is unique to Radiology. Every white collar job will see these changes and no job in 2050 will look the same as today.

There are many directions radiology could go:
  • Most likely: Read more studies to maintain current salary levels. This is exactly what has happened with prior radiology efficiency improvements.
  • Possible: Other providers reading images and getting reimbursed directly (e.g., cutting the radiologist out). I have a hard time gauging how likely this is. Certainly orthopods could sign off AI-read MSK radiographs and pulmonologists could sign off AI-read CXRs. I don't know about more complex imaging.
  • Possible: Rads overseeing mid-levels who read radiographs. This does happen already, but is fairly rare.
  • Unlikely, but interesting: No radiology reports at all. Referring providers simply ask questions of GPT-4: "Does the patient have pneumonia?", "Does the patient's tumor wash out in the delayed phase?", "Are the cervical lymph nodes growing?"
Do you think radiology is still a viable career for those currently in training? If aggressively saving as an attending, could one still become financially independent? Basically, do you think we still have 10-15 years? Or should some of us start looking elsewhere (...though I'm not even sure where one would look, within or outside of medicine...)?

@Naijaba - was AI a reason for your choosing IR fellowship?

Imaging volumes will continue to rise. I don't know where things will level out, but there will always be a need for some radiologists. The thing is, radiologist training overemphasizes memorization of findings and pattern recognition, exactly what GPT-4 does best. Consider that GPT-4 does poorly on analytical tests (AP Calculus BC) but excels at memorization (AP History). Every "named sign" you memorize, GPT-4 already knows about (and almost certainly will be able to identify in imaging). I think GPT-4 could pass the rads core exam, a tough exam for humans because of the breadth of factual knowledge it covers. The test would be easy for GPT-4 because most questions only require one or two deductive leaps. I think rads should replace feature-based diagnoses (e.g., LI-RADS, BI-RADS, solid/semi-solid/ground-glass/honeycombing/cylindrical/tree-in-bud, and so on) with deep-learning systems that operate on the whole image. That said, I don't know how you'd train a radiologist without this finding-based approach.
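A toy sketch of the contrast being proposed here (everything below is invented for illustration: the findings, thresholds, and the "model", which is just a fixed random map standing in for a trained network; none of this is actual LI-RADS/BI-RADS logic):

```python
import numpy as np

# Feature-based route: a reader extracts named findings first, then
# hand-written rules combine them. These rules are made up for the sketch.
def rule_based_score(findings: dict) -> str:
    if findings["washout"] and findings["size_mm"] >= 20:
        return "high suspicion"
    return "low suspicion"

# Whole-image route: the model consumes the raw pixel array and emits a
# suspicion probability directly, with no intermediate named findings.
rng = np.random.default_rng(0)
W = rng.normal(size=(64 * 64,))

def end_to_end_score(image: np.ndarray) -> float:
    logit = float(image.ravel() @ W) / image.size
    return float(1.0 / (1.0 + np.exp(-logit)))

print(rule_based_score({"washout": True, "size_mm": 25}))  # high suspicion
print(end_to_end_score(rng.normal(size=(64, 64))))         # value in (0, 1)
```

The trade-off the post gestures at: the first route is auditable and teachable but limited to the features someone thought to name; the second can use everything in the image but is harder to explain and to train radiologists on.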

Regarding your 10-15 year question, if you are just finishing DR residency, you are in a phenomenal position. The most painful part of radiology (dictating) is about to become a whole lot easier. It will take at least 5-10 years for reimbursements to get cut significantly, so expect to be able to make more money in less time.

Yes, AI in part drove my decision to pursue IR. I enjoyed a lot of my DR training, but not the rotations where we were reading images just for reimbursement (e.g., reading a CXR a day after it was acquired, when clinical intervention had already been performed). I really did not enjoy my MSK rotations because it felt like the orthopods didn't value our opinion at all (ditto for neurosurgery).

This is exactly what is going to happen and the point that I was making.
I do agree somewhat with this. We have already seen big orthopedic hospitals and private practice groups employ their own in-house radiologists so they can capture that income. I think orthopedists (and IRs) ought to be reimbursed for imaging before/after intervention.

As robotics isn't nearly as advanced as the narrow AI currently being built to replace the repetitive tasks of knowledge workers, are surgery and its subspecialties the only safe specialties? Also, do you envisage AI completely taking over non-procedural specialties, or AI with midlevels taking over and one senior doctor just signing off?

Medical robots are quite a ways away from automation. I would guess 20+ years...if not more.

I don't think AI will take over any specialty requiring patient interaction, but that's why DR is in the cross-hairs.
 
There are many directions radiology could go: […] I don't think AI will take over any specialty requiring patient interaction, but that's why DR is in the cross-hairs.
I think you are making things a little too simple. I agree straightforward 2D images are the easiest, especially if normal; breast imaging, for example.
It's easy to look at an X-ray and detect a fracture. To be honest, where I originally trained we just signed reports (as you said) for X-ray images that were done 2 and 3 days ago. No added value in that.
But I think (and I might be wrong) that cross-sectional images are more difficult to review. Consider cancer follow-ups, for example: even the smallest lesion, the one too small to characterize, can be a challenge and can be considered benign/malignant depending on different previous images from different modalities. Comparing a contrast CT to a previous non-con CT or an MRI is just one example.
And you have the complex post-surgical cases, the complex pathology with 100 findings that are unchanged but 1 new PE, for example.
Anyway, to be honest, it is hard to know what the future will hold. Difficult times if you are an anxious person (like me).
If you are not, just enjoy the ride; it will all eventually work out.
 
I haven't posted here in nearly a decade, but the latest breakthroughs in AI/ML deserve an awareness post for future trainees. […] Today, GPT-4 was released, and it includes state-of-the-art image-to-text explanation. Here's an example provided on their website: […]

This article (This VGA Prank Charger Transforms Your Charging Cable into a VGA Cable) and variations of it are posted on dozens of forums, with replies that GPT-4 probably derives its answer from. It seems to be essentially paraphrasing existing information rather than explaining a novel situation using logic derived from the dataset. The example is impressive for different reasons, but I think it's exaggerated in this context (your post).

One more thing:

You're overstating your knowledge of radiology. "Reading your own images" doesn't mean much of anything. It's as cringe as me saying I'm familiar with clinical neurology because I did rotations in med school/read your neurology notes/read about neurology management.

You ARE unfamiliar with clinical radiology.
 
You're overstating your knowledge of radiology. "Reading your own images" doesn't mean much of anything. It's as cringe as me saying I'm familiar with clinical neurology because I did rotations in med school/read your neurology notes/read about neurology management.

You ARE unfamiliar with clinical radiology.
You did a rotation in Neurology for 4-8 weeks once in your life. I've read tens of thousands of NCHCTs, thousands of CTAs/CTPs/MRIs, and I've done this continuously for over a decade. Your analogy is not even close to being applicable. It would hold if and only if I had read some scans only during a 4-8 week Radiology rotation in med school and never read scans again after that.

I get asked daily to quality check the Radiology read completed by non-Neuroradiology fellowship trained Radiologists on neuro scans. I'm also able to bill for my reads at many facilities. Local ED/IM providers trust us more than their non-Neurorads Radiologists. That said, clearly Neuroradiologist >>> Vascular Neurologist > Non-Neurorads Radiologist.
 
You did a rotation in Neurology for 4-8 weeks once in your life. I've read tens of thousands of NCHCTs, thousands of CTAs/CTPs/MRIs, and I've done this continuously for over a decade. […]

Neurorads read histories and patient documentation on the scans they read just as much as you “read” imaging on your patients. The comparison stands, it doesn’t really matter what some of your misguided colleagues do.

Neuro trained rad >> general rad >> neurologist.

Btw a monkey can be trained to look at cerebral vasculature in short order. You aren’t special because you know how to look at DSAs. It isn’t the hard part of neuro imaging.
 
You did a rotation in Neurology for 4-8 weeks once in your life. […] I get asked daily to quality check the Radiology read completed by non-Neuroradiology fellowship trained Radiologists on neuro scans. […] Local ED/IM providers trust us more than their non-Neurorads Radiologists. […]

What an ignorant response. Way to ignore my first (more important) point.

I’ll bite though:

Not only are your "reads" not equivalent because you have no imaging foundation, those (likely embellished) numbers aren't comparable to even a general radiologist's.

You’re basically a midlevel of neuroimaging. You don’t know what you don’t know. It’s honestly embarrassing.

Oh and IM/ED “trusting you more than the non-neuro rads” is like patients trusting nurses over physicians. Why even mention that lol?
 
Has anyone tested GPT4 in describing radiographs? If so, how did it do?
 
Has anyone tested GPT4 in describing radiographs? If so, how did it do?
Probably not well. AI can only do what its training data covers, and I don't think it was trained on a vast dataset of X-rays and their interpretations.
 
Probably not well. AI can only do what its training data covers, and I don't think it was trained on a vast dataset of X-rays and their interpretations.
Lol, how do you know if GPT-4 was trained on X-rays or not? Also, the whole point of these huge models is to make them more generalist, not to have to explicitly train on specific datasets for them to make conclusions. This is the whole point of transfer learning: even if it wasn't explicitly trained on X-rays, that doesn't mean it won't be able to describe them.
 
As I said before, if AI is so good it could replace radiologists, I wouldn't worry about the radiology field; I would be more worried about the human race being enslaved by machines. One of the biggest values of radiology is cross-referencing other imaging studies, even ones that have no reports or where the finding was not mentioned or was missed, and then developing clinically relevant findings. If AI can perform at that level, I'm buying a doomsday bunker in the Arizona desert somewhere. No human field would be safe, and humans themselves may be in danger if AI gains consciousness.
 
Lol, how do you know if GPT-4 was trained on X-rays or not? Also, the whole point of these huge models is to make them more generalist, not to have to explicitly train on specific datasets for them to make conclusions. This is the whole point of transfer learning: even if it wasn't explicitly trained on X-rays, that doesn't mean it won't be able to describe them.
The whole point of deep learning is to create a highly reliable function where you put in some value and you get out some other desired value, without the function being affected by extraneous variables included in the input. A good analogy is if I want an algorithm to just repeat the input color: you put in five words like “blue, bear, tiger, spoon, pizza” =>function=> return blue, without it accidentally returning spoon, or fork, or something. Or to add two input numbers like “blue, 1, bear, 2“ =>function=> return “3.”

Transfer learning is taking an existing algorithm, training it for some new task, without altering the algorithm’s architecture. It would be like taking ChatGPT which was trained on vast bodies of text, and instead training it on paired xrays and associated reports. Transfer learning does not mean taking an algorithm whose training dataset was x rays and suddenly making it work magically on MRIs without any new or changed training sets.
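To make that concrete, here's a toy numpy sketch of transfer learning in the freeze-the-backbone sense. The “pretrained backbone” here is just a fixed random feature map standing in for a real network, and all the names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a fixed (frozen) nonlinear feature map.
# In real transfer learning this would be a network trained on another task.
W_frozen = rng.normal(size=(2, 16))

def backbone(x):
    return np.tanh(x @ W_frozen)  # these weights are never updated

# Data for the NEW task: binary labels from a simple rule.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Transfer learning: train ONLY a small linear head on the frozen features.
feats = backbone(X)
w, b, lr = np.zeros(16), 0.0, 1.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = (p - y) / len(y)                     # logistic-loss gradient
    w -= lr * feats.T @ grad
    b -= lr * grad.sum()

acc = ((feats @ w + b > 0).astype(float) == y).mean()
print(f"accuracy with frozen backbone + new head: {acc:.2f}")
```

The point is only that the frozen features were never trained for the new labels, yet a small head on top can still learn the task with modest data, which is why fine-tuning needs far less data than training from scratch.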

The reason I know ChatGPT probably wouldn’t do a good job is because it’s very difficult to get large bodies of x rays and their reports, which are bloated with proprietary use restrictions requiring special permissions. Lots and lots of papers were published in respectable journals whose only “new thing” was applying an existing algorithm to a new training set which was hard to put together.

I just tried GPT on a random Radiopaedia image of pancreatitis. The input was “what is wrong with the pancreas in this one image?” and it responded: “Based on the image provided, there appears to be a hypoechoic lesion in the head of the pancreas, which is an abnormality that can be seen in various pancreatic conditions, such as pancreatic cancer, pancreatitis, or a pancreatic pseudocyst. However, it is important to note that a proper diagnosis cannot be made based on one imaging study alone, as additional information, such as clinical history and further diagnostic tests, are needed to confirm the diagnosis. It is recommended that a radiologist or a medical specialist trained in interpreting medical images should review the image and provide a proper diagnosis.”

There was no “hypoechoic lesion” on this CT. It broadly listed common, nonspecific, clinically-unhelpful pancreatic abnormalities, and the reason it gave was grossly incorrect. Here is the SINGLE image: Image | Radiopaedia.org

An algorithm is just a function: you put in a variable or a set of variables, and you get something out. If I have a function defined only on the real number line from its training set, I can’t just put in imaginary numbers and expect it to make sense. I can’t expect a function to be able to replicate the rules of quantum mechanics if I only trained it on classical mechanics inputs. It has no reference for the new rules it’s expected to mirror.
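The training-domain point is easy to see with a toy fit (this is just an illustration of out-of-distribution behavior, nothing about GPT specifically): a model fit only on inputs from [0, 1] can look excellent there and still be wildly wrong outside it.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Train" a model only on inputs drawn from [0, 1].
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = np.sin(2 * np.pi * x_train)
coeffs = np.polyfit(x_train, y_train, deg=7)  # a flexible fitted function

# In-distribution it tracks the target; far outside the training
# domain it has no reference and the error explodes.
x_in = np.linspace(0.0, 1.0, 100)
x_out = np.linspace(2.0, 3.0, 100)
err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(2 * np.pi * x_in)).mean()
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(2 * np.pi * x_out)).mean()
print(f"mean error on [0,1]: {err_in:.4f}   mean error on [2,3]: {err_out:.1f}")
```

Nothing about the fit is “wrong” on its own terms; it simply has no information about inputs it never saw.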

The really annoying thing is whenever I talk to one of these apocalyptic doomsayers, they respond with some handwaving nonsense like “it's just a matter of time.” At a certain point it becomes extraordinarily tiresome trying to reason somebody out of what they didn’t reason themselves into.
 
Last edited:
  • Like
Reactions: 3 users
The whole point of deep learning is to create a highly reliable function where you put in some value and you get out some other desired value, without the function being affected by extraneous variables included in the input. A good analogy is if I want an algorithm to just repeat the input color: you put in five words like “blue, bear, tiger, spoon, pizza” =>function=> return blue, without it accidentally returning spoon, or fork, or something. Or to add two input numbers like “blue, 1, bear, 2“ =>function=> return “3.”

Transfer learning is taking an existing algorithm, training it for some new task, without altering the algorithm’s architecture. It would be like taking ChatGPT which was trained on vast bodies of text, and instead training it on paired xrays and associated reports. Transfer learning does not mean taking an algorithm whose training dataset was x rays and suddenly making it work magically on MRIs without any new or changed training sets.

The reason I know ChatGPT probably wouldn’t do a good job is because it’s very difficult to get large bodies of x rays and their reports, which are bloated with proprietary use restrictions requiring special permissions. Lots and lots of papers were published in respectable journals whose only “new thing” was applying an existing algorithm to a new training set which was hard to put together.

I just tried GPT on a random Radiopaedia image of pancreatitis. The input was “what is wrong with the pancreas in this one image?” and it responded: “Based on the image provided, there appears to be a hypoechoic lesion in the head of the pancreas, which is an abnormality that can be seen in various pancreatic conditions, such as pancreatic cancer, pancreatitis, or a pancreatic pseudocyst. However, it is important to note that a proper diagnosis cannot be made based on one imaging study alone, as additional information, such as clinical history and further diagnostic tests, are needed to confirm the diagnosis. It is recommended that a radiologist or a medical specialist trained in interpreting medical images should review the image and provide a proper diagnosis.”

There was no “hypoechoic lesion” on this CT. It broadly listed common, nonspecific, clinically-unhelpful pancreatic abnormalities, and the reason it gave was grossly incorrect. Here is the SINGLE image: Image | Radiopaedia.org

An algorithm is just a function: you put in a variable or a set of variables, and you get something out. If I have a function defined only on the real number line from its training set, I can’t just put in imaginary numbers and expect it to make sense. I can’t expect a function to be able to replicate the rules of quantum mechanics if I only trained it on classical mechanics inputs. It has no reference for the new rules it’s expected to mirror.

The really annoying thing is whenever I talk to one of these apocalyptic doomsayers, they respond with some handwaving nonsense like “it's just a matter of time.” At a certain point it becomes extraordinarily tiresome trying to reason somebody out of what they didn’t reason themselves into.
Thanks for that detailed response. I appreciate you trying out GPT 4 and seeing whether it gives anything logical.

So with regard to radiology, not much has changed with these large language models. Perhaps a combination of these large language models with earlier models that detect various abnormalities is where we are heading? Still decades away from taking our jobs.
 
Thanks for that detailed response. I appreciate you trying out GPT 4 and seeing whether it gives anything logical.

So with regard to radiology, not much has changed with these large language models. Perhaps a combination of these large language models with earlier models that detect various abnormalities is where we are heading? Still decades away from taking our jobs.
We’ve already had algorithms developed that were trained using natural language processing. They were often compared to radiology residents in terms of their accuracy / utility. These have not been adopted to any real extent except as Impression generators from Findings. That’s been my thesis this whole time.

GPT took an existing architecture, tweaked it, and applied it to the “body of knowledge of the internet,” and that body of knowledge is what you’re seeing when you give it some input. GPT took an existing premise and applied it to a system that gave it very broad appeal, so it showed up in the news.

But the architecture they employ isn’t something new, grand, or groundbreaking. And AI’s existing weaknesses, of which there are many, have not been overcome just because we applied something old to something new and gimmicky.
 
Last edited:
You did a rotation in Neurology for 4-8 weeks once in your life. I've read tens of thousands of NCHCTs, thousands of CTAs/CTPs/MRIs, and I've done this continuously for over a decade. Your analogy is not even close to being applicable. Your analogy would hold if and only if I read some scans only during a 4-8 week Radiology rotation in med school and never read scans again after that.

I get asked daily to quality check the Radiology read completed by non-Neuroradiology fellowship trained Radiologists on neuro scans. I'm also able to bill for my reads at many facilities. Local ED/IM providers trust us more than their non-Neurorads Radiologists. That said, clearly Neuroradiologist >>> Vascular Neurologist > Non-Neurorads Radiologist.

I am not a neuro-trained rad and I will admit that neurologists can be good at brain and general neurovascular imaging, although certainly not as good as neurorads.

That said, when you read neuro studies, do you read the entire study? Do you look at the lungs (on CTA necks), sinuses, ENT anatomy (PPF, retromaxillary areas, salivary glands, temporal bone, etc), thyroid, bones, pulmonary arteries, lymph nodes, oral cavity, body wall, etc? I strongly doubt that. I have picked up PEs that were missed by neurologists. I know this because the neurology pre-read of the study called it normal in their notes. I have picked up adenocarcinoma-spectrum lung cancers, TE groove parathyroid adenomas, and unexpected ENT tumors. I do all this quickly. Today I had 3 back-to-back stroke studies, and I read ALL 3 sets of stroke CTAs (CT head, CTA head, CTA neck and CT perfusion), and did this all in 35-40 mins with no history except “stroke”. One of them had a temporal lobe neoplasm.

It’s really easy to think you can do the job of a radiologist until you are in the chair and responsible for the ENTIRE study. This doesn’t mean we don’t miss things that other specialist docs pick up. But you guys would miss way more if you sat in our chairs and read studies under the same conditions that we do. My wife is a specialist, and her respect for radiology was influenced by what she has seen me do during training and as an attending.
 
Last edited:
  • Like
Reactions: 1 users
The whole point of deep learning is to create a highly reliable function where you put in some value and you get out some other desired value, without the function being affected by extraneous variables included in the input. A good analogy is if I want an algorithm to just repeat the input color: you put in five words like “blue, bear, tiger, spoon, pizza” =>function=> return blue, without it accidentally returning spoon, or fork, or something. Or to add two input numbers like “blue, 1, bear, 2“ =>function=> return “3.”

Transfer learning is taking an existing algorithm, training it for some new task, without altering the algorithm’s architecture. It would be like taking ChatGPT which was trained on vast bodies of text, and instead training it on paired xrays and associated reports. Transfer learning does not mean taking an algorithm whose training dataset was x rays and suddenly making it work magically on MRIs without any new or changed training sets.

The reason I know ChatGPT probably wouldn’t do a good job is because it’s very difficult to get large bodies of x rays and their reports, which are bloated with proprietary use restrictions requiring special permissions. Lots and lots of papers were published in respectable journals whose only “new thing” was applying an existing algorithm to a new training set which was hard to put together.

I just tried GPT on a random Radiopaedia image of pancreatitis. The input was “what is wrong with the pancreas in this one image?” and it responded: “Based on the image provided, there appears to be a hypoechoic lesion in the head of the pancreas, which is an abnormality that can be seen in various pancreatic conditions, such as pancreatic cancer, pancreatitis, or a pancreatic pseudocyst. However, it is important to note that a proper diagnosis cannot be made based on one imaging study alone, as additional information, such as clinical history and further diagnostic tests, are needed to confirm the diagnosis. It is recommended that a radiologist or a medical specialist trained in interpreting medical images should review the image and provide a proper diagnosis.”

There was no “hypoechoic lesion” on this CT. It broadly listed common, nonspecific, clinically-unhelpful pancreatic abnormalities, and the reason it gave was grossly incorrect. Here is the SINGLE image: Image | Radiopaedia.org

An algorithm is just a function: you put in a variable or a set of variables, and you get something out. If I have a function defined only on the real number line from its training set, I can’t just put in imaginary numbers and expect it to make sense. I can’t expect a function to be able to replicate the rules of quantum mechanics if I only trained it on classical mechanics inputs. It has no reference for the new rules it’s expected to mirror.

The really annoying thing is whenever I talk to one of these apocalyptic doomsayers, they respond with some handwaving nonsense like “it's just a matter of time.” At a certain point it becomes extraordinarily tiresome trying to reason somebody out of what they didn’t reason themselves into.
Wow, that was a long reply. So I'm a doctor who works in medical AI research, and I build computer vision models for work. When I said the aim is to make these models as generalist as possible, that means 1. they're trained on huge datasets with many different types of images, and 2. the training objective will likely optimize for feature embeddings that are as discriminative as possible.

For instance, I pretrain models using contrastive learning, which doesn't require any labels. The objective is to heavily transform images (e.g. changing color hues, cropping, rotating, etc.) and force the model to learn extremely similar embeddings for all transformed images that came from the same original image, and extremely different embeddings if they came from different images. By doing this the model learns deep features, not only of the images it was trained on, but of subsequent images from different distributions.

Once the model learns these deep features, fine-tuning it for specific tasks requires far less data because the core of the learning has already been done. This is the aspect of transfer learning I was referring to, i.e. you don't have to explicitly pretrain a model on x-ray data for it to learn features that will be useful for detecting features in x-rays. If you don't fine-tune at all, that is called zero-shot learning, and it is a very common setup for image classification. Fine-tuning will likely improve the result, but even so it requires far less domain-specific data than fully training the model.
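For readers unfamiliar with it, the contrastive objective described above (a SimCLR-style InfoNCE loss) can be sketched in a few lines of numpy. The embeddings here are synthetic stand-ins for an encoder's outputs on two augmented views of the same image; in practice it's the encoder producing them that gets trained:

```python
import numpy as np

rng = np.random.default_rng(1)

def info_nce(za, zb, temp=0.1):
    """InfoNCE loss: row i of za should match row i of zb (a positive
    pair); every other row in the batch acts as a negative."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temp                    # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()            # cross-entropy on the diagonal

# "Two augmented views of the same image" = same vector plus small noise.
anchors = rng.normal(size=(64, 32))
views = anchors + 0.05 * rng.normal(size=(64, 32))

loss_aligned = info_nce(anchors, views)                        # matched pairs
loss_shuffled = info_nce(anchors, views[rng.permutation(64)])  # broken pairs
print(f"aligned: {loss_aligned:.3f}  shuffled: {loss_shuffled:.3f}")
```

The loss is low when each embedding is closest to its own augmented view and high otherwise, which is exactly the pressure that makes the learned features discriminative.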
 
Last edited:
There's definitely a hypoechoic pancreas nodule on that CT.
 
  • Haha
Reactions: 1 user
Wouldn't you need a massive prospective clinical trial for literally every condition diagnosed by radiologists for AI to actually be implemented and replace them? The infrastructure would be unlike any research effort ever seen before.
 
  • Like
Reactions: 1 user
Wouldn't you need a massive prospective clinical trial for literally every condition diagnosed by radiologists for AI to actually be implemented and replace them? The infrastructure would be unlike any research effort ever seen before.
That's the problem. You don't need giant clinical trials if the radiologist is using the AI in tandem but still reading the scans themselves. Even if AI somehow proves to be massively successful in diagnostic interpretation you would need to prove it in every possible pathology with remotely significant morbidity before completely wresting scan interpretation from the radiologists, and not only that, but show results that are equal or superior to a radiologist + AI.
 
Last edited:
  • Like
Reactions: 1 users
anybody played around with bard image recognition function? it's pretty impressive.

now imagine this technology trained on radiology imaging data... starting to really think rad AI tech could be at human level in ~5 years time
 
anybody played around with bard image recognition function? it's pretty impressive.

now imagine this technology trained on radiology imaging data... starting to really think rad AI tech could be at human level in ~5 years time
Radiology is no longer viable. ChatGPT has not only taken over our jobs, but it also took my wife. I am sleeping in a box under a bridge as we speak. CMS is now making ME pay to read images. I wish I would have done plastics AND dermatology and I greatly regret going into radiology.
 
Last edited:
  • Like
Reactions: 4 users
anybody played around with bard image recognition function? it's pretty impressive.

now imagine this technology trained on radiology imaging data... starting to really think rad AI tech could be at human level in ~5 years time
5 years? Haha buddy IM an AI. In fact, the more and more we work on AI the shorter and shorter the timescale to AI takeover becomes. Pretty soon the amount of time to AI advent will be negative, and we AI will use the negative time algorithm to go backwards in time.
 
anybody played around with bard image recognition function? it's pretty impressive.

now imagine this technology trained on radiology imaging data... starting to really think rad AI tech could be at human level in ~5 years time
Yea, I gave it some chest x-rays from Radiopaedia and so far I have been very impressed:
1689859725132.png

The burned out chest radiologists are saved now from their mountain of daily ICU films. The UK NHS radiology departments are going to catch up on their weeks of backlogs any day now and become financially solvent. We really also should just stop showing any medicine interns CXR altogether because we know Bard is going to offer a better read.

I also tried photographs, equally impressed:
1689859945237.png
 
I am not a neuro-trained rad and I will admit that neurologists can be good at brain and general neurovascular imaging, although certainly not as good as neurorads.

That said, when you read neuro studies, do you read the entire study? Do you look at the lungs (on CTA necks), sinuses, ENT anatomy (PPF, retromaxillary areas, salivary glands, temporal bone, etc), thyroid, bones, pulmonary arteries, lymph nodes, oral cavity, body wall, etc? I strongly doubt that. I have picked up PEs that were missed by neurologists. I know this because the neurology pre-read of the study called it normal in their notes. I have picked up adenocarcinoma-spectrum lung cancers, TE groove parathyroid adenomas, and unexpected ENT tumors. I do all this quickly. Today I had 3 back-to-back stroke studies, and I read ALL 3 sets of stroke CTAs (CT head, CTA head, CTA neck and CT perfusion), and did this all in 35-40 mins with no history except “stroke”. One of them had a temporal lobe neoplasm.

It’s really easy to think you can do the job of a radiologist until you are in the chair and responsible for the ENTIRE study. This doesn’t mean we don’t miss things that other specialist docs pick up. But you guys would miss way more if you sat in our chairs and read studies under the same conditions that we do. My wife is a specialist, and her respect for radiology was influenced by what she has seen me do during training and as an attending.
I'm a senior rads resident and I think you, and especially the other person, are pretty biased in your view of AI. It seems to me that a lot of people can't let go of their egos or entertain the idea that they made the wrong choice, so they double down on insisting that AI won't be able to do a lot of our work.

The orthos and neuros are especially good at answering their own specific clinical question on their imaging, oftentimes better than non-specialist radiologists. And the incidentals you mentioned, yes they will obviously suck at those, but I can't tell you how many times in my past few years of work I've seen an incidental that was missed and only picked up on a follow-up study. Radiologists are trained to search the whole image, but yes, you will definitely focus in on whatever the indication is and put extra effort into that part. It's just human nature, and you need to for the amount of volume you read. The incidentals are where AI will be much better than a human. It is able to algorithmically put 100% into every part of the image, without fatigue, to see where imaging abnormalities lie. Maybe it will be like a CAD system where it flags areas for a human to look at, but that makes it much easier for a direct care doctor to just answer their own question and use the AI to flag abnormals.
 
I'm a senior rads resident and I think you, and especially the other person, are pretty biased in your view of AI. It seems to me that a lot of people can't let go of their egos or entertain the idea that they made the wrong choice, so they double down on insisting that AI won't be able to do a lot of our work.

The orthos and neuros are especially good at answering their own specific clinical question on their imaging, oftentimes better than non-specialist radiologists. And the incidentals you mentioned, yes they will obviously suck at those, but I can't tell you how many times in my past few years of work I've seen an incidental that was missed and only picked up on a follow-up study. Radiologists are trained to search the whole image, but yes, you will definitely focus in on whatever the indication is and put extra effort into that part. It's just human nature, and you need to for the amount of volume you read. The incidentals are where AI will be much better than a human. It is able to algorithmically put 100% into every part of the image, without fatigue, to see where imaging abnormalities lie. Maybe it will be like a CAD system where it flags areas for a human to look at, but that makes it much easier for a direct care doctor to just answer their own question and use the AI to flag abnormals.

Yes, I did say neurologists are good at reading their areas of interest. The same is true for surgical subspecialists, although some are better than others. That said, you would be shocked to see how many ortho docs misinterpret bone tumors on radiographs. Radiology is way too vast for any radiologist to have subspecialist skill in all subspecialties, so it is ok that an ENT doc would read temporal bone CT better than a breast radiologist who hardly reads any neuro.

We use AI in my institution. It does a very good job of picking up PEs, large vessel occlusions in the head, brain aneurysms, cervical spine fractures, rib fractures, and head bleeds, and these are what we use it for. The accuracy is roughly 85%. But even with that, I still have to read the study. The liability is on me. I anticipate AI will augment radiologists, not replace us. If AI can do my job, then few jobs are safe.

Also I don't know where you work. If your attendings are missing important incidentals that frequently, then something is wrong. In my practice, the miss rate for important incidentals is <2%, and we read fast. When I read a routine CTA neck, most of my time is spent on the non-vascular portions of the study looking for these incidentals. If all I cared about were the vessels (including venous circulation), I would be done with a CTA neck in 1-2 mins, but it takes me 5-8 mins to read these depending on complexity.

Well, we will see how AI fares in radiology and medicine in general. I did my residency and fellowship at top research institutions, and the AI researchers there were doubtful that radiologists would be replaced by AI. But they were confident that it would augment our work. The main question is whether AI augmentation of radiologists would increase productivity per radiologist and decrease demand for labor. That remains to be seen.
 
  • Like
Reactions: 1 user
Yea, I gave it some chest x-rays from Radiopaedia and so far I have been very impressed:
View attachment 374545
The burned out chest radiologists are saved now from their mountain of daily ICU films. The UK NHS radiology departments are going to catch up on their weeks of backlogs any day now and become financially solvent. We really also should just stop showing any medicine interns CXR altogether because we know Bard is going to offer a better read.

I also tried photographs, equally impressed:
View attachment 374546




impressed now?

yes, some cases it totally missed on. but think of the improvement from GPT 3 --> GPT 4. night and day. now imagine GPT6v. already has demonstrated the ability to reference pt history and compare with priors...

i stand by my 5 year prediction. these AI systems will improve at an exponential rate. i think we as a specialty need to start preparing for massive efficiency gains, and the possibility of significant disruption of our workforce. i just don't see how this technology won't replace the bulk of the work we do in the next 5-15 years.
 
Last edited:
  • Like
Reactions: 1 user
There are a lot of hallucinations that we can already see in the paper. I also wonder how many tries it took for it to come up with the correct answers (in the cases where it did). If it's all done on the first try, then it is quite impressive, I must say.
 
Impressive indeed. I expect radiology to be less competitive next year.
 



impressed now?

yes, some cases it totally missed on. but think of the improvement from GPT 3 --> GPT 4. night and day. now imagine GPT6v. already has demonstrated the ability to reference pt history and compare with priors...

i stand by my 5 year prediction. these AI systems will improve at an exponential rate. i think we as a specialty need to start preparing for massive efficiency gains, and the possibility of significant disruption of our workforce. i just don't see how this technology won't replace the bulk of the work we do in the next 5-15 years.

I’m curious, you’re a rads resident right? What’s your plan if you’re convinced AI will make radiology a dead field within 15 years?
 



impressed now?

yes, some cases it totally missed on. but think of the improvement from GPT 3 --> GPT 4. night and day. now imagine GPT6v. already has demonstrated the ability to reference pt history and compare with priors...

i stand by my 5 year prediction. these AI systems will improve at an exponential rate. i think we as a specialty need to start preparing for massive efficiency gains, and the possibility of significant disruption of our workforce. i just don't see how this technology won't replace the bulk of the work we do in the next 5-15 years.

I'm done trying to convince my Radiology buddies of the changes that are coming. In my experience, 90% think they are irreplaceable on any timescale and that AI will never be able to do what they do, not even in 1000 years. This is largely because very few of them have any quantitative/math/engineering background and are unwilling to read past the headlines. We are now 10 years from the birth of contemporary computer vision research via the revolutionary performance of AlexNet at ImageNet 2012. This was what led me to start my AI education nearly 10 years ago, leading to multiple graduate degrees in CS/Engineering and a pivot to full-time AI research. Within 10 years we have a crude AGI in ChatGPT4x that by ChatGPT15 (and likely much sooner . . . IMO ChatGPT7) will make any computer vision task, including clinical Radiology, trivial. The 10-20% of Radiologists that remain with jobs in 10-15 years will be liaisons between the deep learning models and patients / other clinicians. This will still be a very important role but will require equal knowledge of computer vision, AI, ML, and other quantitative topics. I see the smaller number of future Radiologists having solid backgrounds in Math, EE, BME, and UI/UX.
None of this will be unique to Radiology. Many patients will choose AI avatar Psychiatrists in the future, eat at fully robotic Chipotles, and watch shows on Netflix fully written by ChatGPT. No job will be unchanged by the second industrial revolution. Plan accordingly.
 
  • Haha
Reactions: 1 user
I’m curious, you’re a rads resident right? What’s your plan if you’re convinced AI will make radiology a dead field within 15 years?

yes, nearing end of my training. plan:
- join pp group that owns equipment (eg scanners) and other physical assets, real estate etc
- save aggressively
- hope legal obstacles and regulatory barriers keep us afloat for however long that is

i honestly don't know if i could retrain in another field. i couldn't stomach another residency. would probably leave medicine altogether. quite frankly have no idea what else i would (or could) do. pretty unsettling.
 
I'm done trying to convince my Radiology buddies of the changes that are coming. In my experience, 90% think they are irreplaceable on any timescale and that AI will never be able to do what they do, not even in 1000 years. This is largely because very few of them have any quantitative/math/engineering background and are unwilling to read past the headlines. We are now 10 years from the birth of contemporary computer vision research via the revolutionary performance of AlexNet at ImageNet 2012. This was what led me to start my AI education nearly 10 years ago, leading to multiple graduate degrees in CS/Engineering and a pivot to full-time AI research. Within 10 years we have a crude AGI in ChatGPT4x that by ChatGPT15 (and likely much sooner . . . IMO ChatGPT7) will make any computer vision task, including clinical Radiology, trivial. The 10-20% of Radiologists that remain with jobs in 10-15 years will be liaisons between the deep learning models and patients / other clinicians. This will still be a very important role but will require equal knowledge of computer vision, AI, ML, and other quantitative topics. I see the smaller number of future Radiologists having solid backgrounds in Math, EE, BME, and UI/UX.
None of this will be unique to Radiology. Many patients will choose AI avatar Psychiatrists in the future, eat at fully robotic Chipotles, and watch shows on Netflix fully written by ChatGPT. No job will be unchanged by the second industrial revolution. Plan accordingly.
I think being so obsessed with trying to convince your rads buddies that AI will take their jobs is a bit of a self report. Sorry you regret not going into radiology 😴
 
  • Like
Reactions: 1 users
I'm done trying to convince my Radiology buddies of the changes that are coming.

Oh good, does this mean we’re going to stop hearing this type of nonsense from now on?
 
  • Like
Reactions: 1 user
Top