


impressed now?

yes, some cases it totally missed on. but think of the improvement from GPT 3 --> GPT 4. night and day. now imagine GPT6v. already has demonstrated the ability to reference pt history and compare with priors...

i stand by my 5 year prediction. these AI systems will improve at an exponential rate. i think we as a specialty need to start preparing for massive efficiency gains, and the possibility of significant disruption of our workforce. i just don't see how this technology won't replace the bulk of the work we do in the next 5-15 years.

No mention of contrast
Localizing the lesion to parietal lobe rather than temporal lobe
Making up the midline shift
The paradox of compressed ventricles and hydrocephalus
No measurements

I wouldn't accept this from a day 1 resident.

 
No mention of contrast
Localizing the lesion to parietal lobe rather than temporal lobe
Making up the midline shift
The paradox of compressed ventricles and hydrocephalus
No measurements

I wouldn't accept this from a day 1 resident.
So are you not worried about this technology for our career? I am a bit scared tbh
 
Rads is doomed. The future of the field is one attending signing off on AI generated reports after briefly reviewing them for major anomalies, increasing efficiency by 5-10x. Radiology is uniquely at risk due to the combination of easy PACS integration + massive amounts of labeled data.

1000 new residency grads per year will be excessive. Those few radiologists will make a killing, but junior rads will struggle. See rad onc / EM for examples of what oversupply does to a field.

None of us can predict when this will happen. Could be 8 years, could be 25. Radiology is the best field in medicine, and at the moment it is a great time to be a radiologist, but it is not worth the future risk.
 
I think it's going to take longer than people realize to actually integrate into practice even when the tech exists. An RCT showing AI > AI + Radiologist for every disease we know about is not a small task.
 
I think it's going to take longer than people realize to actually integrate into practice even when the tech exists. An RCT showing AI > AI + Radiologist for every disease we know about is not a small task.
You don't need an RCT. This is what everyone forgets. You just need FDA approval for an algorithm that a radiologist can use to increase their efficiency. This is already occurring (Aidoc PE detection, Rad AI, etc.).

AI will likely not independently read scans in our lifetime. But it doesn't have to be autonomous to obliterate the job market.
 
You don't need an RCT. This is what everyone forgets. You just need FDA approval for an algorithm that a radiologist can use to increase their efficiency. This is already occurring (Aidoc PE detection, Rad AI, etc.).

AI will likely not independently read scans in our lifetime. But it doesn't have to be autonomous to obliterate the job market.
I guess my question would be how much can AI really improve efficiency if it is only FDA approved to be used with a radiologist. If someone's ass is on the line don't they have to read the whole image anyway? Like if you have chest CT and AI notes a sus nodule and nothing else, would the radiologist really just sign off and hope for the best or would they look at the thyroid, heart, esophagus, liver etc like usual just to be sure?
 
Rads is doomed. The future of the field is one attending signing off on AI generated reports after briefly reviewing them for major anomalies, increasing efficiency by 5-10x. Radiology is uniquely at risk due to the combination of easy PACS integration + massive amounts of labeled data.

1000 new residency grads per year will be excessive. Those few radiologists will make a killing, but junior rads will struggle. See rad onc / EM for examples of what oversupply does to a field.

None of us can predict when this will happen. Could be 8 years, could be 25. Radiology is the best field in medicine, and at the moment it is a great time to be a radiologist, but it is not worth the future risk.
I don't trust senior residents enough to only briefly review a study for major anomalies. I'm not going to trust an AI that is known to hallucinate and miss stuff. It's going to be far less than 5-10x.
 
AI as an assistant can make radiologists more accurate, but it will DECREASE efficiency. I am 100% sure about it. I am using an AI software for PE, and it decreases my efficiency. Now imagine if I had software for every finding on a chest CT. It would take me twice as long to read a scan.

I am not sure how this AI works. The only thing I know is that these kinds of tasks are like running a marathon: the first 80% is the easiest and the last 20% is extremely hard or impossible. I can see AI going the same route. First they start with simple, easy tasks like PE, large vessel occlusion, intracranial hemorrhage, or 2D fracture detection with 85% accuracy. But once they get beyond these bread-and-butter tasks, I feel they will hit a wall, and it will take years and years to handle more complicated tasks or to improve accuracy from 85% to 100%.

Most radiologists are not 100% accurate in practice, but the assumption is that they have to be (similar to the rest of medicine). Nobody will accept AI once there is enough data showing it is only 85% accurate.

Last but not least: why does a neurology attending keep checking and posting on a radiology forum?
 
I don't trust senior residents enough to only briefly review a study for major anomalies. I'm not going to trust an AI that is known to hallucinate and miss stuff. It's going to be far less than 5-10x.
You don't have to be the one to trust it. There just have to be a few unscrupulous radiologists (maybe 1 out of every 10) that blind sign highly accurate AI generated reports for it to cause problems. At my institution we have multiple attendings that basically blind sign senior resident reports.

A great analogy here is midlevels. They clearly provide inferior care, but many dermatology attendings are more than happy to have them independently see patients and just sign off on their notes. Money talks.
 
You don't have to be the one to trust it. There just have to be a few unscrupulous radiologists (maybe 1 out of every 10) that blind sign highly accurate AI generated reports for it to cause problems. At my institution we have multiple attendings that basically blind sign senior resident reports.

A great analogy here is midlevels. They clearly provide inferior care, but many dermatology attendings are more than happy to have them independently see patients and just sign off on their notes. Money talks.

So what? Many surgery attendings also sit back and let their senior fellows do the surgery. And many medicine attendings barely see the patients and let their senior residents manage them with minimal input from the attending.

Also, your post rests on one assumption: that it will generate highly accurate reports. If it is accurate in only 85% of cases, then my experience tells me it is going to decrease my efficiency.

Midlevels are different. Even before seeing a patient, you know which ones are going to be complex and which are not. That is not the case for imaging studies. Dermatologists don't give their complex patients to midlevels; the midlevels do bread-and-butter dermatology. The challenge with radiology is that nobody knows which study is going to be easy before reading it.
 
[Attachment: IMG_2733.jpeg]


Ain’t nobody’s task getting replaced until the FDA says ok, and ain’t no how FDA says ok without a gorgeous front page NEJM phase III.

The path to and through phase III is littered with the million corpses of good sounding ideas, promising starts, and tearful “it was inevitable, just a matter of time” anecdotes.

Still waiting on that Phase I……. Any day now….. How many years of “any day now” has it been?

Wasn't there a Radiology paper published recently about chest plain film AI that performed below the median radiologist in AUC when tested prospectively under standardized, heterogeneous conditions? I think someone made a quip about "unafraid radiologists don't know math" or something.
 
Did anyone here actually read the paper, or is everyone going off the couple of screenshots they have taken? The hallucinations are horrible. It missed a super obvious distal radius fracture that a med student would see. It called an obvious lung mass, but placed it in the left lung when it was in the right. It also gave COMPLETELY MADE UP measurements for the lung mass in question. These are just a few of the medical imaging hallucinations; there were more related to random other images. This would never fly in practice. Not to mention we have seen over and over again that AI looks great in controlled environments and is terrible in real-world situations. I am sure it will improve, but this paper is not the end of the world in my opinion. As long as AI continues to hallucinate (a problem people don't even know is solvable), it will never replace a radiologist.
 
You don't have to be the one to trust it. There just have to be a few unscrupulous radiologists (maybe 1 out of every 10) that blind sign highly accurate AI generated reports for it to cause problems. At my institution we have multiple attendings that basically blind sign senior resident reports.

A great analogy here is midlevels. They clearly provide inferior care, but many dermatology attendings are more than happy to have them independently see patients and just sign off on their notes. Money talks.
Lmao, they will get sued and lose all their money; then no one will blind sign AI reports and no one will use it. Money talks.

If AI requires any supervision, it will probably not be efficient and will increase reading time. Unless you can bill for interpreting the AI's interpretation in addition to billing for reading the image, and the combined reimbursement is more than you'd make just reading more studies without having to check every little hallucination, AI makes no economic sense. Sorry.
 
Honestly, there are a million other problems with radiology these nerds can focus on to cope with their decision not to pursue it (decreasing reimbursement, volume, commoditization, no real way to offload labor to or exploit midlevels for profit, working harder than basically every other doctor in the hospital and still getting **** on). But they choose to hyperfocus on the one thing that is literally a bull****, made-up non-issue. I don't understand. Just choose a different specialty.
 
The worst-case scenario:

You will lose your job and become a primary care physician.

I bet that if we get to the point where radiology becomes obsolete, other medical fields will also be totally different than they are now. The algorithms will change the practice of medicine as a whole, if you believe they are accurate and reliable.
 
If DR is completely replaced by AI, say goodbye to all other imaging-reliant fields. Why have a neurologist/IM/FM/any other clinician when you could have a midlevel paid 1/3 to 1/4 as much, armed with AI, to diagnose and determine the next step in the workup for everything?
 
You don't need an RCT. This is what everyone forgets. You just need FDA approval for an algorithm that a radiologist can use to increase their efficiency. This is already occurring (Aidoc PE detection, Rad AI, etc.).

AI will likely not independently read scans in our lifetime. But it doesn't have to be autonomous to obliterate the job market.

1. When can we start billing for using AI?
2. How will a missed call by AI, followed by a lawsuit, be handled from a legal standpoint?
3. I am certainly not underestimating the technology; the long-term potential is limitless. However, in the meantime it would be nice to have an AI-driven dictation system do simple but time/energy-saving tasks such as real auto-editing (e.g., correcting double spaces, multiple periods, etc.), auto-populating the Impression in real time during dictation, contacting the referrer in real time for critical results via IM and documenting the communication, and auto-populating f/u recommendations such as Fleischner criteria after checking the patient's EMR to see if/how they apply. I could go on, but I think I've made my point.
4. Finally, AI will decimate the need for human labor in large chunks across the board. Sorta like the automation of the auto industry, but on steroids. Unsure how this plays out from a social/political standpoint; the majority of workers will be unneeded.
 
1. When can we start billing for using AI?
2. How will a missed call by AI, followed by a lawsuit, be handled from a legal standpoint?
3. I am certainly not underestimating the technology; the long-term potential is limitless. However, in the meantime it would be nice to have an AI-driven dictation system do simple but time/energy-saving tasks such as real auto-editing (e.g., correcting double spaces, multiple periods, etc.), auto-populating the Impression in real time during dictation, contacting the referrer in real time for critical results via IM and documenting the communication, and auto-populating f/u recommendations such as Fleischner criteria after checking the patient's EMR to see if/how they apply. I could go on, but I think I've made my point.
4. Finally, AI will decimate the need for human labor in large chunks across the board. Sorta like the automation of the auto industry, but on steroids. Unsure how this plays out from a social/political standpoint; the majority of workers will be unneeded.
Again using the midlevel example:
1- You don't need to bill for using AI, it just needs to make you more efficient (see 1 EM doctor overseeing 5 NPs). One radiologist doing the work of 5
2- The radiologist that blind signed the report will be sued (see EM doctor signing off on NP notes without seeing the pt). Assuming highly sensitive and specific future algorithms, some (maybe 1 in 5) radiologists will be willing to accept this risk to increase or maintain salaries in the face of endless CMS cuts.

3 and 4 are reasonable



Not related to your post, but just adding this here as an edit:
I think it's important for trainees to consider things from a probabilistic / risk management perspective. No one in this thread (including myself), can confidently say what will happen to the field of radiology in the future. Instead, you need to think about it from a probability standpoint. What is the chance that AI will have a serious and catastrophic impact on the field? 5%? 40%? 80%? You need to research extensively, hear different perspectives (like those above), and determine that percentage yourself. And then you need to determine what level of risk you are personally willing to take in order to enter the field. Is the risk worth it for you
 
Rads is doomed. The future of the field is one attending signing off on AI generated reports after briefly reviewing them for major anomalies, increasing efficiency by 5-10x. Radiology is uniquely at risk due to the combination of easy PACS integration + massive amounts of labeled data.

1000 new residency grads per year will be excessive. Those few radiologists will make a killing, but junior rads will struggle. See rad onc / EM for examples of what oversupply does to a field.

None of us can predict when this will happen. Could be 8 years, could be 25. Radiology is the best field in medicine, and at the moment it is a great time to be a radiologist, but it is not worth the future risk.
You speak very strongly about your perception of the future of radiology. However based on your post history, it appears that you are a medical student considering applying to heme/onc? Just curious why you are posting in the radiology forum so much and what your qualifications are to be giving advice about radiology and/or AI?
 
Again using the midlevel example:
1- You don't need to bill for using AI, it just needs to make you more efficient (see 1 EM doctor overseeing 5 NPs). One radiologist doing the work of 5
2- The radiologist that blind signed the report will be sued (see EM doctor signing off on NP notes without seeing the pt). Assuming highly sensitive and specific future algorithms, some (maybe 1 in 5) radiologists will be willing to accept this risk to increase or maintain salaries in the face of endless CMS cuts.

3 and 4 are reasonable



Not related to your post, but just adding this here as an edit:
I think it's important for trainees to consider things from a probabilistic / risk management perspective. No one in this thread (including myself), can confidently say what will happen to the field of radiology in the future. Instead, you need to think about it from a probability standpoint. What is the chance that AI will have a serious and catastrophic impact on the field? 5%? 40%? 80%? You need to research extensively, hear different perspectives (like those above), and determine that percentage yourself. And then you need to determine what level of risk you are personally willing to take in order to enter the field. Is the risk worth it for you
Your midlevel example actually supports why this won't happen.

We don't have midlevels now because imaging is recorded and easy to review. If images were immediately deleted after final sign by a rad, we would have midlevels everywhere too. Medmal risk in radiology is on a different level than other specialties when it comes to attempting to sign off blindly.
 
Again using the midlevel example:
1- You don't need to bill for using AI, it just needs to make you more efficient (see 1 EM doctor overseeing 5 NPs). One radiologist doing the work of 5
2- The radiologist that blind signed the report will be sued (see EM doctor signing off on NP notes without seeing the pt). Assuming highly sensitive and specific future algorithms, some (maybe 1 in 5) radiologists will be willing to accept this risk to increase or maintain salaries in the face of endless CMS cuts.

3 and 4 are reasonable



Not related to your post, but just adding this here as an edit:
I think it's important for trainees to consider things from a probabilistic / risk management perspective. No one in this thread (including myself), can confidently say what will happen to the field of radiology in the future. Instead, you need to think about it from a probability standpoint. What is the chance that AI will have a serious and catastrophic impact on the field? 5%? 40%? 80%? You need to research extensively, hear different perspectives (like those above), and determine that percentage yourself. And then you need to determine what level of risk you are personally willing to take in order to enter the field. Is the risk worth it for you

Someone is making money off the mid-levels, since they get reimbursed at 80% and, to the best of my knowledge, they are not making 80% of the income of EM/derm etc. physicians.

Get your 2nd point. There are plenty of rads that sold out to PE. Pretty sure other rads would take the risk.
 
I think it's important for trainees to consider things from a probabilistic / risk management perspective. No one in this thread (including myself), can confidently say what will happen to the field of radiology in the future. Instead, you need to think about it from a probability standpoint. What is the chance that AI will have a serious and catastrophic impact on the field? 5%? 40%? 80%? You need to research extensively, hear different perspectives (like those above), and determine that percentage yourself. And then you need to determine what level of risk you are personally willing to take in order to enter the field. Is the risk worth it for you

This is a key insight, and I definitely agree that you have to look at probabilities of each possible outcome. The truth is that we cannot say with 100% certainty when or if AI will replace our jobs as radiologists, or dramatically reduce the job market size. The other tricky point is that technology often scales exponentially, so it's hard to estimate that future change.

My mental model is:

20-30% chance of significant workforce reductions within 5 years. This would be a worst-case scenario for us: exponential growth continues each year, e.g. you could argue that GPT-4 is twice as good as ChatGPT. Meaning in 5 years, the AI vision models are 32X better than they are today. This seems to be roughly Sam Altman's position on AI's pace of growth. We would still have radiologists to sign off on AI-generated reports, but would need fewer radiologists to do so if the reports are highly accurate.

20-30% chance of significant workforce reductions within 10 years. If the technology continues to scale, with some breaks. And political lobbying for licensing / regulation hold back AI implementation in hospitals.

40-60% chance of taking > 10 years. So many unknown things can happen in 10 years, so it could be decades or never.
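For what it's worth, the "32X in 5 years" figure above is just compounding an assumed yearly improvement factor; here is a minimal sketch, where the yearly factor itself is the contested assumption rather than a measured number:

```python
# Minimal sketch of the compounding assumption behind the "32X in 5 years" figure.
# The yearly improvement factor is an assumption, not a measurement.

def capability_multiplier(yearly_factor: float, years: int) -> float:
    """Relative capability after `years` if it improves by `yearly_factor` each year."""
    return yearly_factor ** years

for factor in (2.0, 1.5, 1.2):  # doubling vs. progressively slower growth
    five, ten = capability_multiplier(factor, 5), capability_multiplier(factor, 10)
    print(f"{factor:.1f}x per year -> {five:.1f}X in 5 years, {ten:.1f}X in 10 years")
# 2.0x/year -> 32X in 5 years; at 1.2x/year it is only ~2.5X, which is why the
# assumed growth rate dominates every one of these scenarios.
```

If the doubling assumption is wrong, the whole 5-year scenario collapses, which is really the crux of the disagreement in this thread.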


Of course, I could be completely wrong. 5+ years ago when I was deciding on radiology, everyone told me not to go into radiology because AI would replace my job. Turns out, I'm extremely happy to have done rads! If you're very aggressive with saving money, you can become financially independent very fast in this field.
 
This is a key insight, and I definitely agree that you have to look at probabilities of each possible outcome. The truth is that we cannot say with 100% certainty when or if AI will replace our jobs as radiologists, or dramatically reduce the job market size. The other tricky point is that technology often scales exponentially, so it's hard to estimate that future change.

My mental model is:

20-30% chance of significant workforce reductions within 5 years. This would be a worst-case scenario for us: exponential growth continues each year, e.g. you could argue that GPT-4 is twice as good as ChatGPT. Meaning in 5 years, the AI vision models are 32X better than they are today. This seems to be roughly Sam Altman's position on AI's pace of growth. We would still have radiologists to sign off on AI-generated reports, but would need fewer radiologists to do so if the reports are highly accurate.

20-30% chance of significant workforce reductions within 10 years. If the technology continues to scale, with some breaks. And political lobbying for licensing / regulation hold back AI implementation in hospitals.

40-60% chance of taking > 10 years. So many unknown things can happen in 10 years, so it could be decades or never.


Of course, I could be completely wrong. 5+ years ago when I was deciding on radiology, everyone told me not to go into radiology because AI would replace my job. Turns out, I'm extremely happy to have done rads! If you're very aggressive with saving money, you can become financially independent very fast in this field.
20-30% chance there are workforce reductions in 10 years? What evidence do you have for this, or are you just pulling numbers out of ur ass? I mean seriously, how can an adult, presumably a medically trained doctor and radiologist, just make dumb statements like this with literally no evidence this will happen? I have a bridge I wanna sell you btw.
 
I haven't read the full thread, but last week on 60 Minutes one of the Godfathers of AI specifically called out Radiologists and said AI is beginning to do their job. I was shocked that he said this. He clearly doesn't understand the scope of Radiology. As some people have mentioned in the posts I've skimmed, someone is still going to have to sign off on the reports. I don't see anyone blindly signing off on reports, so they are still going to have to read images. I just don't see AI having a major impact on replacing Radiologists.
 

[Attachment: hand.jpg]
Even NP's can interpret better than this.
 
Another garbage interpretation. Also GPT4 today.
 

[Attachment: cxr.jpg]
I think GPT-4 not being good at interpreting images is expected, given it hasn't been trained on radiology data, but now there are studies such as these:

In my opinion, this new class of transformer models has just moved up the timeline for when AI will be generally good at bread-and-butter pathologies in specific modalities. I.e., I think transformer models will be more general than the previous generation of models, and the argument about needing a different algorithm for each pathology (or each small group of pathologies) will go out the window.

(I’m also a radiology resident btw).
 
People keep talking about AI reading normal studies.
A few facts:

1- Truly normal studies are not that common, especially for cross-sectional imaging. Mammo and CXR are the exceptions.

2- The most challenging task in radiology is calling something normal versus abnormal. Once the pathology is there, a lot of the time it is easier to describe it. Example: a pancreatic cancer that has invaded 10 different structures is a lot easier to call than a subtle hypodensity in the head of the pancreas with mild CBD dilation. A lot of PAs can point to a 4 cm lung cancer on a chest CT, but a lot of senior residents miss an ill-defined mass in the hilar area that may look like normal hilar structures.

3- So far AI has made me less efficient. Even if it makes us more efficient, I doubt it happens overnight. PACS made us more efficient, but we got busier, not unemployed.
 
This is a key insight, and I definitely agree that you have to look at probabilities of each possible outcome. The truth is that we cannot say with 100% certainty when or if AI will replace our jobs as radiologists, or dramatically reduce the job market size. The other tricky point is that technology often scales exponentially, so it's hard to estimate that future change.

My mental model is:

20-30% chance of significant workforce reductions within 5 years. This would be a worst-case scenario for us: exponential growth continues each year, e.g. you could argue that GPT-4 is twice as good as ChatGPT. Meaning in 5 years, the AI vision models are 32X better than they are today. This seems to be roughly Sam Altman's position on AI's pace of growth. We would still have radiologists to sign off on AI-generated reports, but would need fewer radiologists to do so if the reports are highly accurate.

20-30% chance of significant workforce reductions within 10 years. If the technology continues to scale, with some breaks. And political lobbying for licensing / regulation hold back AI implementation in hospitals.

40-60% chance of taking > 10 years. So many unknown things can happen in 10 years, so it could be decades or never.


Of course, I could be completely wrong. 5+ years ago when I was deciding on radiology, everyone told me not to go into radiology because AI would replace my job. Turns out, I'm extremely happy to have done rads! If you're very aggressive with saving money, you can become financially independent very fast in this field.

So how exactly is it that you think new technologies become standard of care?
 
So how exactly is it that you think new technologies become standard of care?
They become standard of care in reality when the FDA or EU approves them for clinical use, and then corporations or radiology groups use them in practice if they increase profit.

Oxipit's ChestLink software already has European Union regulatory approval to provide fully-autonomous reports on normal chest xrays. No radiologist involved.

Viz.ai already notifies the stroke team on its own, the radiologist provides the final report with any incidentals.


Obviously neither of these companies or the dozens of others working in the field are going to replace us tomorrow. But if they get more approval for fully-autonomous final reads, and fully-generated reports that you just hit "sign", that could decrease the number of radiologists needed in the field.

As I said above, I still think we'll have human beings signing studies for the foreseeable future. But there could be a significant decrease in the number of people needed if AI continues to scale exponentially. The counter-point is that maybe regulatory capture, licensing, malpractice, the aging population, increase in imaging utilization, and slowing in pace of AI development all will work together to keep our jobs intact. That's why I'm not 100% certain about either outcome.
 
To answer some other questions above, ChatGPT and GPT-4 were not really designed for medical imaging. Other AI models are specifically designed for medical image analysis, and perform much better.

Obviously, GPT-4's image interpretation is laughably bad on some of the images. But it's not the state of the art in the field.
 
They become standard of care in reality when the FDA or EU approves them for clinical use, and then corporations or radiology groups use them in practice if they increase profit.

Oxipit's ChestLink software already has European Union regulatory approval to provide fully-autonomous reports on normal chest xrays. No radiologist involved.

Viz.ai already notifies the stroke team on its own, the radiologist provides the final report with any incidentals.


Obviously neither of these companies or the dozens of others working in the field are going to replace us tomorrow. But if they get more approval for fully-autonomous final reads, and fully-generated reports that you just hit "sign", that could decrease the number of radiologists needed in the field.

As I said above, I still think we'll have human beings signing studies for the foreseeable future. But there could be a significant decrease in the number of people needed if AI continues to scale exponentially. The counter-point is that maybe regulatory capture, licensing, malpractice, the aging population, increase in imaging utilization, and slowing in pace of AI development all will work together to keep our jobs intact. That's why I'm not 100% certain about either outcome.


During my transitional intern year I rotated on neurology for a month, and they had this service. They were constantly annoyed by the false positives. But I think an early alert, even if it's a false positive, is an appropriate use of this sort of technology.
 
The key to Viz.ai and its major competitor, RAPID, is they are care coordination apps more than they are diagnostic tools. And it's not about alerting the "stroke team", it's about alerting the neurointerventionalist in particular. The stroke neurologist was already evaluating the patient in the ED/scanner (or if not a comprehensive stroke center, it'd be the ED physician taking primary responsibility and the tele-stroke neurologist is calling it in), and the radiologist was always at the ready as soon as images came over PACS. What used to happen was the neurologist would look at the images, maybe have the radiologist on the phone looking at the images too, and then decide to call the neurointerventionalist. What this app does is sound an alarm for the neurointerventionalist as soon as the images are uploaded and it has a hit on its algorithm that has 90% sensitivity and 90% specificity for large vessel occlusions. You skip the neurologist scrolling through images, calling their senior or the radiologist, radiologist pulling up images, discussing, etc. Now as soon as the images are up, the neurointerventionalist scrolls through the images on the phone, texts everyone through this app, and gets their butt out of bed ASAP. With only 90/90 sens/spec, you'll get a bunch of false positives, but it's been decided that it's worth the annoyance to the interventionalist so they can get their act together faster to save about 10 minutes on the door to groin time.
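To put numbers on the false-positive point: with 90/90 sensitivity and specificity, the positive predictive value depends heavily on how many stroke codes actually harbor an LVO. A quick Bayes sketch, using assumed, illustrative prevalence figures rather than anything from a real dataset:

```python
# PPV of a 90% sensitivity / 90% specificity LVO alert at assumed prevalences.
# The prevalence values are illustrative assumptions, not measured figures.

def ppv(sens: float, spec: float, prevalence: float) -> float:
    true_pos = sens * prevalence
    false_pos = (1.0 - spec) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.30, 0.15, 0.05):
    print(f"LVO prevalence {prev:.0%}: PPV = {ppv(0.90, 0.90, prev):.0%}")
# 30% prevalence -> PPV ~79% (about 1 in 5 alerts is a false alarm)
# 15% prevalence -> PPV ~61%
#  5% prevalence -> PPV ~32% (about 2 in 3 alerts are false alarms)
```

Which is consistent with the interventionalists tolerating the noise only because the downstream time savings are judged to be worth it.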

This model does not apply to any other diagnostic imaging scenario out there, because nowhere else is a 10-minute save on a few-times-a-week occurrence worth spending tens of thousands of dollars a year on an app subscription. Nothing is as time-sensitive as stroke intervention (except cardiac cath, but MI diagnosis is not imaging-dependent; it is EKG/troponin-dependent).

This model is NOT going to be the way that AI bypasses radiologists in a significant enough volume to displace jobs.
 
The key to Viz.ai and its major competitor, RAPID, is they are care coordination apps more than they are diagnostic tools. And it's not about alerting the "stroke team", it's about alerting the neurointerventionalist in particular. The stroke neurologist was already evaluating the patient in the ED/scanner (or if not a comprehensive stroke center, it'd be the ED physician taking primary responsibility and the tele-stroke neurologist is calling it in), and the radiologist was always at the ready as soon as images came over PACS. What used to happen was the neurologist would look at the images, maybe have the radiologist on the phone looking at the images too, and then decide to call the neurointerventionalist. What this app does is sound an alarm for the neurointerventionalist as soon as the images are uploaded and it has a hit on its algorithm that has 90% sensitivity and 90% specificity for large vessel occlusions. You skip the neurologist scrolling through images, calling their senior or the radiologist, radiologist pulling up images, discussing, etc. Now as soon as the images are up, the neurointerventionalist scrolls through the images on the phone, texts everyone through this app, and gets their butt out of bed ASAP. With only 90/90 sens/spec, you'll get a bunch of false positives, but it's been decided that it's worth the annoyance to the interventionalist so they can get their act together faster to save about 10 minutes on the door to groin time.

This model does not apply to any other diagnostic imaging scenario out there, because nowhere else is a 10-minute save on a few-times-a-week occurrence worth spending tens of thousands of dollars a year on an app subscription. Nothing is as time-sensitive as stroke intervention (except cardiac cath, but MI diagnosis is not imaging-dependent; it is EKG/troponin-dependent).

This model is NOT going to be the way that AI bypasses radiologists in a significant enough volume to displace jobs.
RAPID is also very dependent on human CT techs knowing what they are doing and the human patients not moving.

Would be great to have AI actually triage stroke codes, since a large chunk of them turn out negative, even on next-day MRI/MRA. A very small % are +LVO amenable to intervention where I practice.
 
They become standard of care in reality when the FDA or EU approves them for clinical use, and then corporations or radiology groups use them in practice if they increase profit.

Oxipit's ChestLink software already has European Union regulatory approval to provide fully-autonomous reports on normal chest xrays. No radiologist involved.

Viz.ai already notifies the stroke team on its own, the radiologist provides the final report with any incidentals.


Obviously neither of these companies or the dozens of others working in the field are going to replace us tomorrow. But if they get more approval for fully-autonomous final reads, and fully-generated reports that you just hit "sign", that could decrease the number of radiologists needed in the field.

As I said above, I still think we'll have human beings signing studies for the foreseeable future. But there could be a significant decrease in the number of people needed if AI continues to scale exponentially. The counter-point is that maybe regulatory capture, licensing, malpractice, the aging population, increase in imaging utilization, and slowing in pace of AI development all will work together to keep our jobs intact. That's why I'm not 100% certain about either outcome.

Reason I keep harping on clinical trials: the FDA won't approve the large majority of autonomous-read AI without a clinical trial demonstrating efficacy. Hurdle to FDA approval > hurdle to CE approval, by a significant margin.

Btw, this is the same Oxipit AI that, in a prospective study published in Radiology this month, underperformed radiologists reading CXRs. There's a reason prospective trials matter.

Moreover, algorithms won’t continue to scale exponentially.
 
Reason I keep harping on clinical trials: the FDA won't approve the large majority of autonomous-read AI without a clinical trial demonstrating efficacy. Hurdle to FDA approval > hurdle to CE approval, by a significant margin.

Btw, this is the same Oxipit AI that, in a prospective study published in Radiology this month, underperformed radiologists reading CXRs. There's a reason prospective trials matter.

Moreover, algorithms won’t continue to scale exponentially.

I sincerely hope you're right about not scaling exponentially, if nothing else because the societal ramifications would be insane if the technology keeps progressing that fast.

Another factor working against AI companies is the Federal Reserve raising interest rates, which makes it harder for tech companies to raise money.

I think the answer lies somewhere between "business as usual" and the "AGI is imminent" line that some of the tech maxis are espousing.
 
One other important thing to consider:
People keep talking about algorithms growing exponentially. I think it is the exact opposite. From what I have seen, in medical imaging they have picked the easy tasks first. Once it gets to more complex tasks, the growth will be a lot slower.
 
The idea that technology improves exponentially is a myth pushed for marketing purposes by big tech. Certain kinds of technologies, maybe. But your toaster oven is not exponentially more advanced than it was 100 years ago. Your car fundamentally does the same thing it did 100 years ago. Can you access a million useless websites in the palm of your hand and talk to people on the other side of the world? Sure. But that isn't necessarily applicable to AI. The algorithms they are using have not fundamentally changed that much since the idea of modeling neurons was first thought of in the 1940s. They used "AI" to automate sperm counting in the 80s, yet they still use lab techs for it today. People say things like we just need more training data, but I think it's more likely that the improvement of any of these models is logarithmic, not exponential, and after a certain amount of training data, more data won't provide any significant improvement.
 
A few thoughts here, looking at all the comments, I am a fourth year attending general radiologist:

1) I want to see these "unscrupulous" radiologists that will blind sign cases. People always say that they exist but I have not found them. Most rads are careful to sign anything a resident has touched. Lower year residents perform far more accurately than any AI model, and senior residents are close to attending miss rates in large studies. And yet, most rads look carefully through every study dictated by ANY resident, including senior residents. The fear of missing something and getting sued is very real. Even a highly accurate AI model is not going to be 100% accurate, not even close. If it's your name on the line, you want to look at the images, that really cannot be negotiated for 99% of radiologists. It's not something that anyone who is not a radiologist will understand, everyone wants to "play" at radiology but no one wants to hit the final sign. The final sign button is where all the medicolegal risk is. If you don't do it, you simply cannot understand it.

2) Radiologists do a lot more than just look at images. Even if AI can reduce image review time by 80% (which I doubt), there is still reporting time, communication of findings, looking at priors/prior reports, looking through the EMR, and for on-site rads the various on-site tasks (procedures, protocol questions, injections, etc). So image review is only maybe 40% of the day's workflow, and the 80% reduction would be a 32% reduction in the overall work that needs to be done (a quick sketch of that arithmetic is below, after point 3). Much less than 1 rad replacing 10 rads. Yes, AI could expedite some of the other tasks, but I doubt it could do so significantly, and frankly the AI companies can't be bothered with "mundane" software like a working dictation package. But if there were a highly accurate dictation software that was easy to use, I would say it would probably increase efficiency by at least 30%. A well-integrated EMR, probably 10%. An assistant to call findings, another 10%. Good PACS, another 20%. There are massive efficiency increases out there in radiology with very little uptake. AI is going 100% for the image review aspect, which tbh is probably the last to be replaced (radiologists will always be hesitant to sign without looking at the images).

3) Implementation takes time. See above. Garbage PACS and EMRs are still used by a number of radiology departments around the country. Philips iSite is the most commonly used PACS nationwide. If everyone using iSite switched to Sectra or McKesson tomorrow, we would easily get a 20% efficiency increase. Why doesn't this happen? Good PACS systems are expensive. The software companies know how much efficiency they provide, and they price it accordingly. A working AI software will do exactly the same thing. It will be extremely expensive, and there will be limited uptake in academic and government departments. If it can make a department of 10 radiologists function the same with 5 (again, I highly doubt it will be that effective, but let's pretend), you can bet it will cost about the same as 4 radiologists. The software companies will know quite well how much efficiency they provide. Efficiency-based private practices will of course jump on board.
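Here is the quick arithmetic promised in point 2; it is basically an Amdahl's-law calculation, and the 40%/80% figures are my rough estimates from above, not measured data:

```python
# Amdahl's-law-style sketch: overall gain when AI only speeds up image review.
# The fractions are rough estimates from the post above, not measurements.

def fraction_of_day_saved(image_fraction: float, image_time_cut: float) -> float:
    """Share of the whole workday saved if image review is `image_fraction` of the
    work and AI cuts image-review time by `image_time_cut`."""
    return image_fraction * image_time_cut

def effective_throughput(image_fraction: float, image_time_cut: float) -> float:
    return 1.0 / (1.0 - fraction_of_day_saved(image_fraction, image_time_cut))

saved = fraction_of_day_saved(0.40, 0.80)    # 0.32 -> the 32% above
speedup = effective_throughput(0.40, 0.80)   # ~1.47x
print(f"{saved:.0%} of the day saved, ~{speedup:.2f}x throughput")
# Nowhere near "one rad doing the work of 5-10", even with a very generous 80% cut.
```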
 
Radiologists' pain points:
-PACS crashing, slow (for the record I have Philips' other PACS, Vue, which is also frustrating, crashing at least once in the middle of the day)
-Getting a helpful history
-Hard to search and poorly integrated EMR
-Dictation inaccurate
-Hanging images in PACS
-Can't find the series you want because they're named something crazy long or nonsense
-Aligning images in PACS and making measurements
-Sort the worklist by the appropriate urgency / patient status
-Motion degradation and other artifacts causing poor image quality
-Finding priors in outside imaging systems not integrated with PACS
-Getting a hold of the treating clinician to relay a critical finding
-Getting interrupted for simple questions

Not radiologists' pain points:
-Interpreting the freaking image
 
The idea that technology improves exponentially is a myth pushed for marketing purposes by big tech. Certain kind of technologies maybe. But your toaster oven is not exponentially more advanced than it was 100 years ago. Your car fundamentally does the same thing it did 100 years ago. Can you access a million useless websites at the palm of your hand and talk to people on the other side of the world, sure. But that isn't necessarily applicable to AI. The algorithms they are using have not fundamentally changed that much since the idea of modeling neurons was first thought of in the 1940s. They used "AI" to automate sperm counting in the 80s, yet they still use lab techs for it today. People say things like we just need more training data, but I think its more likely that improvement of any of these models is logarithmic, not exponential and after a certain amount of training data, more data wont provide any significant improvement.
As an M4 applying to rads, I think a lot of the AI hype comes from hate. From low-paying specialties that deal with endless bs to the lay public who looks at rads salaries and gets jealous. All the AI experts I know have applauded me for applying to radiology, and all the radiologists I have interacted with have a very positive outlook on the field. AI will only make radiology better.
 
Radiologists' pain points:
-PACS crashing, slow (for the record I have Philips' other PACS, Vue, which is also frustrating, crashing at least once in the middle of the day)
-Getting a helpful history
-Hard to search and poorly integrated EMR
-Dictation inaccurate
-Hanging images in PACS
-Aligning images in PACS and making measurements
-Sort the worklist by the appropriate urgency / patient status
-Motion degradation and other artifacts causing poor image quality
-Finding priors in outside imaging systems not integrated with PACS
-Getting a hold of the treating clinician to relay a critical finding
-Getting interrupted for simple questions

Not radiologists' pain points:
-Interpreting the freaking image
Oh god yes, this. People don't understand how 5 seconds x 50 instances x 15 pain-points per day per application add up (that's over an hour a day, per application).
 
Radiologists' pain points:
-PACS crashing, slow (for the record I have Philips' other PACS, Vue, which is also frustrating, crashing at least once in the middle of the day)
-Getting a helpful history
-Hard to search and poorly integrated EMR
-Dictation inaccurate
-Hanging images in PACS
-Aligning images in PACS and making measurements
-Sort the worklist by the appropriate urgency / patient status
-Motion degradation and other artifacts causing poor image quality
-Finding priors in outside imaging systems not integrated with PACS
-Getting a hold of the treating clinician to relay a critical finding
-Getting interrupted for simple questions

Not radiologists' pain points:
-Interpreting the freaking image

All PACS crash. VuePACS is decent but still inferior to many such as Sectra and Visage. Sectra is the best PACS, but Visage is not far behind. If Visage allowed the user to customize layouts as in Sectra, I would pick it over the latter.
 
Just something that crystallized after I made my last post. I kind of alluded to it briefly, but it became clearer afterwards: I have realized that the total efficiency gains from AI are basically capped at the total gains that an attending radiologist gets from a senior resident.

As I mentioned before, published miss rates are similar for senior residents and attendings, so there is really no accuracy issue. This is pretty much the hard cap on any AI that relies on a radiologist's supervision or sign-off: it can only be as accurate as that radiologist.

Let’s think about what a senior resident does:

-interprets the images nearly as accurately as an attending (what AI is trying to do)

- creates a report based on the interpretation and relevant findings. Currently this is not a priority for AI companies and I have seen very little development on this, but it is presumed to happen with the whole AI replacing radiologists argument.

- harvests relevant info from the EMR and applies it to their interpretation and the history section of the report. AI struggles with this desperately.

-calls clinicians about critical findings (AI cannot do this)

-triages cases to be read by the attending, based on their interpretation and how time-sensitive the findings are. Actually, AI can do this very well.

With all the things a senior resident does, they function from the perspective of a rads attending, as an advanced form of AI or biological intelligence (call it BI). The best AI can hope to be is a BI, and they are way behind on most tasks that a BI can do, other than interpretation.

So how do BIs affect rad attendings efficiency and workflow in the real world? A bit, maybe 20-30% increase in efficiency when I am on with a good senior resident. Probably about the same range for other attendings. Some get slowed down by it actually. Certainly it’s never 10x the volume you can produce with a senior res compared with solo. This is why I am so skeptical of the 1 rad replacing 10 argument. It just doesn’t get what rads do on a day to day basis, other than looking at images directly.
 
Just something that crystallized after I made my last post. I kind of alluded to it briefly, but it became clearer afterwards: I have realized that the total efficiency gains from AI are basically capped at the total gains that an attending radiologist gets from a senior resident.

As I mentioned before, published miss rates are similar for senior residents and attendings, so there is really no accuracy issue. This is pretty much the hard cap on any AI that relies on a radiologist's supervision or sign-off: it can only be as accurate as that radiologist.

Let’s think about what a senior resident does:

-interprets the images nearly as accurately as an attending (what AI is trying to do)

- creates a report based on the interpretation and relevant findings. Currently this is not a priority for AI companies and I have seen very little development on this, but it is presumed to happen with the whole AI replacing radiologists argument.

- harvests relevant info from the EMR and applies it to their interpretation and the history section of the report. AI struggles with this desperately.

-calls clinicians about critical findings (AI cannot do this)

-triages cases to be read by the attending, based on their interpretation and how time-sensitive the findings are. Actually, AI can do this very well.

With all the things a senior resident does, they function from the perspective of a rads attending, as an advanced form of AI or biological intelligence (call it BI). The best AI can hope to be is a BI, and they are way behind on most tasks that a BI can do, other than interpretation.

So how do BIs affect rad attendings efficiency and workflow in the real world? A bit, maybe 20-30% increase in efficiency when I am on with a good senior resident. Probably about the same range for other attendings. Some get slowed down by it actually. Certainly it’s never 10x the volume you can produce with a senior res compared with solo. This is why I am so skeptical of the 1 rad replacing 10 argument. It just doesn’t get what rads do on a day to day basis, other than looking at images directly.
Good points, but what if we assume that AI is much, much faster than BI at the interpretation aspect?
 