Updated thoughts on AI/ML


Naijaba
Full Member · 15+ Year Member
Joined: Apr 2, 2007 · Messages: 1,060 · Reaction score: 118
I figured I'd give a little insight into my experience with what's happening in AI/ML and radiology. There's a great opinion piece from the New York Times that came out today. It echoes my thoughts and prompted me to write this post.

The gestalt of the article is that deep learning has enabled progress on several once-thought-to-be-impossible challenges, but with the same essential limitation as machine learning models of old.

Let me give a real-world example from speech recognition.

Speech recognition plateaued in performance between 1990 and 2010. Despite computer speeds doubling along Moore's Law, speech recognition models would still make the same, infuriating mistakes regardless of processing power. If you've ever used PowerScribe, you know what I'm talking about. On a daily basis, "Low lung volumes" is misinterpreted as "The lung volumes." PowerScribe, Nuance's flagship product, was/is the best in the medical dictation industry, but it has long relied on older machine learning models.

Nuance is, of course, updating their system to use deep learning. Deep learning models broke plateaus in nearly every field, including speech recognition. Try going to Google.com and clicking on the microphone. Google's voice recognition is significantly better than Nuance's, even for many complex medical terms!

The problem is, both Nuance and Google's systems are still just for voice transcription. They don't make pizza, or fly helicopters, or distinguish images of cats from dogs. The models do one thing - voice to text, and that's it. Those are obviously contrived examples, but what if you wanted the model to do something tangentially related to transcription? Like, transcribing Spanish? Or, translating English to Spanish? Or, just recognizing a word like "ginormous" that it hasn't been trained on? Models can't make inferences outside their learning task.

Herein lies the rub. Each deep learning model is tuned to a specific task. Many achieve super-human performance on that given task. This is not without utility or clinical significance; a pneumothorax detector that could reliably find basilar pneumos on semi-upright films would be very useful, and likely save a few chest tubes from being removed prematurely.

I could give more examples, but I started writing this rather late at night. My thoughts on AI are more conservative than in the past. I think we will see some tremendous algorithms helping radiologists with fairly arduous or time-consuming tasks, but the problem of generalizability is daunting. Deep learning is rooted in calculus, linear algebra, and probability theory, and these fields belie "intelligence." Neural networks trained using GPUs are simply more advanced models that are ultimately limited by the structure and parameters of their network and the generalizability of the training data.
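
To make the "rooted in calculus, linear algebra and probability theory" point concrete, here is a minimal, hypothetical sketch (plain NumPy, not any vendor's actual model) of what a tiny two-layer network computes on a flattened image patch: matrix multiplies, added biases, and fixed nonlinearities. Everything a trained model "knows" about its one narrow task lives in the numbers inside those weight arrays.

```python
import numpy as np

# Toy illustration only: a two-layer classifier reduced to its linear algebra.
# The model's "knowledge" is nothing more than the numbers in W1, b1, W2, b2.
rng = np.random.default_rng(0)

x = rng.standard_normal(256)                      # stand-in for a flattened image patch
W1, b1 = rng.standard_normal((128, 256)), np.zeros(128)
W2, b2 = rng.standard_normal((2, 128)), np.zeros(2)

h = np.maximum(0.0, W1 @ x + b1)                  # matrix multiply + bias, then ReLU nonlinearity
logits = W2 @ h + b2                              # another matrix multiply + bias
probs = np.exp(logits - logits.max())             # softmax: probability theory, not "understanding"
probs /= probs.sum()

print(probs)  # e.g., [P(no finding), P(finding)] for the single task it was trained on
```

Swap in different weights and you have a different single-purpose detector; nothing in the arithmetic generalizes on its own.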

 
  • Like
Reactions: 1 users
So you think AI still has a long way to go in radiology?
 
So you think AI still has a long way to go in radiology?

Long is an understatement. There's little to no integration of AI at the radiologist's workstation, and most (all?) vendors are going the traditional PACS-integration route. When the dust settles, I believe AI will be a PACS feature, likely owned by the major vendors.

Each AI model solves a specific problem: pneumothorax detection, rib fracture detection, bone age classification, pulmonary nodule detection, etc. How much money can an institution spend on a product that may help with just one of these tasks? There are some products (e.g. perfusion imaging or RAPID) that target high-reimbursement, high-volume imaging like CT heads, but these are the exceptions rather than the rule.
 
  • Like
Reactions: 1 user
Agreed. I think AI will have a minuscule impact on the daily radiology workflow in the grand scheme of things. I speak as someone who wants AI to succeed.

The greatest enemy to radiology is declining reimbursement.
 
I remember lurking in the forum last year and you were very optimistic about AI replacing radiologists in the near future. What's the biggest driver behind your change in opinion? Is it starting your first DR year?
 
  • Like
Reactions: 1 user
I remember lurking in the forum last year and you were very optimistic about AI replacing radiologists in the near future. What's the biggest driver behind your change in opinion? Is it starting your first DR year?

Yes, I think my intern year and first few months of radiology opened my eyes quite a bit. My main reason (similar to the NYT article) is limited generalizability. I absolutely believe AI is better than radiologists for many isolated problems (pneumothorax detection, lesion tracking, pneumonia segmentation, bone age classification, etc.). There are certainly models that do multiple tasks (pneumothorax + pneumonia + central line verification + goiter detection + ... ) all in one package, but there's always something the model lacks that's obvious to the radiologist. Here are a few real examples I saw last week: "The patient's eyeglasses are seen in his front shirt pocket." "Right-sided prominent skin fold should not be confused for pneumothorax." "Radiograph mislabeled left and right." I don't think AI will ever replace radiologists without a very broad contextual awareness.

Another problem from the business side is limited volumes. The value proposition of AI is increased accuracy and cost reduction. I believe AI can improve accuracy in certain contexts, but cost reduction is dependent on the number of studies performed. The millions of yearly radiology studies pale in comparison to the number of spoken words that could be translated or transcribed. That is, there's a much, much larger market for voice-to-text AI, foreign-language translation AI, self-driving AI, advertising AI, etc. than there is for radiology. Moreover, companies are targeting one diagnosis in one imaging modality (e.g. lung cancer detection on low-dose screening CT, bone age in pediatric hand radiographs, pneumonia in chest radiographs, etc.). There simply isn't enough volume to make these companies viable with single-diagnosis products. The only exception I know of is acute stroke detection through perfusion imaging (iSchemaView RAPID). Maybe lung-cancer screening and nodule tracking will turn out to be a viable product, but the space is inundated with startups and old players alike. There isn't enough volume to go around.
 
In my opinion, AI will get better and better if the funding continues.
Eventually, it will be able to interpret the normals very accurately...
To me, the future is very blurry
 
In my opinion, AI will get better and better if the funding continues.
Eventually, it will be able to interpret the normals very accurately...
To me, the future is very blurry

What makes you think AI will continue to improve with just funding?

I think what the OP has alluded to is that without a general artificial intelligence, it’s difficult to automate radiology completely.

Throwing funding at something doesn’t solve problems completely. Nuclear fusion still isn’t solved.
 
What makes you think AI will continue to improve with just funding?

I think what the OP has alluded to is that without a general artificial intelligence, it’s difficult to automate radiology completely.

Throwing funding at something doesn’t solve problems completely. Nuclear fusion still isn’t solved.

Narrow AI improves with data. The more data you give it, the better it will perform.
For now, one of the problems is that we don't have enough labeled data for AI, because data in radiology is expensive to get and label.
I'm not talking about AGI. Once we develop AGI, I think the world will be a much different place than the one we're discussing now.
 
Narrow AI improves with data. The more data you give it, the better it will perform.
For now, one of the problems is that we don't have enough labeled data for AI, because data in radiology is expensive to get and label.
I'm not talking about AGI. Once we develop AGI, I think the world will be a much different place than the one we're discussing now.

Did you read the OP? The whole OP is about how narrow AI can achieve good performance for specific things. Radiology is too general for a narrow AI, or an array of them, to solve. Have you started cross-sectional imaging rotations yet?
 
  • Like
Reactions: 1 user
No offense, but this is what pisses me off about this whole AI thing. The people who speak the loudest about AI know absolutely nothing about radiology in practice.

OP was loud as hell the last two years about how quick, inevitable and seamless the AI transition would be, and IIRC, was telling us how many fewer radiologists would be needed to run a department...before dictating a freaking chest x-ray.

Now he/she has some (still barely any) experience in practical radiology, and immediately starts to walk it back. I appreciate the honesty, but the truth is: radiologists are the only people who realize they don’t know everything about radiology.
 
  • Like
Reactions: 2 users
No offense, but this is what pisses me off about this whole AI thing. The people who speak the loudest about AI know absolutely nothing about radiology in practice.

OP was loud as hell the last two years about how quick, inevitable and seamless the AI transition would be, and IIRC, was telling us how many fewer radiologists would be needed to run a department...before dictating a freaking chest x-ray.

Now he/she has some (still barely any) experience in practical radiology, and immediately starts to walk it back. I appreciate the honesty, but the truth is: radiologists are the only people who realize they don’t know everything about radiology.

Yes, I admit I was overly zealous. Moreover, apart from the technical feasibility of AI, I question the profitability of AI companies. In the current environment, accuracy isn't the bottleneck. Volumes are. Radiologists have to look at every image, even those flagged as "normal." IMHO, the best AI product will pre-populate the report to save the radiologist dictation time. That's a feature I'm looking forward to in PACS.
 
  • Like
Reactions: 1 users
I find it amusing that the AI-obsessed throw jibes at skeptics for being too beholden to the way things have traditionally progressed, while committing a very similar error in applying "Moore's Law" and the like.
It's simple. AI has never existed before this point, the same way the engine never existed before the industrial revolution. With that in mind, there is no reason to suspect that our progression from this point onward will be "traditional."

I'm not sure I understand your claim regarding Moore's Law.
 
It's simple. AI has never existed before this point, the same way the engine never existed before the industrial revolution. With that in mind, there is no reason to suspect that our progression from this point onward will be "traditional."

I'm not sure I understand your claim regarding Moore's Law.

Actually, AI is a bigger deal than the industrial revolution.

A tool, by definition, is an extension of ourselves, something that makes us work better. The rational proponents of AI work toward that goal.

The irrational proponents want to use AI to replace us rather than to extend our abilities.
 
Actually, AI is a bigger deal than the industrial revolution.

A tool, by definition, is an extension of ourselves, something that makes us work better. The rational proponents of AI work toward that goal.

The irrational proponents want to use AI to replace us rather than to extend our abilities.
AI could very well serve as a tool to make us work better, but a tool can also cause destruction even if that was not the intended goal. This is true of nuclear energy: it can benefit humanity by providing an alternative energy source, but it could very well destroy the planet if used incorrectly.

Discounting the dangers would be a foolish oversight.
 
AI could very well serve as a tool to make us work better, but a tool can also cause destruction even if that was not the intended goal. This is true of nuclear energy: it can benefit humanity by providing an alternative energy source, but it could very well destroy the planet if used incorrectly.

Discounting the dangers would be a foolish oversight.

Nope, nuclear energy is an extension of our ability to perform physical labor. The way some AI proponents are arguing for AI, they are arguing for the replacement of the actual tool-wielding human factor.

AI needs to remain a tool, wielded by people.
 
Nope, nuclear energy is an extension of our ability to perform physical labor. The way some AI proponents are arguing for AI, they are arguing for the replacement of the actual tool-wielding human factor.

AI needs to remain a tool, wielded by people.
And my question is: Who's to say we're not opening Pandora's box in our quest to have AI as a "tool wielded by people?"
 
The Artificial Intelligence Revolution: Part 1 - Wait But Why
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html?utm_source=share


TLDR: we're f**ked

This is actually a fascinating article that I would encourage anyone interested in AI to read. There is a Part 2 as well. It's nothing particularly earth-shattering, but it's some interesting, well-written speculation about the future of humanity.

It has literally nothing to do with radiology though.
 
  • Like
Reactions: 1 user
Because most people who aren’t in medicine don’t understand the limitations of applying AI to medicine.
Fair enough, but would you agree that a large part of it is just pattern recognition? Medical school and residency get you really good at recognizing those patterns.

If you're a radiologist, you only have so much time to look at films to become an expert at discerning x vs. y on a CT scan, whereas an AI could theoretically have every CT scan ever performed and even make predictions based on the accuracy of past findings. No slight against anyone, but we're constrained by our biology.
 
Fair enough, but would you agree that a large part of it is just pattern recognition? Medical school and residency get you really good at recognizing those patterns.

If you're a radiologist, you only have so much time to look at films to become an expert at discerning x vs. y on a CT scan, whereas an AI could theoretically have every CT scan ever performed and even make predictions based on the accuracy of past findings. No slight against anyone, but we're constrained by our biology.

Just today I was at a tumor board conference where highly respected surgical oncologists, one with a focus on pancreatic cancer, had to be walked through a scan that showed pancreatic tumor involvement of the SMA and another with very subtle metastatic findings. In both cases the surgeons would have operated had there not been a radiologist consultation requiring some attending-level discussion. Being able to effectively communicate your expert opinion to the ordering physician is not an insignificant part of the radiologist's job, and rads who can do this well are highly valued.

Many diagnostic radiologists also spend quite a bit of time doing procedures and barium studies, much more than the lay person or even many within the medical field realize. Radiologists also absorb legal liability. And no, radiology isn't just pattern recognition. My guess is that your perception of radiology is just looking at films in a dark room all day where every study is either normal or pathognomonic.

I don't doubt that AI will continue to play a larger role in radiology, but AI making final reads is nowhere near close even if the technology were available today. Not to mention that pathology, dermatology, retina, etc. would also be significantly affected. Any field has risks. If tomorrow someone invented an eye drop that could dissolve and restore a cataract, then we would need far fewer ophthalmologists. If CMS looked at spine operation outcomes and decided that they don't add value, they could slash spine reimbursement. It's almost impossible to accurately predict the future on a long-term timeline. I'm picking radiology because I like it the most. If it goes away, then so be it; I'll adapt.
 
  • Like
Reactions: 4 users
Fair enough, but would you agree that a large part of it is just pattern recognition? Medical school and residency get you really good at recognizing those patterns.

If you're a radiologist, you only have so much time to look at films to become an expert at discerning x vs. y on a CT scan, whereas an AI could theoretically have every CT scan ever performed and even make predictions based on the accuracy of past findings. No slight against anyone, but we're constrained by our biology.

Is a large part of radiology pattern recognition? Let’s see: QA for equipment, QA for techs, QA for other rads, developing and implementing new protocols, protocoling studies, doing procedures, tumor boards, talking to clinicians, and then there is reading the study and developing an appropriate differential.

It’s pattern recognition like flying an airplane is just driving a plane from A to B.
 
I think AI will mostly have an impact on x-rays/films, identifying the normal ones.
It will have some trouble with CT/MRI.
First of all, because you will always need a radiologist to check for the stuff the AI can't identify (like a breast cancer on a chest CT, or a thrombus in the heart on a CT scan of the abdomen, etc.).
Eventually, once the AI can identify ALL pathologies and anatomical variants, it will have to know what is clinically important and what is not (a pancreas divisum is relevant if the patient is scheduled for surgery, not so much if the patient has renal stones).
And if the AI is as impactful as they say, it will add so much info that you have to decide what is clinically relevant, or else you will end up with a stupid 30-page report.
Not to mention the work that needs to be done for comparison, and the critical thinking you have to do in the case of a complicated scan.
It's not like a CBC where you just put out a number for WBC, Hb, etc.
You have a lot of data to handle, and you need a general human MD radiologist's intelligence to decide what to tell other doctors/patients.
 
  • Like
Reactions: 2 users
Just today I was at a tumor board conference where highly respected surgical oncologists, one with a focus on pancreatic cancer, had to be walked through a scan that showed pancreatic tumor involvement of the SMA and another with very subtle metastatic findings. In both cases the surgeons would have operated had there not been a radiologist consultation requiring some attending-level discussion. Being able to effectively communicate your expert opinion to the ordering physician is not an insignificant part of the radiologist's job, and rads who can do this well are highly valued.

Many diagnostic radiologists also spend quite a bit of time doing procedures and barium studies, much more than the lay person or even many within the medical field realize. Radiologists also absorb legal liability. And no, radiology isn't just pattern recognition. My guess is that your perception of radiology is just looking at films in a dark room all day where every study is either normal or pathognomonic.

I don't doubt that AI will continue to play a larger role in radiology, but AI making final reads is nowhere near close even if the technology were available today. Not to mention that pathology, dermatology, retina, etc. would also be significantly affected. Any field has risks. If tomorrow someone invented an eye drop that could dissolve and restore a cataract, then we would need far fewer ophthalmologists. If CMS looked at spine operation outcomes and decided that they don't add value, they could slash spine reimbursement. It's almost impossible to accurately predict the future on a long-term timeline. I'm picking radiology because I like it the most. If it goes away, then so be it; I'll adapt.
Meant no disrespect to aspiring or practicing radiologists nor am I trying to downplay the role they play on the patient care team. My aim was to generate a discussion regarding the impact of AI on the field.
 
I have some thoughts. Why is it always AI vs. the radiologist? Maybe it will be internist + AI vs. rad, or surgeon + AI vs. rad? I mean, many of the referring physicians are already very knowledgeable about their scans, and in many cases they know more than the average general radiologist. So with the help of AI, they will be in a much better position to replace radiologists.

I definitely believe that a human factor is still a must, but who said that it must be the radiologist? And even if it is, I think AI will dramatically reduce the number of jobs for radiologists. Maybe we will have a radiology digital farm, with many radiologists sitting at computers reporting studies from many far-away hospitals with the help of AI.
 
  • Like
Reactions: 2 users
I have some thoughts. Why is it always AI vs. the radiologist? Maybe it will be internist + AI vs. rad, or surgeon + AI vs. rad? I mean, many of the referring physicians are already very knowledgeable about their scans, and in many cases they know more than the average general radiologist. So with the help of AI, they will be in a much better position to replace radiologists.

I definitely believe that a human factor is still a must, but who said that it must be the radiologist? And even if it is, I think AI will dramatically reduce the number of jobs for radiologists. Maybe we will have a radiology digital farm, with many radiologists sitting at computers reporting studies from many far-away hospitals with the help of AI.

Turf is a matter of expertise, liability, and choice. A competitor must have the expertise to interpret the study to an acceptable standard, assume the liability if the study is misinterpreted, and choose to spend their time interpreting the study rather than other clinical tasks.
 
  • Like
Reactions: 1 user
The key point that is often missed, I think, is that AI, whether better or worse than humans at a given task, is DIFFERENT and will screw up in different ways than a human radiologist would. Like the Tesla autopilot that has a lower accident rate than humans but drove into a lake based on faulty maps/cameras, AI will screw up (even if at a lower rate than a human) in ways that would have been obvious to a human operator.

Because of that, the future (in all fields of medicine, not just rads), at least in our careers, will be AI assistance, where both human and AI perform a task to a higher level of accuracy than either alone.
 
  • Like
Reactions: 4 users
AI is just going to be better CAD in our lifetimes. Prospective applicants, I would worry far more about reimbursement cuts and corporate takeover of radiology private practices. Those are the biggest threats to our field, and they are ones that are actually happening, unlike AI.
 
  • Like
Reactions: 1 user
Sorry to restart this chat, but do you think anything has changed with so many FDA-approved AI algorithms in the last 2 years? Do we still think radiology is a viable field for a 20-30 year career? (3rd-year med student who is strongly considering DR)
 
Sorry to restart this chat, but do you think anything has changed with so many FDA-approved AI algorithms in the last 2 years? Do we still think radiology is a viable field for a 20-30 year career? (3rd-year med student who is strongly considering DR)
I'm not sure there has been much change to clinical practice at this time from what I've seen. Viz.ai for large vessel occlusion seems to be a use case that really helps radiologists and patients, but it certainly will not be displacing any jobs. I don't know what the future entails, but as imaging continues to improve I wouldn't be surprised if imaging volumes continued to increase, which would ideally offset some level of increased efficiency from AI. Examples that come to mind are low-dose CT screening for pancreatic cancer, but I'd imagine a variety of new indications over our lifetime/career. I'll be interested to see what happens with these very high-tesla scanners, which produce images more quickly and/or with a level of detail people don't even know what to make of yet.

Moreover, with the rise of midlevels throughout nearly every field of medicine, it seems that even if imaging research goes stagnant, imaging volume will continue to increase. Even subspecialists have difficulties with specific imaging, and good radiologists are a great source of information for those in all specialties, physicians and non-physicians alike.

I am just an MS3 as well, also looking at diagnostic radiology, so take my opinions with a grain of salt.
 
  • Like
Reactions: 1 user
Sorry to restart this chat, but do you think anything has changed with so many FDA-approved AI algorithms in the last 2 years? Do we still think radiology is a viable field for a 20-30 year career? (3rd-year med student who is strongly considering DR)

Yes, go for DR. Best field in medicine.

As far as existential threats go: P/E infiltration into DR practices >>>>>>>>mid levels >>>>>>>>>>>>>>>>>>>>> AI
 
  • Like
Reactions: 4 users
It’s been a few years and you should be well along in your radiology residency now. Any updated thoughts on machine learning and radiology? When do you see AI replacing radiology? 5 years? 10? 25? 50? Wait until artificial general intelligence comes out?
 
For a while now, nothing important has really happened on the AI front in radiology. Until last week, when an algorithm got CE approval for chest x-ray interpretation in Europe.
My anxiety went through the roof.
 
This only detects normal chest x-rays but doesn't say what the abnormality is - is this something to be worried about? (It has not received FDA clearance yet, either.)
 
A big part of radiology is normal. Taking the normals will take away the easy part of the job. If this trend continues, yes, I think we should be worried.
We will have to wait and see what happens in Europe and how things go.
 
How long after CE approval does the FDA usually approve? It seems like the bar is higher for FDA approval, right?
 
This tool will make a radiologist’s day easier, but it won’t take business away from rads at all. A tool that tells whether an x-ray is normal or not is useless to referring docs because the abnormal ones will still need to get read.
 
How long after CE approval does the FDA usually approve? It seems like the bar is higher for FDA approval, right?
CE and FDA are not linked. There is no time frame. But FDA is harder to get.
This tool will make a radiologist’s day easier, but it won’t take business away from rads at all. A tool that tells whether an x-ray is normal or not is useless to referring docs because the abnormal ones will still need to get read.
You do understand that an AI taking the normal chest x-rays will take away like 30% of the chest x-ray workload, right?
 
CE and FDA are not linked. There is no time frame. But FDA is harder to get.

You do understand that an AI taking the normal chest x-rays will take away like 30% of the chest x-ray workload, right?

Maybe more than 30% of the workload. It’ll be a great productivity tool for me. I’ll sign off on those reports.

It’ll be a long time before a referring doc uses those tools to bypass radiologists.
 
Maybe more than 30% of the workload. It’ll be a great productivity tool for me. I’ll sign off on those reports.

It’ll be a long time before a referring doc uses those tools to bypass radiologists.
Many specialists would be hubristic enough to bypass radiologists if given the opportunity.
 
  • Like
Reactions: 1 users
This is weird to me for several reasons, and probably why you don’t hear a lot of people talking about this.

CNN algorithms have modifiable parameters that change where they fall on the ROC curve. If you take an algorithm and ratchet up the sensitivity to 99%, your specificity is sacrificed. This is why the ROC curve is everything for such studies, and why studies that are appropriately done include ROC curves for various combinations, including with and without a radiologist.

If this is an algorithm that has parameters set for extraordinarily high sensitivity for any pathology, you could conceivably use such a thing and trust it when it says “normal.” But would you use it if it only gave you ten out of every hundred normals, and you still had to do the other 90 manually? Maybe; it depends on how much the software costs.
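
As a toy illustration of that trade-off (synthetic scores, not any real product's numbers), sweeping the decision threshold of a hypothetical "abnormality" classifier shows how demanding very high sensitivity shrinks the fraction of true normals the tool will actually clear on its own:

```python
import numpy as np

# Synthetic scores from a hypothetical abnormality detector (higher = more suspicious).
rng = np.random.default_rng(42)
normal_scores = rng.normal(0.30, 0.15, 5000)    # truly normal studies
abnormal_scores = rng.normal(0.70, 0.15, 500)   # truly abnormal studies

for thresh in (0.50, 0.40, 0.30, 0.20):
    sensitivity = (abnormal_scores >= thresh).mean()   # abnormals correctly flagged
    cleared = (normal_scores < thresh).mean()          # normals the tool would call "normal"
    print(f"threshold={thresh:.2f}  sensitivity={sensitivity:.3f}  normals auto-cleared={cleared:.3f}")
```

Each threshold is one operating point on the ROC curve; pushing sensitivity toward 100% is exactly what whittles the cleared normals down toward that "ten out of every hundred."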

This is what’s so weird about this. I don’t like that CE approves things without published data, because they have relegated themselves to making decisions on behalf of the community of physicians. I have so many questions, and I might have more, but there is nothing I can find in terms of data about this software other than them saying “trust us” and CE saying “trust them.” They say “clinically meaningless false negatives”; what in God’s name does that even mean? Who decides that? And how can you be sure that one day a miss won’t be “clinically meaningless”?

FDA clearance requires trials, so we’re not going to see anything stateside for some time, at least until we have that publication, at which point some questions will be answered; but there’s a disturbing number of high-tier journal publications that have also left many important questions unanswered.

My worry, and something I didn’t think was possible but this has made me pause to reconsider, is that radiology as a field has lost enough clout in the medical domain that people don’t heed our warnings at all anymore; they don’t care what we say, they’re going to do what they’re going to do anyway, even if it kills people in the process. Suing the FDA is a much taller order than suing your community telerad.
 
  • Like
Reactions: 2 users
AI/ML requires data to recognize patterns, and there are so many unique variations that it will never be perfect. It would mainly function as a “first screen.” Given the complexity of many imaging modalities, nobody (hospital or non-radiologist) is going to be comfortable signing their name on the report, so radiologists will always be needed. I never understand the hoopla around AI.
 
Maybe more than 30% of the workload. It’ll be a great productivity tool for me. I’ll sign off on those reports.

It’ll be a long time before a referring doc uses those tools to bypass radiologists.
Why would they need you to read them? If I were the pulm crit doc I would happily "sign" the normal read and collect the RVU.
 
Why would they need you to read them? If I were the pulm crit doc I would happily "sign" the normal read and collect the RVU.

The hilarious thing about all of this is, once it becomes uniformly adopted, there will be a modification to CMS reimbursement such that reads on normal CXRs will be slashed to 1/1000th their current rate. Or they will simply not be reimbursed at all and will be completely covered by the technical fee, which will also probably be minuscule, because this AI will not need any modification by the owning software dev team once implemented. The way the AI is implemented is rigorous in-house training on normal plain films (thereby averting the issue of generalizability temporarily, until you get the next portable or stationary machine), parameter adjustment to ratchet up pathology sensitivity, double-read monitoring, and then implementation. After it’s implemented, presumably any time you install a new machine you’ll have to retrain the algorithm (or each machine gets its own unique algorithm, to address generalizability), but this and any other step doesn’t actually require any involvement from the dev team.
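
For what that "parameter adjustment" step could look like, here is a purely hypothetical sketch with made-up local validation data (not any vendor's actual procedure): choose the most permissive threshold that still flags essentially all locally double-read abnormal studies, and whatever specificity is left over is the share of normals the site gets to auto-clear.

```python
import numpy as np

def pick_threshold(scores, labels, target_sensitivity=0.999):
    """Site-calibration sketch: lowest threshold that still flags at least
    target_sensitivity of the locally labeled abnormal studies."""
    abnormal = np.sort(scores[labels == 1])
    k = int(np.floor((1.0 - target_sensitivity) * len(abnormal)))  # abnormals tolerated below threshold
    return abnormal[k]

# Made-up local validation set (stand-in for one machine's double-read studies).
rng = np.random.default_rng(7)
scores = np.concatenate([rng.normal(0.3, 0.15, 2000), rng.normal(0.7, 0.15, 200)])
labels = np.concatenate([np.zeros(2000, dtype=int), np.ones(200, dtype=int)])

t = pick_threshold(scores, labels)
cleared = (scores[labels == 0] < t).mean()
print(f"operating threshold={t:.3f}, local normals auto-cleared={cleared:.3f}")
```

Installing a new machine would just mean repeating this calibration on that machine's own validation data, which is the point: none of it needs the dev team once the model exists.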

So why would I remain in contract with somebody who isn’t doing any value-added dev work? I’d negotiate my contract fees way down or walk.

This is what’s so weird to me about the AI-dev startups. The only way to stay in business is to keep developing more and more algorithms and applications, but that hill becomes exponentially more difficult to climb as the pathologies you build for become rarer and rarer (and eventually you’ll run into the generalizability issue once the pathologies you’re evaluating for require testing datasets distinct from your implementation site). Eventually hospitals are going to realize you aren’t really doing anything, so why are they paying you like you’re hired staff?

Either let me buy the software and be done with it once it’s finished training, or I’ll just walk.
 
  • Like
Reactions: 2 users
Why would they need you to read them? If I were the pulm crit doc I would happily "sign" the normal read and collect the RVU.

Ok, so you are a pulm doc. Who pays for the AI software? Is your pulm group going to shell out 100k a year so you can make 5 bucks a day off CXRs?

That type of software only makes sense in rads.

It’s like saying, why would I need Amazon to deliver my stuff when I can invest in a private driver to pick it up and deliver it?

Trust me, the economies of scale are different between having 1 CXR a day on your patients vs. 500 a day.
 
  • Like
Reactions: 1 user