Research in Machine Learning & Artificial Intelligence

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.
I'm really enjoying this discussion, for the record. And I don't mean to gang up on the only person avidly defending AI, but I can't resist picking at an argument or two.

I appreciate you addressing the EKG issue, Naijaba, but am unconvinced by your argument that EKGs are less lucrative than chest radiographs. The work RVUs generated for the professional component of both studies are almost equivalent (0.15 for EKG, 0.18 for CXR, amounting to roughly $5.40 and $6.50 a pop at the 2017 conversion rate, not accounting for geographic variability). I would argue that more EKGs are performed and interpreted day-to-day than CXRs. And if you're going to make the argument that value-based care will result in decreased reimbursement for CXRs due to the supposedly low clinical utility of a radiologist's interpretation, then I really don't understand how one could argue that CXR-reading AI is more lucrative than EKG-reading AI.
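For the record, the dollar figures follow directly from the RVU arithmetic. A quick sketch, using an approximate 2017 Medicare conversion factor of ~$35.89/RVU (my approximation, not an exact figure from the thread):

```python
# Professional-component work RVUs quoted above; the conversion factor is
# the approximate 2017 Medicare value (~$35.89/RVU), not an exact figure.
CF_2017 = 35.89

payments = {study: round(wrvu * CF_2017, 2)
            for study, wrvu in [("EKG", 0.15), ("CXR", 0.18)]}
print(payments)  # roughly $5.38 for EKG vs $6.46 for CXR
```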

As to the claim that neurosurgery residents think they come out of residency reading neuroimaging studies as well as a fellowship-trained neuroradiologist, I wholeheartedly disagree and think that those surgeons are deluding themselves and putting their patients at risk. Knowing how to look at a tumor on a brain MR from the surgeon's perspective is very different from looking at a brain MR performed with the tumor protocol from the radiologist's perspective. For one, surgeons hate dealing with incidental findings when they don't read the study. How do you think they'd feel if they were not only responsible for handling the follow-up and management of such findings, but also for detecting them in the first place? I can also tell you from personal experience that even cardiologists trained in cardiac imaging struggle with detecting incidental findings on cardiac MR, even though they understand cardiac physiology better than radiologists and, as a result, may do a better job of interpreting the cardiac portion of the exam. Neurosurgeons are no better, though many are convinced otherwise. I've seen several cases where the surgeon made a grave error by assuming he/she understood the imaging well enough to act. On the other hand, I learn a lot from them by discussing how they look at different pathologies in planning their operative approach. Working together, we can use our own strengths to help each other and do the best by our patients.

Now, reverting to the neural networks vs. neuronal processing issue... Your analogy does not hold up as well as you seem to think. Transistors and neurons behave very differently. First of all, it is a gross oversimplification to say that neurons are electrical switches controlled by electricity. For one, they are not purely electrical (as is assumed by the cable model I mentioned above); rather, they conduct signals by both electrochemical and molecular biological means. Nor are they purely switches. When an action potential is generated along a neuron's axon, many other factors determine the strength of the presynaptic neuron's neurotransmitter response, and just as many determine the postsynaptic neuron's response to the neurotransmitter released. A transistor in a logic gate, by contrast, generates an essentially binary output (allowing that even this is somewhat idealized). Put enough transistors and logic gates together and the resulting computer can estimate the output of a neuron with reasonable accuracy... But to use that result to argue that neurons and transistors are the same is like saying that my analytical solution of a definite integral is the same as my calculator's numerical solution. Sure, the results are nearly indistinguishable, but the methods are entirely different. Also, my brain will generate the analytic solution much faster than my calculator could, while my calculator will generate the numerical solution even faster as my brain struggles to crunch the numbers.
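The integral analogy can be made concrete. A toy sketch (my choice of integrand, not the poster's): the analytic answer to a definite integral and a brute-force numerical answer agree almost perfectly, yet the methods have nothing in common.

```python
import math

# Analytic solution of the integral of sin(x) from 0 to pi is exactly 2,
# by the fundamental theorem of calculus: -cos(pi) + cos(0).
analytic = -math.cos(math.pi) + math.cos(0.0)

# Numerical solution via the composite trapezoidal rule: same answer,
# entirely different method -- summing many thin slices instead of
# applying an antiderivative.
def trapezoid(f, a, b, n=10_000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

numerical = trapezoid(math.sin, 0.0, math.pi)

print(analytic)             # 2.0
print(round(numerical, 6))  # ~2.0, reached by brute arithmetic
```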

Wholeheartedly agree with your wholehearted disagreement... I'd say orthopods are best at interpreting their own imaging, but even they have their limitations (that's why they are surgeons and we are radiologists). With that said, if I were a medical student considering radiology, AI would be a consideration; who knows what can be developed in 10-20 years...
 
I think 15 years is a good number for widespread adoption. I know there's a lot of focus on AI and its impact on radiology, but I think there are other factors that diagnostic radiologists should be concerned about. The PACS was a major boon to radiologists' throughput, but it also gave referring physicians easy access to images. Images are no longer siloed in the radiologist's workroom, and residents routinely learn to read images within their domain. The finances haven't caught up with this situation, and the current radiology reimbursement model is quite at odds with value-based care. Let me give some concrete examples:

1. A patient presents to the ED with suspected pneumonia. The ED attending orders a chest radiograph and confirms the diagnosis on the mobile monitor (the screen attached to the mobile x-ray unit). The attending admits the patient to medicine and the internist reviews the imaging on their local workstation (e.g. via an Epic link to the PACS). Neither the ED attending nor the internist needed the radiologist's read. An hour or so later the x-ray is reviewed by the radiologist. The radiologist is reimbursed even though their impression was not used to guide care. Should the patient be charged for the radiologist's read when their contribution did not affect treatment? Many would say, "Well there could have been something else going on, only visible to the radiologist!" But that's exactly the wrong answer in an ACA world. If a referring provider reads an image and identifies a reasonable diagnosis, the marginal value of a radiologist's read is too low given the expense.

2. Another example: There's a joke amongst neurosurgical residents that they should be dual-certified in neurorads, because seven years of looking at neurosurgical images is at least as good as a one-year neurorads fellowship. No surgeon, let alone neurosurgeon, would go into the OR without reviewing the patient's imaging, and many surgeons have the patient's imaging up throughout the whole operation. I've never (personally) seen a surgeon read the radiologist's note during the procedure. Shouldn't the surgeon therefore be reimbursed for reading the patient's image while in the OR? Again, it's a misalignment of value-based care. If the surgeon has a question about the image, he/she should request a radiologist's interpretation. The model of reading every image that hits the PACS needs to change.

The question about AI / machine learning is set against the backdrop of these observations about value-based care. I'm fascinated by radiology because it has long been the one specialty that values innovation and embraces technology. I think that the future of radiology is similar to MSK/Breast/IR => more procedures and interaction with patients with fewer reads. The read volume can be reduced by a) not reading every image on the PACS, as noted above, and b) using machine learning / AI to screen out simpler reads such as normals.
lol.
 
A lot of clinicians will claim that they can read images, some may even claim that they can read images well (from my interactions with neurosurgery, they aren't the best. Ortho are probably the best at reading their own).

I can see how someone earlier on in training may feel that way about clinicians coming up with the right diagnosis. However, DR isn't just the right diagnosis.

It's the right dx, done fast, all the time, plus all incidentals.
 
No offense to Naijaba, but you guys are arguing with someone who hasn't spent a single day training as a radiologist. I feel like unless you have assumed the role, you really have no concrete idea as to how AI will actually affect radiologists.
 
That's a huge issue I have with all of this. It seems like the loudest voices in the room don't really have any idea what a radiologist does or how he/she does it. We shouldn't be surprised by this because even many people in medicine, to include our fellow physicians, don't know either. All of the examples I've seen about what AI can do or is on the verge of doing are gross oversimplifications of only a small part of what I do.

The examples are so inadequate, sometimes laughably so, that the tendency may be to be dismissive, which is dangerous in its naivety. We ignore this issue at our own peril, even if it's not an existential peril. But in the meantime, I'll sleep well until a radiologist with a robust understanding of the issue (they are out there) advises me to toss and turn.
 

Your three sentences are excellent. Non-radiologists aren't well-positioned to comment on radiology practice, just as non-computer scientists aren't best positioned to comment on machine learning. I'm not a practicing radiologist; my opinions come from a business/computer science/medical student side of things. I'll have more experience in the coming years to frame the discussion.
 
As a computer scientist familiar with the industry: when is a machine-learning network capable of reading mammograms coming? What about chest x-rays? What about general AI? Are we close to general AI/machine consciousness?
 
The one thing I don't understand from the anti-AI group is the question of liability. If AI was truly capable of being accurate 99.9% of the time, I don't think it's hard to believe that the company behind the tech would be OK with assuming liability of its tech baby.

And for those referring to the failure of EKG reads and CAD, were those movements spearheaded by Silicon Valley? If not, this is a completely different dynamic that definitely warrants extra attention.
 

The question is why Silicon Valley hasn't been able to tackle the EKG yet. The money is there.

I have to be honest: as someone who used to live there and has a lot of friends there, there is always a lot of overblown hype.
 
I don't know about EKG, but here's the state-of-the-art for mammography: http://www.nature.com/articles/srep27327

From the abstract:
"The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. "

Chinese authors don't get a lot of respect in the U.S., but their results are consistent with deep learning in other areas of computer vision.

If you're interested in software that does this: Deep Learning in Mammography: Diagnostic Accuracy of a Multipurpose Image Analysis Software in the Detection of Breast Cancer. - PubMed - NCBI
From that article:
"In conclusion, we showed that deep learning algorithms designed for generic image analysis can be trained to detect breast cancer on mammography data with high diagnostic accuracy (AUC = 0.82) comparable to experienced radiologists (AUC = 0.79–0.87)."
What's interesting is that they use generic deep-learning software called ViDi. This software isn't even designed for radiology. Imagine if such software incorporated factors such as clinical history...
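For readers unfamiliar with the AUC figures quoted above (0.82 vs. 0.79-0.87), here is a minimal sketch of what that number measures, with invented toy scores rather than the papers' data: AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case.

```python
def auc(scores_pos, scores_neg):
    """AUC = P(random positive outscores random negative), ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: invented model scores for 4 cancer and 4 benign cases.
pos = [0.9, 0.8, 0.6, 0.4]
neg = [0.7, 0.5, 0.3, 0.2]
print(auc(pos, neg))  # 0.8125
```

An AUC of 1.0 would mean every positive outscores every negative; 0.5 is chance.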
 
Excellent New Yorker article by the author of "The Emperor of All Maladies," Siddhartha Mukherjee: A.I. VERSUS M.D.

Highlights of the article:
-He straightaway addresses old-school rules-based systems vs. deep learning.
-He addresses old technology failing on mammography datasets
-Highlights how deep learning has already surpassed dermatologists
-Covers the "black box" problem with deep-learning (e.g. do we really know what it's doing?)

I think the most salient paragraph is the quote by Dr. David Bickers, chair of dermatology at Columbia:

“Believe me, I’ve tried to understand all the ramifications of Thrun’s paper,” he said. “I don’t understand the math behind it, but I do know that such algorithms might change the practice of dermatology. Will dermatologists be out of jobs? I don’t think so, but I think we have to think hard about how to integrate these programs into our practice. How will we pay for them? What are the legal liabilities if the machine makes the wrong prediction? And will it diminish our practice, or our self-image as diagnosticians, to rely on such algorithms? Instead of doctors, will we end up training a generation of technicians?”

These are the same questions that radiologists must ask themselves now, and even more so diagnostic radiologists who provide diagnoses only (i.e., are not treating providers).

Edit: It's my personal belief that MDs must understand the math behind these systems. Why do we have physics and chemistry on the MCAT, but spend 80% of our time sitting behind computer systems without understanding how they work? Computer science and math are becoming essential to medicine, especially in the technical fields like radiology/radiation oncology/neurosurgery/etc.

I am a radiology resident who's conducting deep learning projects. I have created algorithms to analyze CXR, mammograms, and CTs, so I thought I might be able to contribute to this discussion. In this article, I think Hinton has very little to no idea what radiologists do in their daily workflow / thought processes. Radiology is not just pattern recognition but the integration and synthesis of imaging patterns to correlate with the patient's clinical status. The same large opacity in the chest on CXR can carry widely different differential diagnoses based on the patient's clinical status. The same nodule on chest CT can carry a very different probability of malignancy depending on the patient's status. When I read a CXR, it's not just about pattern recognition of opacities. When I see that the person had a CABG surgery, I go through a mental checklist of boolean expressions (i.e., pneumothorax? retained sponge? expected atelectasis? expected retained surgical material? expected surgical complications? expected tubes and lines in anatomical position (and if not, where could they be traveling in anatomical space)? if the surgeon was Dr. A, he tends to prefer the XYZ approach, which can have the ABC complication; do I see that? I see a tiny line on the CXR; is it pneumothorax? What is the potential clinical harm to the patient if I overcall a pneumothorax or miss a small one? This patient also has underlying interstitial lung disease; what's the consequence of that in this image pattern?). These cannot be achieved by AI unless it has a very broad and accurate understanding of clinical medicine as a whole. On the other hand, I do see that AI's "objective" analysis of CXR image patterns can be a nice supplementary opinion for radiologists.
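The "mental checklist of boolean expressions" can be caricatured in code. Everything below (field names, the Dr. A rule) is invented for illustration; the point is precisely that no real system receives clinical context in this tidy structured form.

```python
# Hypothetical sketch of the post-CABG CXR checklist as explicit rules.
# All field names and the "Dr. A" rule are invented for illustration.
def post_cabg_checklist(findings):
    flags = []
    if findings.get("pneumothorax"):
        flags.append("pneumothorax: weigh size against clinical harm")
    if findings.get("retained_sponge"):
        flags.append("retained sponge: surgical emergency")
    if findings.get("atelectasis") and not findings.get("expected_postop"):
        flags.append("atelectasis beyond the expected post-op pattern")
    if findings.get("line_tip_position") != "anatomic":
        flags.append("tube/line malpositioned: trace its course")
    if findings.get("surgeon") == "Dr. A":
        flags.append("Dr. A prefers the XYZ approach: check for ABC complication")
    return flags

print(post_cabg_checklist({
    "pneumothorax": True,
    "line_tip_position": "anatomic",
    "surgeon": "Dr. A",
}))
```

Even this toy version needs inputs (surgeon identity, expected post-op changes, line positions) that pure image pattern recognition does not provide.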

There's good reason why radiologists are MDs, and not PhDs or college graduates who learned to recognize patterns really well. I strongly doubt that AI can replace radiologists unless it becomes so good that it can take over medicine as a whole and understand medical pathophysiology and management better than physicians. Also, to be a completely autonomous system, it must not make critical errors. Some companies boast that their algorithms achieve 87% accuracy while doctors achieve 85%. What they don't tell you is that the 13% of errors might have resulted in killing the patient. I worked on several deep learning algorithms that on paper appear to outperform doctors, but that, when you analyze the errors, make very serious mistakes that could kill or permanently disable patients. For this reason too, I don't think AI will replace radiologists or physicians any time soon.
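The point that headline accuracy hides which errors are made can be shown with invented numbers: two readers with identical accuracy can carry wildly different clinical harm.

```python
# Toy illustration with invented data. Each case is
# (truth, model_prediction, doctor_prediction, harm_if_disease_missed),
# where harm 10 = potentially fatal miss, 1 = minor.
cases = [
    ("cancer", "normal", "cancer", 10),  # model misses a fatal finding
    ("normal", "normal", "cancer", 1),   # doctor overcalls (benign error)
    ("cancer", "cancer", "cancer", 10),
    ("normal", "normal", "normal", 1),
    ("normal", "normal", "normal", 1),
]

def accuracy(pred_idx):
    return sum(c[0] == c[pred_idx] for c in cases) / len(cases)

def missed_disease_harm(pred_idx):
    # Total harm from false negatives only (disease called normal).
    return sum(c[3] for c in cases if c[0] == "cancer" and c[pred_idx] == "normal")

print(accuracy(1), accuracy(2))                      # model 0.8 vs doctor 0.8
print(missed_disease_harm(1), missed_disease_harm(2))  # model 10 vs doctor 0
```

Same accuracy, very different consequences: a metric that never asks *which* 13% was wrong cannot certify an autonomous system.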

Let's set aside these clinical challenges for the moment and focus on the technical challenges of image pattern recognition alone. There needs to be a shift toward large-scale data collection (not just big, but well structured, organized, annotated, and very accurate) AND an algorithmic breakthrough to address subtle problems in diagnosis. I think these are difficult but solvable issues, and people are actively addressing them. Deep learning is incredibly good at seeing a macroscopic image and classifying it (as in the ImageNet cases), but when it comes down to finding a subtle lesion in a sea of normal findings and noise, its performance drops significantly. Again, there are ways to mitigate this and people are actively exploring them, but I don't think we're at a complete solution yet.

On the bright side, I think deep learning has incredible potential for application to radiology and the capability of surpassing human vision (including radiologist vision) for pattern recognition. Current chest CT nodule detectors on average detect nodules better than most radiologists at this time (though they are plagued by false positives). Mammogram algorithms already achieve accuracy on par with or even better than most general radiologists (again plagued by false positives, but much better than prior CAD systems). These algorithms are actively being integrated into software packages and used in clinical workflows. Prostate MRs are now analyzed together with machine learning. Many machine learning engineers are empirically testing which areas of radiology, and which specific problems, are easily solvable by deep learning, which can be solved with minimal technological breakthrough, and which are probably impossible for deep learning. There are certain areas here and there where deep learning can very nicely supplement and even surpass human radiologists, but definitely not in general and not without radiologist supervision.

PS. I have to respectfully but firmly disagree that radiologists must understand the math behind deep learning. What's more important for radiologists is to understand the medical/surgical pathophysiology and correlate that with what can and cannot be seen on imaging. You do not have to be a software engineer to use software. Most radiologists will be users of machine learning, not developers. What I do agree with, however, is that radiologists who understand the mathematics and technical aspects of deep learning will be better at troubleshooting AI algorithms and will have the opportunity to participate in this exciting field of research and help patients.
 

Hi hantah, thanks for your well-thought-out reply. There are definite limitations to what models can do at this point. We have models for clinical interpretation and models for imaging, but I'm not aware of a model that integrates the two. Even if such a model exists, it probably can't do multi-step clinical reasoning.

The issue with mathematics is a tough one. You're right that most radiologists will be consumers of the software and need not know the mathematics.

My reason for suggesting that more time be devoted to mathematics and computer science in undergraduate studies is to empower research projects in these areas. There's a very steep learning curve to training a deep learning model, yet there's enormous research funding for deep learning projects. Members of my research lab have received major research grants (HHMI, SIR, etc.) by writing proposals that focus on "deep learning." The problem is, if you've never used a command line before or written a Python script, it's very unlikely you'll be able to train an accurate model, let alone push the state of the art. The result is that major breakthroughs come from computer science gurus without insight into the medical science (or clinical reasoning, as you describe). Conversely, a clinician who is also adept at the math can produce interesting results. For example, at my Columbia interview a resident was talking about training a deep learning model using k-space frequencies instead of the Fourier-transformed image. I don't know if it will work, but it's certainly an idea only a radiology resident would try.
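The k-space idea rests on the fact that the image and its Fourier transform carry the same information, so a model could in principle learn from either view. A naive pure-Python DFT round trip on a toy 1-D signal (not real MR data) illustrates the equivalence:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: the 'k-space' view of a signal."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: back from k-space to 'image' space."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

signal = [1.0, 2.0, 0.0, -1.0]       # toy stand-in for one line of MR data
kspace = dft(signal)                  # frequency-domain representation
recovered = [v.real for v in idft(kspace)]

# The round trip is lossless (up to float error): both domains hold the
# same information, just organized very differently for a network.
print([round(v, 6) for v in recovered])
```

Whether a network trains *better* on one representation than the other is exactly the kind of empirical question the resident was raising.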
 

I agree that radiologists who understand deep learning algorithms and math will have more to contribute in this era (in terms of research). I studied mathematics before med school and specifically focused on machine learning, thinking somewhere along your lines. This led to my current series of projects with deep learning. Not sure if you're at the stage of applying for med school or residency, but there's a constant struggle to find the right balance between the two areas of study. Med students and residents first need to become competent clinicians before doing this research, but then again, completely neglecting these research topics is also not a good idea. I had to re-learn all the algorithms because I had focused almost exclusively on clinical medicine in med school.

It's true that if you've never touched a Python script or dealt with the network architecture of a CNN/RNN, it would be near impossible to troubleshoot or even properly run the algorithms. But again, this is the role of the small number of academic radiologists and ML engineers who actually specialize in the intersection of deep learning with radiology (1% or less at this point), not the majority of clinical radiologists (95-99%) or academic radiologists focusing on other topics. I do not think people should actively seek out this path of studying math, then medicine, and then pursuing a combined career just because it's a hot topic right now, unless they have genuine commitment and enthusiasm. Pursuing combined topics adds a lot of stress. For example, I have a full-time job as a radiology resident and must spend 1-2 hours every day studying clinical radiology to protect patients on call. I spend the rest of my time studying deep learning and doing projects, which leaves me very little time to have fun, meet friends, and exercise. It's my choice, so I am still happy, but I would definitely have burnt out if I did not truly enjoy both radiology and machine learning.

On a side note, I would caution against writing off any particular deep-learning approach as completely stupid. I have very little knowledge of MRI physics (no domain knowledge), so I can't comment much on your k-space and Fourier transform example, but I've seen approaches I thought were very stupid turn out to work really well. I've seen people literally put cell-phone photos of EEGs into a CNN (which I thought was dumb) and get very insightful results. Deep learning remains an empirical science at this point. I don't know if you're a fan of Kaggle competitions, but there was a competition on finding non-randomness among randomness. The non-linearity of a neural network can produce a lot of meaningful results even when the provided data appears poorly processed. I even wonder about turning the first-layer neurons into a Fourier-transform activation function and seeing what happens, for example. Anyway, these were just some of my personal thoughts.
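The "Fourier-transform activation" musing is in the spirit of fixed sinusoidal feature maps. A minimal sketch of such a first layer, with arbitrarily chosen frequencies (nothing here is tuned or taken from any paper):

```python
import math

# Sketch of a fixed sinusoidal "first layer": project a scalar input onto
# sin/cos at a handful of frequencies. The frequencies are arbitrary
# illustrative choices; a real model would learn or tune them.
def fourier_features(x, freqs):
    feats = []
    for w in freqs:
        feats.append(math.sin(w * x))
        feats.append(math.cos(w * x))
    return feats

freqs = [1.0, 2.0, 4.0]
features = fourier_features(0.5, freqs)
print(len(features))  # 6 features: sin and cos at each of 3 frequencies
```

Downstream layers would then operate on these oscillatory features instead of the raw input, which is one way a frequency-domain bias could be baked into a network.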
 
I agree that radiologists who understand deep learning algorithms and the underlying math will have more to contribute in this era (in terms of research). I studied mathematics before med school, with a specific focus on machine learning, thinking along the same lines as you. This led to my current series of deep learning projects. I'm not sure whether you're at the stage of applying for med school or residency, but there's a constant struggle to find the right balance between the two areas of study. Med students and residents first need to become competent clinicians before doing this research, but completely neglecting these research topics isn't a good idea either. I had to re-learn all the algorithms because I had focused almost exclusively on clinical medicine in med school.

It's true that if you've never touched a Python script or dealt with the network architecture of a CNN/RNN, it would be near impossible to troubleshoot or even properly run the algorithms. But again, this is the role of the small number of academic radiologists and ML engineers who actually specialize in the intersection of deep learning and radiology (1% or less at this point), not the majority of clinical radiologists (95-99%) or academic radiologists focusing on other topics. I do not think people should actively seek out this path of studying math, then medicine, then pursuing a combined career afterwards just because it's a hot topic right now, unless they have genuine commitment and enthusiasm. Pursuing combined fields adds a lot of stress. For example, I have a full-time job as a radiology resident and must spend 1-2 hours every day studying clinical radiology to protect patients on call. I spend the rest of my time studying deep learning and doing projects, which leaves me very little time to have fun, meet friends, and exercise. It's my choice, so I am still happy, but I would definitely have burnt out if I did not truly enjoy both radiology and machine learning.

On a side note, I would caution against writing off any particular deep learning approach as completely stupid. I have very little knowledge of MRI physics (no domain knowledge), so I can't comment much on your k-space and Fourier transform example, but I've seen approaches I thought were very stupid turn out to work really well. I've seen people literally feed cell phone photos of EEGs into a CNN (which I thought was dumb) and get very insightful results. Deep learning remains an empirical science at this point. I don't know if you're a fan of Kaggle competitions, but there was one on finding non-randomness among randomness. The non-linearity of neural networks can produce a lot of meaningful results even when the provided data doesn't appear well processed. I even wonder about turning the first-layer neurons into Fourier-transform activation functions and seeing what happens, for example. Anyway, these are just some of my personal thoughts.

Hi hantah, I'm going to send you a PM. You're the type of person I was hoping to meet on this forum, and the kind of person I looked for on the interview trail. I think we could share ideas and even potentially collaborate on a research or industry endeavor.

Also, I wasn't writing off the radiology resident's idea to use k-space; I was pointing it out as a good example of having a background in math + radiology.
 
I agree that radiologists who understand deep learning algorithms and the underlying math will have more to contribute in this era (in terms of research). I studied mathematics before med school, with a specific focus on machine learning, thinking along the same lines as you. This led to my current series of deep learning projects. I'm not sure whether you're at the stage of applying for med school or residency, but there's a constant struggle to find the right balance between the two areas of study. Med students and residents first need to become competent clinicians before doing this research, but completely neglecting these research topics isn't a good idea either. I had to re-learn all the algorithms because I had focused almost exclusively on clinical medicine in med school.

It's true that if you've never touched a Python script or dealt with the network architecture of a CNN/RNN, it would be near impossible to troubleshoot or even properly run the algorithms. But again, this is the role of the small number of academic radiologists and ML engineers who actually specialize in the intersection of deep learning and radiology (1% or less at this point), not the majority of clinical radiologists (95-99%) or academic radiologists focusing on other topics. I do not think people should actively seek out this path of studying math, then medicine, then pursuing a combined career afterwards just because it's a hot topic right now, unless they have genuine commitment and enthusiasm. Pursuing combined fields adds a lot of stress. For example, I have a full-time job as a radiology resident and must spend 1-2 hours every day studying clinical radiology to protect patients on call. I spend the rest of my time studying deep learning and doing projects, which leaves me very little time to have fun, meet friends, and exercise. It's my choice, so I am still happy, but I would definitely have burnt out if I did not truly enjoy both radiology and machine learning.

On a side note, I would caution against writing off any particular deep learning approach as completely stupid. I have very little knowledge of MRI physics (no domain knowledge), so I can't comment much on your k-space and Fourier transform example, but I've seen approaches I thought were very stupid turn out to work really well. I've seen people literally feed cell phone photos of EEGs into a CNN (which I thought was dumb) and get very insightful results. Deep learning remains an empirical science at this point. I don't know if you're a fan of Kaggle competitions, but there was one on finding non-randomness among randomness. The non-linearity of neural networks can produce a lot of meaningful results even when the provided data doesn't appear well processed. I even wonder about turning the first-layer neurons into Fourier-transform activation functions and seeing what happens, for example. Anyway, these are just some of my personal thoughts.
I find the k-space idea intriguing for two reasons.

1. Many experiments in modern visual neuroscience suggest that we humans may decode/encode visual stimuli in terms of spatial frequency. It turns out we're actually quite good at recognizing and classifying objects when only the low-spatial-frequency content of an image is present.
2. Given adequate sampling of k-space, the k-space representation and the object itself are mathematically equivalent (related by a Fourier transform), so why wouldn't it be reasonable to plug such data into a CNN and see what kinds of results it produces?

However, I think a natural extension of this idea would be to play around with training the network. One thought would be to train one network with only low-spatial frequency data (the center of k-space) and another with high-spatial frequency data, then take the output from layer 1 of each network (the so-called feature extraction layer), combine it and train a third network with that data. Then you could compare the accuracy of each trained network to see if there is any difference between training methods. Even if no difference exists, that result alone could offer some insight into how the networks are identifying features in the data.
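For anyone who wants to experiment along these lines, splitting an image into low- and high-spatial-frequency components is a few lines of numpy; taking the FFT of a magnitude image is a reasonable stand-in when raw k-space data isn't available (the function name and `keep_fraction` parameter below are my own invention):

```python
import numpy as np

def split_k_space(image, keep_fraction=0.1):
    """Split a 2-D image into low- and high-spatial-frequency parts by
    masking the centre of k-space (the shifted 2-D FFT). keep_fraction
    is the half-width of the retained centre as a fraction of each axis."""
    k = np.fft.fftshift(np.fft.fft2(image))          # centre = low freq
    h, w = image.shape
    mask = np.zeros_like(k, dtype=bool)
    ch, cw = h // 2, w // 2
    dh, dw = int(h * keep_fraction), int(w * keep_fraction)
    mask[ch - dh:ch + dh, cw - dw:cw + dw] = True
    low = np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, k, 0))))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, 0, k))))
    return low, high

img = np.random.rand(64, 64)          # stand-in for a slice
low, high = split_k_space(img)
print(np.allclose(low + high, img))   # True
```

Because the FFT is linear, the two components sum back to the original image, so the low/high split cleanly partitions the information between the two proposed training sets.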
 
2. Another example: There's a joke amongst neurosurgical residents that they should be dual-certified in neurorads, because seven years of looking at neurosurgical images is at least as good as a one-year neurorads fellowship. No surgeon, let alone neurosurgeon, would go into the OR without reviewing the patient's imaging, and many surgeons have the patient's imaging up throughout the whole operation. I've never (personally) seen a surgeon read the radiologist's note during the procedure. Shouldn't the surgeon therefore be reimbursed for reading the patient's image while in the OR? Again, it's a misalignment of value-based care. If the surgeon has a question about the image, he/she should request a radiologist's interpretation. The model of reading every image that hits the PACS needs to change.

I'm a neuroradiologist. Neurologists and neurosurgeons are pretty good at reading > 90% of the studies, particularly when the findings are only in their area of interest. The problem is, it's hard to tell which 90%, and that 10% can result in immense problems. In my career, I've seen several colossal errors because physicians made decisions on the images without the reports. But they were right most of the time.

Imagine you had a keyboard with 100 keys, one of which would set off a stick of dynamite. Your IT guy tells you, "Don't worry, I'm 99% sure each key is safe."

Automated diagnosis will come, but in stages. There will be a lot of intermediate steps, where software may point out key features, quantify findings, etc. We already have a few, like the example above, RAPID. It's great and it works about 90% of the time. But you show it a tumor and it can't tell you it's a tumor; it's just looking for strokes.
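To give a sense of how bounded the deterministic core of such a tool is, here's a toy threshold-and-count volume estimate. This is not RAPID's actual (proprietary) algorithm, just an illustration of the kind of quantification it performs, using the commonly cited Tmax > 6 s penumbra threshold:

```python
import numpy as np

def lesion_volume_ml(tmax_map, voxel_dims_mm, threshold_s=6.0):
    """Threshold a Tmax perfusion map and return the suprathreshold
    volume in millilitres. A toy stand-in for deterministic
    quantification; not any vendor's actual algorithm."""
    voxel_ml = np.prod(voxel_dims_mm) / 1000.0   # mm^3 per voxel -> mL
    return float(np.count_nonzero(tmax_map > threshold_s) * voxel_ml)

# synthetic 3-D Tmax map with a 10x10x10-voxel region of delayed perfusion
tmax = np.zeros((64, 64, 20))
tmax[20:30, 20:30, 5:15] = 8.0   # seconds
vol = lesion_volume_ml(tmax, voxel_dims_mm=(1.0, 1.0, 5.0))
print(round(vol, 6))  # 5.0
```

The hard parts in practice (motion correction, deconvolving the perfusion signal, excluding vessels) are exactly what separate a real product from this sketch, which is also why such tools stay narrowly scoped.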
 
I think 15 years is a good number for widespread adoption. I know there's a lot of focus on AI and its impact on radiology, but I think there are other factors that diagnostic radiologists should be concerned about. PACS was a major boon to radiologists' throughput, but it also gave the referring physician easy access to images. Images are no longer siloed in the radiologist's workroom, and residents routinely learn to read images within their domain. The finances haven't caught up with this situation, and the current radiology reimbursement model is quite at odds with value-based care. Let me give some concrete examples:

1. A patient presents to the ED with suspected pneumonia. The ED attending orders a chest radiograph and confirms the diagnosis on the mobile monitor (the screen attached to the mobile x-ray unit). The attending admits the patient to medicine, and the internist reviews the imaging on their local workstation (e.g. via an Epic link to the PACS). Neither the ED attending nor the internist needed the radiologist's read. An hour or so later, the x-ray is reviewed by the radiologist, who is reimbursed even though their impression was not used to guide care. Should the patient be charged for the radiologist's read when that read did not affect treatment? Many would say, "Well, there could have been something else going on, only visible to the radiologist!" But that's exactly the wrong answer in an ACA world. If a referring provider reads an image and identifies a reasonable diagnosis, the marginal value of a radiologist's read is too low given the expense.

2. Another example: There's a joke amongst neurosurgical residents that they should be dual-certified in neurorads, because seven years of looking at neurosurgical images is at least as good as a one-year neurorads fellowship. No surgeon, let alone neurosurgeon, would go into the OR without reviewing the patient's imaging, and many surgeons have the patient's imaging up throughout the whole operation. I've never (personally) seen a surgeon read the radiologist's note during the procedure. Shouldn't the surgeon therefore be reimbursed for reading the patient's image while in the OR? Again, it's a misalignment of value-based care. If the surgeon has a question about the image, he/she should request a radiologist's interpretation. The model of reading every image that hits the PACS needs to change.

The question about AI / machine learning is set against the backdrop of these observations about value-based care. I'm fascinated by radiology because it has long been the one specialty that values innovation and embraces technology. I think the future of radiology looks like MSK/Breast/IR => more procedures and interaction with patients, with fewer reads. The read volume can be reduced by a) not reading every image on the PACS, as noted above, and b) using machine learning / AI to screen out simpler reads such as normals.
The more I hear about Naijaba's views on, well, anything really, the less concerned I get about AI. I mean, the sheer cluelessness of everything you write... I think you should stop posting about anything related to radiology until after your R1 year.
 
The more I hear about Naijaba's views on, well, anything really, the less concerned I get about AI. I mean, the sheer cluelessness of everything you write... I think you should stop posting about anything related to radiology until after your R1 year.

With what aspects of that post do you disagree?
 
It seems there's a lot of disagreement about my examples; perhaps they are incorrect. It wouldn't be the first time I've been wrong. My point was that radiology does a lot of reads. The solution to increasing volume has been to innovate (PACS, dictaphones, templates, etc.), and AI is another step toward handling that volume. The alternative to growing volumes, albeit unsettling to the field of radiology, is to allow the referring physician or AI to handle part of the volume.


A lot of clinicians will claim that they can read images, and some may even claim that they can read images well (from my interactions with neurosurgery, they aren't the best; ortho are probably the best at reading their own).

I can see how someone earlier on in training may feel that way about clinicians coming up with the right diagnosis. However, DR isn't just the right diagnosis.

It's the right dx, done fast, all the time, plus all incidentals.

No offense to Naijaba, but you guys are arguing with someone who hasn't spent a single day training as a radiologist. I feel like unless you have assumed the role, you really have no concrete idea as to how AI will actually affect radiologists.

That's a huge issue I have with all of this. It seems like the loudest voices in the room don't really have any idea what a radiologist does or how he/she does it. We shouldn't be surprised by this, because even many people in medicine, including our fellow physicians, don't know either. All of the examples I've seen of what AI can do, or is on the verge of doing, are gross oversimplifications of only a small part of what I do.

The examples are so inadequate, sometimes laughably so, that the tendency may be to be dismissive, which is dangerous in its naivety. We ignore this issue at our own peril, even if it's not an existential peril. But in the meantime, I'll sleep well until a radiologist with a robust understanding of the issue (they are out there) advises me to toss and turn.

The more I hear about Naijaba's views on, well, anything really, the less concerned I get about AI. I mean, the sheer cluelessness of everything you write... I think you should stop posting about anything related to radiology until after your R1 year.
 
Imaging volume has actually plateaued. Either way, saying we should let referring clinicians handle imaging because of volume is like saying we should let psychiatrists perform surgery because surgical volume is too high. Referrals wouldn't be referrals if the referrers had full imaging training (except in the case of IR).
 
Imaging volume has actually plateaued. Either way, saying we should let referring clinicians handle imaging because of volume is like saying we should let psychiatrists perform surgery because surgical volume is too high. Referrals wouldn't be referrals if the referrers had full imaging training (except in the case of IR).

Gotcha. I guess the advice people are trying to give me on this forum is that the gap between radiology and internal medicine is just as wide as the gap between surgery and psychiatry. That is something I need to appreciate.
 
Gotcha. I guess the advice people are trying to give me on this forum is that the gap between radiology and internal medicine is just as wide as the gap between surgery and psychiatry. That is something I need to appreciate.

It's something that you will come to acutely appreciate, though others will not. From what I can see, you may have been influenced by clinicians who tell you that radiologists add no value, that they can be replaced (not augmented) by AI, and that they could do your job themselves.
 
AI and machine learning is an interesting development, but it's nothing to really get excited about. Unless one is an administrator.

Reads from radiologists are already pretty good. Given the way we're envisioning the endgame of AI, I doubt it will significantly improve the quality of reads, but it will significantly increase the speed of an adequate preliminary read, and therefore the number of studies that can be performed (although there's probably overutilization already!). The bottleneck then becomes how fast one can scan.

AI is a robot designed to do a human's job. Those interested in cutting costs to save insurance companies money will be ecstatic to combine this robot with increasing APP utilization. This, it seems, is how all of medicine will work in the future, from surgeons to family practice: replace high-cost MDs with more efficient solutions. Efficient solutions usually cut waste and save money... but they won't save the MDs anything. This technology will not translate into increased reimbursement for any doctor, because reimbursements aren't that "sticky" anymore. The report has effectively been commoditized, and MDs are now "quality control" for the assembly line. If you think radiology is tedious now.....

It also probably won't lead to significantly improved patient outcomes (unless you're measuring AI against non-radiologist reads). It won't mean more free time. It will mean fewer radiologists, probably working the same hours, checking and signing off on many more machine-prelimmed studies. This is not some Shangri-La scenario. The computer scientists who pretend to be MDs really don't give a **** about the outcome, since they all plan on cashing in on some fantasy IPO for their machine learning startup, or moving up into administrator positions.

This is not about medicine. This is not about patients. This is not about intellectual curiosity. This is not about improved accuracy.
The endgame is a technology to save health care systems money by removing high cost MDs. Maybe that's what the system needs. But let's not pretend that this is for the good of mankind or something. We don't need to double or triple our utilization and we have enough radiologists. And we will have radiologists for the next 100 years at least, even if radiology departments no longer exist.
 
We've pretty much exhausted this topic already, but as an engineer (my formal education is not in software engineering per se, though I grew up using programming languages and taught myself machine learning) and a radiology resident, I'd like to add my perspective. I think the idea of AI replacing radiologists is overhyped and sensationalistic, but AI can be tremendously helpful to radiologists. Some barriers include:

1) The technology isn't there (in fact, it's far from it). Even the simplest modalities with the least variation (mammography and CXRs) aren't there. If one modality is conquered, there are hundreds more. And new imaging techniques come into our arsenal every year, so how do you replace an imaging prognosticator that hasn't been discovered or invented yet? In addition, as many people have mentioned over and over again, we do more than just locate white dots on a mammogram. There is a lot of analysis and medical thought in each read (ideally, there should be).

2) The legality and implementation of it will be a nightmare. Anyone who has been a medical student can tell you how hectic the logistics of a hospital are.

3) The use of AI may decrease the number of radiologists needed, but it will not eliminate the need for radiologists. Furthermore, radiologists' roles are fluid. What we did 30 years ago isn't the same as what we do today. Radiologists can and will adapt to the shifting landscape and take on different roles if required (for example, supervising software-generated reads if that ever happens, reviewing ambiguous cases, or becoming more clinically oriented as consultants, to name a few).

4) In academia there will always be a need for radiologists to do research on diseases and imaging modalities (functional MRI and diffusion MRI, to name a few). Every year, more and more diseases can be diagnosed by imaging alone. For example, biopsy is no longer needed for HCC; it can be replaced by a triple-phase scan.


I think what will end up happening is that, if implemented, machine learning software in radiology will become a helpful tool for radiologists to better diagnose and quantify disease and its processes (a net positive for our field). World-class academic radiology departments have begun to realize this and are getting involved themselves, either in-house or by partnering with industry:

-BWH/MGH: BWH, MGH Partner to Advance Artificial Intelligence in Health Care

-UCSF: Paging Dr. Algorithm: GE And UCSF Bring Machine Learning To Radiology


The healthcare landscape increasingly demands that each service prove its utility, and the trend toward quantification of imaging is actually a great tool for radiologists to do so. For example, in the future, if a patient with mild cognitive impairment comes into the neurologist's office and the neurologist cannot make a diagnosis based on clinical history alone, they can refer to the neuroradiologist for imaging. If the neuroradiologist produces a report stating "most consistent with preclinical Alzheimer disease, extremely high probability (given the amyloid burden on PET in a classic distribution and the FDG-PET pattern); less likely NPH, vascular dementia, or DLBD; recommend LP to assess tau protein levels," then our service is extremely valuable and drives clinical care. Instead of hedging with "cannot exclude this and that" or "clinical correlation recommended," we can say something with more confidence. My prediction is that radiology will become increasingly valuable as it becomes more data-driven and quantitative (as all medical specialties should, with or without AI). If we are able to continue improving diagnostic precision non-invasively, radiology will not disappear; it will become even more relevant. Using machine learning to quantify imaging data and better diagnose disease is one of the main reasons I chose radiology.


For students who are interested in radiology but worried about AI taking over your future jobs as radiologists... realize that AI/machine learning isn't limited to radiology. There is AI development in pretty much every specialty (save perhaps psychiatry), including the ROAD specialties.

1) Anesthesiology: The moment came and passed, and anesthesiologists appear to have won... for now. This is also a good cautionary tale of man vs. machine, given that the machine was good enough to be FDA approved. It’s game over for the robot intended to replace anesthesiologists

2) Ophthalmology: http://jamanetwork.com/journals/jama/fullarticle/2588763

3) Dermatology: http://www.nature.com/nature/journal/v542/n7639/full/nature21056.html

4) Surgery: Supervised autonomous robotic soft tissue surgery | Science Translational Medicine
 
The legality will probably not end up being that problematic. The radiologist will be liable because she or he must sign off on the automated read. AI in rads for the first 200 years will probably be just a super resident, not a replacement.

Increasing diagnostic imaging precision in whatever form is very exciting. I don't think it's the same automated machine learning that's been proposed here, though. If it improves diagnosis, then I'm all for it. If it is really just a way to cut costs through increasing volume by eliminating the human element, then it's a business decision, not a medical one.
 
This is a fantastic thread, and I wanted to post just to say that I enjoyed reading through it (from the perspective of a neurologist). I posted a link in the companion thread to an article about how a Silicon Valley type thinks rads will be obsolete in 5 years. Just like Watson was going to make heme-onc obsolete. Just like I was told in med school that MRI was going to make neurology obsolete. Or that nurses were going to make anesthesiology obsolete. Or that PCPs would make specialists obsolete, or vice versa. One of the most refreshing things about this thread is how familiar it all is: change is coming, the sky is falling; or it isn't, or it won't.

I take the view that things are going to stay the same more than they will change. It is, however, REALLY hard to make predictions about the future and the impact of technology. There are clearly two emerging threats to radiology: telemedicine, resulting in increased supply and competition per read, thus lower prices. And these AI systems that can give a pre-lim read.

While much of the thread focuses on feasibility, there's a political angle here. Emerging technologies have a way of very much helping the people at the top and the entrenched powers. Radiologists form an entrenched power at the top. Both these technologies face an uphill battle if they don't complement and add value - TO RADIOLOGISTS. If they don't add value to radiologists, then they will be pitched. A system that's not nearly perfect doesn't add value - it actually subtracts.

Will radiology change? Totally yes! A radiologist might oversee the pre-lim reads, perhaps get more clinical, do more procedures, but reports of radiology's death are greatly overstated.
 
This is a fantastic thread, and I wanted to post just to say that I enjoyed reading through it (from the perspective of a neurologist). I posted a link in the companion thread to an article about how a Silicon Valley type thinks rads will be obsolete in 5 years. Just like Watson was going to make heme-onc obsolete. Just like I was told in med school that MRI was going to make neurology obsolete. Or that nurses were going to make anesthesiology obsolete. Or that PCPs would make specialists obsolete, or vice versa. One of the most refreshing things about this thread is how familiar it all is: change is coming, the sky is falling; or it isn't, or it won't.

I take the view that things are going to stay the same more than they will change. It is, however, REALLY hard to make predictions about the future and the impact of technology. There are clearly two emerging threats to radiology: telemedicine, resulting in increased supply and competition per read, thus lower prices. And these AI systems that can give a pre-lim read.

While much of the thread focuses on feasibility, there's a political angle here. Emerging technologies have a way of very much helping the people at the top and the entrenched powers. Radiologists form an entrenched power at the top. Both these technologies face an uphill battle if they don't complement and add value - TO RADIOLOGISTS. If they don't add value to radiologists, then they will be pitched. A system that's not nearly perfect doesn't add value - it actually subtracts.

Will radiology change? Totally yes! A radiologist might oversee the pre-lim reads, perhaps get more clinical, do more procedures, but reports of radiology's death are greatly overstated.

Hi neglect, thanks for your thoughts. Since you're a neurologist, I wonder if I could get your take on iSchemaView RAPID (Home - iSchemaView RAPID). I often use them as an example of where radiology is headed. They are one of the few companies who have found success with the model:

CT/MRI Image Acquisition -> PACS -> Cloud -> PACS -> Radiologist

Here the "Cloud" performs some computation and generates a novel image/graph/measurement/calculation that's stored back on the hospital PACS. The radiologist can choose to view the new image (or not) alongside the original image. The main takeaway, consistent with what you've written, is that iSchemaView RAPID provides more accurate prediction of infarct volume following thrombectomy. The flip side of the legality argument is, "What if we know algorithms provide more accurate results than radiologists?" Shouldn't radiologists be obligated to use these technologies especially in acute care situations? Note that iSchemaView RAPID is not a machine-learning system. From my understanding it uses conventional (deterministic) image-processing techniques to identify the ischemic penumbra.
 
Hi neglect, thanks for your thoughts. Since you're a neurologist, I wonder if I could get your take on iSchemaView RAPID (Home - iSchemaView RAPID). I often use them as an example of where radiology is headed. They are one of the few companies who have found success with the model:

CT/MRI Image Acquisition -> PACS -> Cloud -> PACS -> Radiologist

Here the "Cloud" performs some computation and generates a novel image/graph/measurement/calculation that's stored back on the hospital PACS. The radiologist can choose to view the new image (or not) alongside the original image. The main takeaway, consistent with what you've written, is that iSchemaView RAPID provides more accurate prediction of infarct volume following thrombectomy. The flip side of the legality argument is, "What if we know algorithms provide more accurate results than radiologists?" Shouldn't radiologists be obligated to use these technologies especially in acute care situations? Note that iSchemaView RAPID is not a machine-learning system. From my understanding it uses conventional (deterministic) image-processing techniques to identify the ischemic penumbra.

I'd say NeuroQuant is another example of an interesting use of computers to augment reads. Both it and iSchemaView RAPID do something that computers do very well: stay in bounds. A human says: computer, I want you to calculate the infarct volume. Or: I want you to calculate the size of this hippocampus. Humans aren't so good at that.

But this data isn't enough to really change management or drive a diagnosis. It's a way to augment a read. It's a BP cuff: it measures something very specific and discrete. Plug a seizure, brain tumor, or big MS plaque in there, ask RAPID a question, and it's garbage in, garbage out.

So I agree, this is where computer reads should go: into augmenting existing reads.
 
I'm wary of bringing these threads back to life, but I think radiologists and trainees should be up-to-date on the latest in machine learning. I think this is the biggest breakthrough of the year: https://arxiv.org/abs/1703.06870

It didn't really pique my interest until now; a couple of weeks ago a free implementation was made available (thanks, HackerNews): GitHub - matterport/Mask_RCNN: Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

You can read the abstract (or the paper), but suffice it to say the model allows segmentation of objects within a single image at the pixel level. Prior radiology deep learning approaches have focused on a single problem ("does this patient have a pneumothorax?", "what's the bone age?", "how many malignant lung lesions?"). These models are too specific to be used as a "pre-read" of images; they only consider specific diagnoses. With a Mask R-CNN and an enormous training set, you could identify and apply pixel-perfect boundaries to findings in an image. If you then tie each of these segmented objects to a workup algorithm (e.g. the pixels of a pneumothorax could be sent to an algorithm for volumetric measurement), you get very close to an automated report.

Exciting times for anyone who has a lot of data.
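The "segmented object -> workup algorithm" hand-off described above can be sketched independently of any particular detector. Assuming only the (H, W, N) boolean mask stack that Mask R-CNN implementations typically return, the routing logic might look like this (the handler name and pixel-spacing figure are hypothetical):

```python
import numpy as np

def route_masks(masks, class_ids, handlers):
    """Send each instance mask to a per-diagnosis workup function,
    mimicking a 'segmented object -> workup algorithm' hand-off.
    masks: (H, W, N) boolean array of instance masks.
    handlers: {class_id: fn(mask) -> finding dict}."""
    findings = []
    for i, cid in enumerate(class_ids):
        if cid in handlers:
            findings.append(handlers[cid](masks[:, :, i]))
    return findings

def pneumothorax_area_cm2(mask, pixel_mm=0.5):
    # toy area estimate from mask pixels (mm^2 -> cm^2)
    return {"dx": "pneumothorax",
            "area_cm2": mask.sum() * (pixel_mm ** 2) / 100.0}

masks = np.zeros((128, 128, 1), dtype=bool)
masks[30:50, 30:50, 0] = True     # 400-pixel mock finding
out = route_masks(masks, class_ids=[1], handlers={1: pneumothorax_area_cm2})
print(out[0]["area_cm2"])  # 1.0
```

A real pipeline would need calibrated pixel spacing from the DICOM header and a true volumetric model rather than a 2-D area, but the plumbing between segmentation and workup really is this simple.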
 
(quoting the post above) "I think this is the biggest breakthrough of the year: https://arxiv.org/abs/1703.06870 ... Exciting times for anyone who has a lot of data."


https://arxiv.org/abs/1710.09829
 
(quoting the post above) "I think this is the biggest breakthrough of the year: https://arxiv.org/abs/1703.06870 ... Exciting times for anyone who has a lot of data."

So if you believe this, I'd imagine you resigned your residency spot today?
 
With a Mask R-CNN, and an enormous training set, you could identify and apply pixel-perfect boundaries to findings in an image. If you then tie each of these segmented objects to a workup algorithm (e.g. the pixels of a pneumothorax could be sent to an algorithm for volumetric measurement), you get very close to an automated report.

Exciting times for anyone who has a lot of data.

This is where AI will impact radiology. It will be great for assessing global volumes of pleural effusions, sizes of liver lesions, etc., and will vastly improve workflow. It will do what robots do best: the menial tasks that everyone hates. I don't, however, think it will have any complex diagnostic capabilities in our lifetimes.
 


Artificial Intelligence Preview
Sponsored by Philips

AI Scientific and Educational Presentations at RSNA 2017
AI can predict invasiveness of lung adenocarcinoma
Sunday, November 26 | 10:45 a.m.-10:55 a.m. | SSA05-01 | Room S404CD
A Japanese group has found that an artificial intelligence (AI) algorithm can predict the level of pathological invasiveness of lung adenocarcinoma nearly as accurately as a highly experienced radiologist.
Deep learning can assess malignancy risk of lung nodules
Sunday, November 26 | 10:55 a.m.-11:05 a.m. | SSA12-02 | Room S403A
In this scientific session, researchers will describe how a deep-learning algorithm can provide an objective measure for assessing the malignancy risk of a lung nodule.
Chest x-ray algorithm shows why it made its diagnosis
Sunday, November 26 | 11:35 a.m.-11:45 a.m. | SSA12-06 | Room S403A
A team from India will present a deep-learning algorithm that highlights the areas of the chest x-ray that led to the algorithm's diagnosis.
Machine learning may enhance utility of FFR-CT
Sunday, November 26 | 11:45 a.m.-11:55 a.m. | SSA04-07 | Room S504AB
A machine-learning algorithm shows potential for facilitating the use of fractional flow reserve (FFR) calculations on coronary CT angiography studies, according to research being presented in this scientific session.
Deep learning can identify malpositioned feeding tubes
Sunday, November 26 | 11:45 a.m.-11:55 a.m. | SSA12-07 | Room S403A
In this talk, researchers will highlight the potential of deep learning for speeding up the detection of malpositioned feeding tubes in critically ill patients.
AI detects large pneumothoraces on chest x-ray
Sunday, November 26 | 12:05 p.m.-12:15 p.m. | SSA12-09 | Room S403A
Artificial intelligence (AI) algorithms can automatically detect large pneumothoraces on chest x-ray -- potentially speeding up detection and reporting of these critical findings, according to researchers from Philadelphia.
Deep learning helps find lung nodules on chest x-ray
Monday, November 27 | 11:00 a.m.-11:10 a.m. | SSC03-04 | Room S504CD
A U.K. team has found that deep learning-powered computer-aided detection software could potentially act as a second reader for chest radiographs.
Breast MRI neural network predicts treatment response
Monday, November 27 | 11:50 a.m.-12:00 p.m. | RC215-17 | Arie Crown Theater
In this session, researchers will discuss how neural networks based on a breast MRI tumor dataset can help clinicians predict patient response to neoadjuvant chemotherapy.
AI exploits tumor imaging features to predict survival
Monday, November 27 | 11:50 a.m.-12:00 p.m. | SSC04-09 | Room E353A
Artificial intelligence (AI) can make use of tumor heterogeneity features on MRI to accurately predict the survival of metastatic colon cancer patients, according to a study by Harvard researchers.
Machine learning can help diagnose Crohn's disease
Monday, November 27 | 11:50 a.m.-12:00 p.m. | SSC05-09 | Room E451A
A machine-learning technique can diagnose Crohn's disease with high sensitivity and specificity, researchers from Italy report.
AI can detect, characterize kidney stones
Monday, November 27 | 3:40 p.m.-3:50 p.m. | SSE12-05 | Room E353A
An artificial intelligence (AI) algorithm can be used to accurately detect and characterize kidney stones, according to researchers from Boston.
Machine learning helps predict lithotripsy outcome
Monday, November 27 | 3:50 p.m.-4:00 p.m. | SSE12-06 | Room E353A
A Swiss group will present its experience in using CT texture analysis and machine learning to predict the successful treatment of kidney stones with shock-wave lithotripsy.
Deep learning helps forecast cancer treatment response
Tuesday, November 28 | 10:30 a.m.-10:40 a.m. | SSG13-01 | Room S404AB
The combination of deep learning and radiomics features could help radiologists estimate the likelihood of a bladder cancer patient responding to neoadjuvant chemotherapy.
Algorithm detects, segments lung nodules on CT
Tuesday, November 28 | 10:40 a.m.-10:50 a.m. | SSG13-02 | Room S404AB
In this scientific session, a team from imaging software developer Arterys will present its deep learning-based approach to detecting and segmenting lung nodules on CT scans.
Deep learning yields real-time coronary calcium scoring
Tuesday, November 28 | 10:50 a.m.-11:00 a.m. | SSG13-03 | Room S404AB
A deep-learning algorithm can swiftly quantify coronary artery calcium scores on low-dose CT lung screening exams, according to researchers from the Netherlands.
More may not always be better in deep learning
Tuesday, November 28 | 11:00 a.m.-11:10 a.m. | SSG13-04 | Room S404AB
Having more layers in a convolutional neural network doesn't necessarily lead to better performance for medical imaging tasks, reports a group from Chicago.
Deep learning with breast MRI boosts lesion detection
Tuesday, November 28 | 11:20 a.m.-11:30 a.m. | RC315-14 | Arie Crown Theater
A deep-learning method using multiparametric breast MRI improves automated detection and characterization of breast lesions, according to research being presented at this Tuesday morning session.
CAD enables lower radiation dose in CT lung screening
Tuesday, November 28 | 11:20 a.m.-11:30 a.m. | SSG13-06 | Room S404AB
In this talk, researchers will describe how computer-aided detection (CAD) can lead to marked reductions in radiation dose on low-dose CT lung cancer screening scans, even when image slice thickness is altered.
AI assesses cardiovascular risk on routine chest CT
Tuesday, November 28 | 11:40 a.m.-11:50 a.m. | SSG13-08 | Room S404AB
A Dutch team has found that its artificial intelligence (AI) algorithm can automatically perform cardiovascular risk assessment from routine chest CT studies.
Deep learning can spot significant coronary stenosis
Tuesday, November 28 | 11:50 a.m.-12:00 p.m. | SSG02-09 | Room S504AB
In this session, researchers will describe how deep learning shows potential for identifying patients with functionally significant coronary artery stenosis.
Machine learning differentiates brain tumors
Tuesday, November 28 | 3:10 p.m.-3:20 p.m. | SSJ19-02 | Room N228
Machine learning can distinguish between glioblastoma multiforme and primary central nervous system lymphoma on multiparametric MRI, according to researchers from Japan.
Algorithm virtually enhances resolution of microdose CT
Tuesday, November 28 | 3:10 p.m.-3:20 p.m. | SSJ22-02 | Room S403B
A major drawback of lowering CT radiation dose is a loss in image quality, but researchers found that microdose CT can deliver high-quality scans with the aid of a deep-learning algorithm.
Machine learning predicts working memory performance
Tuesday, November 28 | 3:30 p.m.-3:40 p.m. | SSJ19-04 | Room N228
This Tuesday afternoon session will reveal how machine learning can predict a person's working memory performance by analyzing brain white-matter microstructure.
Deep-learning software boosts breast US performance
Tuesday, November 28 | 3:40 p.m.-3:50 p.m. | SSJ02-05 | Room E450A
When it comes to identifying breast cancer, deep-learning software for breast ultrasound achieves diagnostic accuracy comparable to that of radiologists, Swiss researchers have found.
Finding breast cancer: How do computers compare with radiologists?
Wednesday, November 29 | 10:40 a.m.-10:50 a.m. | SSK02-02 | Room E451A
How do deep-learning algorithms compare with radiologists when it comes to finding cancer on mammography? Find out in this Wednesday morning presentation.
Machine learning can help predict KRAS mutation status
Wednesday, November 29 | 10:40 a.m.-10:50 a.m. | SSK07-02 | Room E353A
Machine learning and quantitative MRI features can assist in predicting the KRAS mutation status of tumors in patients with metastatic colon cancer, according to Harvard researchers.
CADx software performs well for bone age assessment
Wednesday, November 29 | 10:50 a.m.-11:00 a.m. | RC513-10 | Room E352
A computer-aided diagnosis (CADx) software application can accurately perform automated bone age assessment in children, a German group has found.
CAD software tracks changes in brain metastases
Wednesday, November 29 | 11:00 a.m.-11:10 a.m. | RC505-09 | Room E451B
Computer-aided detection (CAD) software can be used to detect and quantify changes in brain metastases on MRI, according to researchers from Philadelphia.
Pairing AI, radiologists improves bone age assessment
Wednesday, November 29 | 11:00 a.m.-11:10 a.m. | RC513-11 | Room E352
In this scientific session, researchers will explain why artificial intelligence (AI) and radiologists are better together when it comes to bone age assessment.
Deep learning can predict infarction risk after stroke
Wednesday, November 29 | 11:00 a.m.-11:10 a.m. | SSK15-04 | Room N226
In this morning talk, researchers will describe how deep learning can help guide treatment decisions for patients with acute ischemic stroke.
AI may enhance MRI-guided adaptive radiation therapy
Wednesday, November 29 | 3:00 p.m.-3:10 p.m. | SSM12-01 | Room S404CD
Artificial intelligence (AI) can provide automatic contouring of tumors and organs to support daily MRI-guided adaptive radiation therapy, researchers will report in this Wednesday session.
Machine learning forecasts survival in glioma patients
Wednesday, November 29 | 3:10 p.m.-3:20 p.m. | SSM12-02 | Room S404CD
Machine learning using MRI radiomic features may be able to predict the survival of patients with gliomas, according to this study from a research group in Taiwan.
Machine learning can determine onset of stroke symptoms
Wednesday, November 29 | 3:50 p.m.-4:00 p.m. | SSM12-06 | Room S404CD
A South Korean research team will describe the potential of machine learning for the crucial task of determining when acute ischemic stroke patients began experiencing symptoms.
3D CADv predicts recurrence of pulmonary nodules on CT
Thursday, November 30 | 11:40 a.m.-11:50 a.m. | SSQ18-08 | Room S403B
Researchers from Japan have demonstrated that 3D computer-aided detection and volumetry (CADv) software applied to CT scans can predict the recurrence of malignant lung nodules.
Deep learning may sharply increase specificity of CCTA
Thursday, November 30 | 11:50 a.m.-12:00 p.m. | SSQ02-09 | Room S502AB
The combination of a deep-learning algorithm and visual stenosis grading could significantly boost the specificity of coronary CT angiography (CCTA) for detecting functionally significant stenosis, a Dutch team has found.
Deep learning can quantify fat around the heart
Friday, December 1 | 10:30 a.m.-10:40 a.m. | SST02-01 | Room E450A
A deep-learning algorithm can rapidly segment and quantify the volume of thoracic fat surrounding the heart, according to researchers from Cedars-Sinai.
Deep learning can predict stenosis on fast SPECT-MPI
Friday, December 1 | 11:00 a.m.-11:10 a.m. | SST02-04 | Room E450A
Researchers have found that deep learning can improve the detection of potentially significant ischemic defects on raw, high-speed SPECT myocardial perfusion imaging (MPI) studies.
Breast MRI neural networks predict recurrence scores
Friday, December 1 | 11:20 a.m.-11:30 a.m. | SST01-06 | Room E450B
Researchers in New York have found that deep-learning networks can be trained with breast MRI data to predict Oncotype DX recurrence scores.
 
Foo, "cool" list and all, but some of those are garbage. A computer can diagnose a misplaced Dobhoff tube 84% of the time? That's great, but I can do it essentially 100% of the time in about three seconds. So it's not as impressive as it sounds.
 
A driverless shuttle bus crashed less than two hours after it was launched in Las Vegas on Wednesday. The city's officials had been hosting an unveiling ceremony for the bus, described as the US' first self-driving shuttle pilot project geared towards ...

An onlooker at the scene said: "Yeah, I saw the whole thing. The driverless shuttle ran into a group of radiologists who were enjoying the Las Vegas sights and not paying any attention."

;-) ;-) ;-)
 
Foo, "cool" list and all, but some of those are garbage. A computer can diagnose a misplaced Dobhoff tube 84% of the time? That's great, but I can do it essentially 100% of the time in about three seconds. So it's not as impressive as it sounds.

I was not trying to impress anyone.

RSNA to Publish Three New Journals
RSNA will begin publishing three new subspecialty journals in 2019. The journals will be published solely online and will cover cancer imaging, cardiothoracic imaging, and machine learning/artificial intelligence. The new journals will complement Radiology and RadioGraphics and help keep practicing physicians and imaging researchers up to date on the best emerging science in each subspecialty.

CaptainSSO, look out, don't get hit by that driverless bus!
 