Machine learning and radiology

Veryotaku999:
Haha, well said - the guy seems almost bitter, and he's going way out of his way to predict the demise of radiology. Medicine shifting to the outpatient setting and 5-10 years for machine replacement? What has this guy been smoking?

I don't think machines will replace radiologists any sooner than an advanced WebMD will replace your primary care doctor, or an anesthetic delivery machine (yes, they do exist) will replace an anesthesiologist.

Not sure why we even give this joker a platform to talk.
 
Hey, my background is in computer science, with a master's specialization in theoretical computer science / machine learning. I absolutely believe computers will replace the basic reads in radiology. I like to make an analogy with pathology. Do pathologists review CMPs/CBCs/LFTs? Nope! They perform more involved reads such as bone marrow biopsies, peripheral smears, surgical pathology, etc. The low ROI of plain films, together with the redundancy and accuracy of automated methods, makes it economically sensible to automate these reads.

Plain films, ventilation-perfusion scans, 2D ultrasounds, and pretty much anything outside 3D volumetric reads and angiograms can be automated with technologies currently in existence.

If anyone is interested in this topic, I think the path looks something like this:

1. Develop a simple system that reads plain-film chest X-rays and produces a standardized report (a toy sketch of the report piece follows at the end of this post).
2. Run clinical trials demonstrating the superiority of the method to traditional reads.
3. Obtain FDA approval for automated reads.


FDA approval for algorithms is the real hurdle. A system that saves money and has FDA approval is readily adopted by hospitals. There's one company I come back to frequently as evidence of this future: iSchemaView. They provide an automated system for identifying regions of blood flow. Their device has FDA approval and is in numerous hospitals throughout the country. Granted, it's used for acute stroke treatment, but it's evidence that the FDA is at least willing to consider algorithmic solutions to existing problems.
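
To make step 1 concrete, here's a toy sketch of what the report-generation piece might look like. The findings list, threshold, and probabilities are all hypothetical placeholders, not a real product:

```python
# Hypothetical sketch of step 1: turn per-finding probabilities from a
# trained chest X-ray classifier into a standardized report. The finding
# names, threshold, and model outputs are illustrative only.

FINDINGS = ["cardiomegaly", "pleural_effusion", "pneumothorax", "consolidation"]
REPORT_THRESHOLD = 0.5  # flag a finding when the model is >50% confident


def standardized_report(probabilities: dict[str, float]) -> str:
    """Render model outputs as a fixed-format report a clinician can scan."""
    lines = ["CHEST X-RAY -- AUTOMATED PRELIMINARY READ"]
    positives = []
    for finding in FINDINGS:
        p = probabilities.get(finding, 0.0)
        status = "PRESENT" if p >= REPORT_THRESHOLD else "not identified"
        lines.append(f"  {finding:<18} {status:<15} (p={p:.2f})")
        if p >= REPORT_THRESHOLD:
            positives.append(finding)
    impression = ", ".join(positives) if positives else "no acute findings flagged"
    lines.append(f"IMPRESSION: {impression}")
    return "\n".join(lines)


# Example with made-up probabilities, e.g. from a classifier's sigmoid outputs:
print(standardized_report({"cardiomegaly": 0.81, "pleural_effusion": 0.12}))
```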
 

Accuracy of automated reads? Says who?

Yes, the millions, probably even billions, being invested in this pursuit have yielded minimal fruit, but you, with your BS and your master's, you can automate the plain film.

The pathology comparison is terrible and nonsensical.
 

You have the same position many people had back when speech recognition was believed to be an unsolvable problem. In the past 15 years, incredible breakthroughs have been made by revisiting algorithms once discarded as useless in the 1970s. These are the "deep learning" and "neural network" methods now being trained for self-driving cars and Google's autocomplete. Speech recognition went from 88-90% accurate to 95%+. It is these algorithms to which I am referring. There haven't been millions spent on them, let alone billions. The problem with these algorithms is access to data: they are specifically designed to get better the more data you have. This is what the Aunt Minnie article is suggesting. I'll stress it one more time: these algorithms are NOT the same as the ones in Watson or in previous prediction systems. Give me 100 million chest X-rays with the proper diagnoses and, yes, I will train a model that will match the best radiologist. The method is clear; it's a matter of getting the data. And, of course, there's no guarantee that any hospital will use it. But it will work.
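
For the curious, here's a minimal sketch (in PyTorch) of the kind of supervised setup I mean. The network is deliberately tiny, and the labeled archive feeding `train_loader` is a hypothetical placeholder:

```python
# Minimal sketch of the supervised deep-learning setup described above:
# a convolutional network that improves as you feed it more labeled films.
import torch
import torch.nn as nn

class TinyCXRNet(nn.Module):
    """A deliberately small CNN; real systems are far deeper."""
    def __init__(self, num_findings: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, num_findings)  # for 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = TinyCXRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()  # multi-label: several findings per film

# train_loader would stream (image, findings) pairs from a labeled archive;
# the key property is that accuracy keeps improving as that archive grows.
def train_epoch(train_loader):
    for images, labels in train_loader:  # images: (N,1,224,224), labels: (N,14)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels.float())
        loss.backward()
        optimizer.step()
```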
 
The problem is the subjectivity. There often is no single proper diagnosis, particularly on a CXR or other low-level imaging study. The interobserver variability in bibasilar atelectasis alone would sink the system.
 
I would take anything Zeke Emanuel spews with a grain of salt. This is a guy who specifically said that radiology is on his hit list. He clearly has a bias against the field. He also has interesting opinions about old people.

He and his brother, the mayor of Chicago, are the epitome of incompetence.

Hopefully he will be replaced after the next election.
 
@Naijaba Stick to computers; leave medicine to the doctors who actually know what they're talking about.
 

It's this variability that computers can normalize. Deep learning systems are not hard-coded with features. In traditional machine learning you'd tell the system to look for specific features; for example, to recognize a vehicle you could have a "tire" feature or a "windshield" feature. This is how Watson learns - it's a rules-based system.

With deep learning you provide training examples and the system learns what to look for. It's hard to explain, but it's one of those "ah-ha!" moments when you see how it's working. Using the example above, a deep learning system will identify "a circular shape" (e.g., wheels), "a rectangular shape" (e.g., a windshield), "lines on the road," "sidewalk shape," or "headlight rectangles." No single object determines the outcome; the summation of these learned attributes determines whether a car is present. What's compelling is that the individual attributes can be in any orientation (i.e., the car could be facing any direction), so the system doesn't suffer from orientation or positional biases to the extent older models did. It's fascinating stuff. I'm not sure how to convince naysayers other than by providing examples - and a toy code sketch of the rules-versus-learned contrast, after the links below. All of the following are recent (<15 years) results from deep learning:

1. Face recognition approaches human levels: https://research.facebook.com/publi...human-level-performance-in-face-verification/

2. Improving histopathology using deep learning: http://www.nature.com/articles/srep26286

3. Deep learning for pathology image analysis: http://www.jpathinformatics.org/art...=7;issue=1;spage=29;epage=29;aulast=Janowczyk

4. Emerging companies: http://www.enlitic.com/ https://www.zebra-med.com/

5. Lots more publications here (disclaimer: I worked in Dr. Rubin's lab): https://rubinlab.stanford.edu/node/897
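
And here's the toy sketch promised above, contrasting hand-coded rules with learned features. It's purely illustrative - the "car" features are hypothetical:

```python
import torch.nn as nn

# Rules-based (how I'd characterize Watson): every feature is an explicit,
# hand-written test. The features here are hypothetical.
def rules_based_is_car(has_tire: bool, has_windshield: bool) -> bool:
    return has_tire and has_windshield

# Deep learning: no feature is named anywhere. These 3x3 kernels start as
# random weights; training on labeled examples drives them to respond to
# circles, rectangles, road lines, etc., in any orientation.
learned_features = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
)

# Inspecting learned_features[0].weight after training is the "ah-ha!"
# moment: the filters look like tiny edge and curve detectors that nobody
# programmed.
```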
 
I agree that planar imaging studies may eventually go the way of lab analyzers, but the problem is determining what the proper diagnosis is.

Multiple things look the same. This is not "oh, there are some super subtle differences"; it's that the findings can look 100% identical, and the diagnosis is then determined by clinical history. The planar radiograph is not the definitive diagnostic exam, but rather a screening test on the way to the definitive test (CT, MR, or even ultrasound).

Getting machine-usable patient history is not a trivial problem; for now, it is our job to help sway the diagnosis one way or another.

I find machine learning as a "doctor replacement" a silly endeavor. It should be used to do things humans can't do.
 
I think machine learning will be awesomely helpful. I find reading chest CT extremely tedious because of nodule hunting. If I could just have CAD software pick up nodules for me, I would be happy. That and PET/CT: it would be nice to have software that could quickly compare the sizes and FDG uptake of lesions, which is another painfully long chore.
 

I'm a fourth-year med student applying into DR/IR. I have a knowledge gap when it comes to doing reads; I definitely don't claim to be an expert, and I appreciate the insight of practicing radiologists. I worked as a software developer for a PACS vendor before medical school and did research in machine learning.

I agree that X-rays do not provide the definitive diagnosis. The product I'm envisioning is more for referring physicians than radiologists. A referring physician could run the classifier and get a quick report on the X-ray and decide whether to order another study.

There is room for computer-aided diagnoses in radiology, and I think machine learning will play a key role in those products as well.

For what it's worth, I think that these developments are about 20 years off. Automation won't affect the job market until that time. Radiology is at a point where it can embrace machine learning and take ownership of these algorithms without anyone being displaced. This is the vision that I see for the future of radiology.
 
I think it is interesting that everyone assumes X-ray will be the first to go. Radiographs are actually much harder to read than CT and MRI. All of the overlapping anatomy/pathology, changes in rotation, and subtle changes in density are not problems experienced in cross-sectional imaging. It is an art to read these, in some sense. If anything, I think CT will go first to machine learning. Think about how symmetric and predictable a head CT is.
 

Yeah, there's this weird hierarchy among non-rads where people seem to think X-rays are useless or unimportant but an MRI saves the world.
 

Your "vision" cannot be further away from the truth. And I highly doubt that someone who has worked in the real world thinks that anything will be automated in medicine. Especially in the US healthcare system.

There are programs out there for mammograms that can "diagnose" cancer. If you were a real medical student who has done electives in radiology, you would've come across this, and you would've also heard how everyone thinks they're a joke. Anything that might have a 0.0000001 chance of being cancer is automatically diagnosed as cancer.

The best example of how useless machines can be in medicine is machine readings of EKGs. It's a very simple task - lines on a piece of paper that can be measured easily - but there is no doctor in the world who would trust a machine's reading of an EKG.

I've said it before and I'll say it again: people need someone to sue when something goes wrong, which happens more often than we'd like in medicine.
A radiologist who diagnoses everything as a rare disease or cancer would be out of a job in no time. Show me a company that is willing to take on the risk of misdiagnosing people's scans. And let's not even talk about the security of these computer systems and what would happen if someone managed to manipulate the algorithm, because that's a whole different rant.
 
When radiology can be automated, the whole employment landscape of the world will be drastically different. Who do you think is going to get replaced first: a radiologist, or the kid behind the McDonald's counter, or a store clerk? What do you think happens when those millions of unskilled laborers are out of jobs? Not good things. I worry about that far more, and I think it's in the much nearer future than medicine becoming automated.
 

Automation means putting people out of jobs, meaning governments will face two options: either create a nanny state that gives people allowances to live on, or accept 40-50% unemployment, which leads to poverty, disease, violence, etc.
Imagine automation of psychiatry. "And. How. Does. That. Make. You. Feel. Mr. Kazaki" *punches computer*
 
Think about how symmetric and predictable a head CT is.

Do you even take neuroradiology call yet, bro?
 
Gonna have to say it since it has not been explicitly said: computer-aided diagnosis will be helpful for bringing attention to asymmetry in things like neuro (first off, nobody's head is perfectly symmetrical) or to things like nodules. I don't doubt we will have more support, and I welcome it.
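
As a toy illustration of that kind of asymmetry flagging (my own sketch; the threshold is made up, and as noted, real heads are never perfectly symmetric):

```python
# Toy sketch of symmetry-based flagging (e.g., for head CT slices): mirror
# the image and look for large left-right differences. This only *flags*
# regions for a human to review; it diagnoses nothing.
import numpy as np

def asymmetry_map(slice_2d: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Return a boolean mask of pixels that differ sharply from their mirror."""
    mirrored = slice_2d[:, ::-1]  # flip left-right
    return np.abs(slice_2d.astype(float) - mirrored) > threshold

# e.g. flagged = asymmetry_map(ct_slice); review the slice if flagged.mean() > 0.05
```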

But. To. Make. A. Diagnosis. Is a HUGE leap.

Much of the conjecture in this thread consists of assumptions made by medical students, interns, and junior rad residents who haven't even read the gamut of radiology on call.
 

There is a hierarchy. I can look at a knee X-ray and formulate an impression in about 20 seconds. Fracture or not. Effusion or not. It is truly easy to read. But the real diagnosis is made on knee MRI. And believe me, a knee MRI is a lot more complicated, with much more anatomy to look at. There are tons of subtle findings on knee MRI that may mean something or not, much much more often than on a plain film of the knee. It simply shows your ignorance and lack of experience in radiology when you say such things.

You know, like how it's so much harder to read and diagnose things on a skull X-ray than on MRI.
 
Anyone who is worried about computers taking over radiology has never read mammo with computer-aided detection. Honestly, if a med student pointed out all the nonsense that CAD does, I'd tell them to go to the library and read for the rest of the rotation. I'll start to worry when every automated ECG read doesn't say "non-specific ST wave changes."

As for study hierarchy, I agree that clinicians tend to think in terms of XR < CT < MR or an esoteric nucs study. Few cases of Paget's or fibrous dysplasia need CT and MR, but trying to convince anyone of that seems to be a losing battle. Even worse, the folks prescribing about half of the average radiation exposure Americans receive today know next to nothing about radiation. Trying to explain to an ED doc that ruling out their million-to-one chance of X carries a one-in-a-thousand chance of giving the patient cancer down the road doesn't get me very far.
 
I've said it before and I'll say it again: Vast swaths of primary care are easier to automate than radiology.

Think about it. A patient comes in and fills out their list of meds on a tablet. Automated labs get drawn by a phlebotomist. Blood pressure gets entered by an assistant. The lab shows the HbA1c is too high. Diabetes meds get adjusted according to an algorithm.
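
A toy sketch of what that rules-based adjustment might look like; the drug, thresholds, and dose steps are invented for illustration and are NOT clinical guidance:

```python
# Toy sketch of the kind of rules-based primary-care automation described
# above. Thresholds and dose steps are invented, not clinical guidance.
def adjust_metformin(a1c_percent: float, current_dose_mg: int) -> int:
    """Return a new daily dose from a simple protocol-style rule table."""
    MAX_DOSE_MG = 2000
    if a1c_percent <= 7.0:
        return current_dose_mg  # at goal: no change
    if current_dose_mg >= MAX_DOSE_MG:
        return current_dose_mg  # maxed out: escalate to a human instead
    return min(current_dose_mg + 500, MAX_DOSE_MG)

print(adjust_metformin(a1c_percent=8.2, current_dose_mg=1000))  # -> 1500
```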

Radiology is just targeted for discussion like this because all the data is already in a computer. It doesn't take much work to automate simple primary care tasks.
 
But. To. Make. A. Diagnosis. Is a HUGE leap.
I agree with all the above. The FDA does too. If something makes an automated diagnosis, it is going to be either a Class II or Class III device. Super regulated for approval.

The "assistant" type stuff will come first, because it can skate by under the "computerized decision support" as class I.

Having presented my work at this conference (http://siim.org/general/custom.asp?page=2016CMIMI), I can say there's a lot of promise. @Naijaba is talking about new techniques totally different from CAD. However, there's still HUGE work to be done before this ever reaches the reading room, including my own projects. In particular, these approaches require HUUUUUUUGE training sets of high-quality images with high-quality annotations. This can be solved with time and/or money, but getting the 510(k)s or PMAs through the FDA will take even more time and money.
 

Super agree. I'm excited, though, because I think if we begin the process now, then 20 years from now we'll have that FDA approval.

Also, I didn't know about that conference! I would love to attend.
 
The deal is he shares a mean streak with his brother and for whatever reason takes particular exception to radiologists existing.
 
Emanuel just got NEJM to publish another one of his blowhard articles.

http://www.nejm.org/doi/full/10.1056/NEJMp1606181

What is the deal? Is this a sign that CMS is going to repeat 2007-era cuts?

I don't know anything about the author, but his second and third paragraphs are excellent. It's the same point I was trying to make in the other thread: expert systems like existing EKG readers and even Watson use a rules-based approach that "work[s] the way an ideal medical student would: they take general principles about medicine and apply them to new patients." It's no wonder these systems aren't perfect! Compare that to machine learning, which "approaches problems as a doctor progressing through residency might: by learning rules from data...where machine learning shines is in handling enormous numbers of predictors — sometimes, remarkably, more predictors than observations — and combining them in nonlinear and highly interactive ways."
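
To illustrate that "more predictors than observations" point, here's a toy example on synthetic, made-up data, using ridge regularization (purely illustrative):

```python
# Tiny illustration of the "more predictors than observations" regime the
# article mentions: 50 patients, 5,000 predictors. Ordinary least squares
# is hopeless here; regularization (ridge) still yields a usable model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_patients, n_predictors = 50, 5000
X = rng.normal(size=(n_patients, n_predictors))
true_effect = np.zeros(n_predictors)
true_effect[:10] = 1.0  # only 10 predictors actually matter
y = X @ true_effect + rng.normal(scale=0.1, size=n_patients)

model = Ridge(alpha=10.0).fit(X, y)  # fits despite p >> n
print(model.score(X, y))  # in-sample fit; held-out data is what matters
```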
 