Deep Learning in Radiology


Naijaba
Hi all, I'm pretty interested in the incorporation of deep learning into radiology. I've set up a blog to track the emergence of deep learning companies, research articles, and expert discussions, all with a focus on radiology. My first post is a survey of the landscape of companies:

https://deeplearningradiology.wordp...ng-list-of-deep-learning-radiology-companies/

I'd be interested to hear about any other companies you're aware of that are trying to bring deep learning into radiology practice. It's an exciting time, and I think as future (and current) radiologists, we owe it to ourselves to participate in this movement.
 
"They claim to be able to detect fractures as small as 0.01% in an X-ray image."

Huh
 
"They claim to be able to detect fractures as small as 0.01% in an X-ray image."

Huh

Yeah, I was wondering about that. Their website says, "Enlitic's deep learning technology can detect tiny fractures as small as 0.01% of an X-ray image." I guess they mean relative to the pixel dimensions of the image.
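
As a rough sanity check of what that would mean (a back-of-the-envelope sketch; the 3000 x 3000 pixel image size is my assumption, not anything Enlitic publishes):

# Back-of-the-envelope check of the "0.01% of an X-ray image" claim.
# Assumes a 3000 x 3000 pixel digital radiograph; actual detector
# resolutions vary and Enlitic does not publish theirs.

width, height = 3000, 3000            # hypothetical image dimensions
total_pixels = width * height          # 9,000,000 pixels
fraction = 0.0001                      # 0.01% expressed as a fraction

region_pixels = total_pixels * fraction    # 900 pixels
side = region_pixels ** 0.5                # ~30 pixels per side

print(f"0.01% of a {width}x{height} image is {region_pixels:.0f} pixels")
print(f"i.e. roughly a {side:.0f} x {side:.0f} pixel region")

So under that assumption, the claim works out to a roughly 30 x 30 pixel region, which is at least a concrete, if unusual, way to state it.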
 
CAD has been doing this for 30 years. It never gained traction because of so many false positives. So has anyone actually accomplished anything substantial in deep learning for radiology?

Or are we just seeing outlandish, out-of-context claims like the one above to reel in investors?
 
I've been working on a completely optimized deep learning radiology workstation, but given the limitations of image processing algorithms, I think the future really is going to lie in a combination of deep learning and neural integration.

[Image: still from A Clockwork Orange]


The workstation of the future. JACR is interested.
 
Google just came out with another breakthrough: https://arxiv.org/pdf/1701.06538v1.pdf

Results: a more than 1000-fold increase in the capacity of a neural network. They demonstrate state-of-the-art performance on machine translation (e.g., English to French).

What it means: One reasonable counterargument to deep learning in radiology has been, "An algorithm can identify lung lesions, but that's it; what if something else is going on in the image?" The new model from Google introduces sparsely gated "mixture-of-experts" sub-networks within a large neural network: a trainable gating network activates only a few sub-networks for each input. We can imagine a scenario where a patient with a pleural effusion and pneumonia activates two independent pathways within the neural network. Really amazing stuff. It's getting close to the human mind.
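
For the curious, here's a toy sketch of the idea in Python/NumPy. This is my own simplification, not the paper's implementation; the layer sizes and the plain top-k softmax gate are illustrative assumptions.

import numpy as np

# Toy sketch of a sparsely-gated mixture-of-experts layer, after
# Shazeer et al. (arXiv:1701.06538). Sizes and the plain top-k gate
# are illustrative simplifications, not the paper's exact design.

rng = np.random.default_rng(0)

n_experts, d_in, d_out, k = 8, 16, 16, 2   # assumed toy dimensions

# Each "expert" is a small feed-forward transform; here, one weight matrix.
experts = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_in, n_experts)) * 0.1  # gating network weights

def moe_forward(x):
    """Route input x through only the top-k experts chosen by the gate."""
    logits = x @ gate_w                   # gating score for each expert
    top = np.argsort(logits)[-k:]         # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    # Only the selected sub-networks are evaluated; the rest stay inactive.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_in))
print(y.shape)  # (16,)

The key point is the conditional computation: different inputs light up different experts, which is what the "independent pathways" intuition above refers to.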
 
"Really amazing stuff. It's getting close to the human mind."

Getting close? Okie doke
 
"Really amazing stuff. It's getting close to the human mind."

I think I'll still put my bet in the pile that I'll be fine to finish residency and fellowship and start practicing.
 
"Really amazing stuff. It's getting close to the human mind."


You sound like someone predicting the imminent cure of all diseases when viral gene vectors were introduced in the 70s. How has that worked out so far? These things are not as simple as they sound in practice.

You're saying it's getting close to the human mind? It hasn't even accomplished the basic function you describe, detecting lung lesions. It's literally "find the white spot on the black background," and decades of research into this have failed miserably. I could train a second grader to do it in 2 minutes.
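
To be fair, the naive version of that task really is only a few lines; the hard part is that on a real radiograph everything bright (ribs, vessels, hardware, skin folds) gets "detected" too, which is exactly where CAD's false positives came from. A toy illustration (hypothetical code; a random array stands in for an image):

import numpy as np

# Toy "find the white spot on the black background" detector:
# threshold the image and report every bright pixel. On anything
# but a clean test image this drowns in false positives.

def bright_spots(img, thresh=0.8):
    """Return (row, col) coordinates of pixels brighter than thresh."""
    return np.argwhere(img > thresh)

rng = np.random.default_rng(1)
img = rng.random((64, 64))          # stand-in for a normalized radiograph
spots = bright_spots(img)
print(f"{len(spots)} 'detections' in pure noise")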
 
The only thing that has me more concerned than, say, when CAD was introduced is that Silicon Valley really seems to be getting behind it this time. Can anyone comment on whether that was the same for previous "deep learning" initiatives such as CAD?
 