AI for Derm

"The researchers started with a Google-developed algorithm primed to differentiate cats from dogs."
-Yep, and you can do this at home: http://adilmoujahid.com/posts/2016/06/introduction-deep-learning-python-caffe/
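For the curious, the core trick is plain transfer learning: take a network pretrained on everyday photos (cats, dogs, etc.) and retrain just the final layer on labeled lesion images. A rough sketch of the idea in PyTorch (the linked tutorial uses Caffe; the folder layout, class labels, and hyperparameters below are purely my own assumptions for illustration):

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pretrained CNN for skin lesions.
# Assumes a hypothetical folder layout like lesions/train/{benign,malignant}/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)           # features learned on everyday photos
model.fc = nn.Linear(model.fc.in_features, 2)      # swap the head: benign vs. malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune only the new head
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point is that almost none of this is medicine-specific; the hard part is the curated, labeled image set, not the code.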

Great articles overall, worth some discussion. I believe medicine will be profoundly affected by machine learning (for the better). I'm just not sure how the course of adoption will play out. There are basically two paths:

1. Departments adopt machine learning (i.e. your radiology or dermatology department purchases the technology). I think this will be the more likely pathway, and it is already happening in pathology. Pathology is incredibly well positioned for this - they already use computers to incubate cell lines and perform lab measurements. More importantly, the leadership structure of the modern pathology lab is amenable to machine learning. Pathologists oversee pathology technologists who perform the initial read. If you've ever been in an IR or MSK biopsy, you've probably run into the path tech who examines the slides, determines whether more tissue is needed, and makes an initial call. Those initial reads can be done by machine learning.

2. Referring providers adopt the technology - This is the less likely path, but the most dangerous for radiology/dermatology departments. Take melanoma as an example. If a primary care provider can use a machine learning tool to identify melanoma, he or she can directly refer to a general surgeon for surgical removal. This avoids the standard referral to a dermatologist. A similar scenario is even more likely in radiology as providers routinely read their own images and use the radiologist's read as confirmatory.

How it plays out is ultimately up to the leadership of each institution. FDA approval usually signals an avalanche of change. Overall, the Wired article hits the nail on the head:

"The key to avoiding being replaced by computers, Topol says, is for doctors to allow themselves to be displaced instead. 'Most doctors in these fields are overtrained to do things like screen images for lung and breast cancers,' he says. 'Those tasks are ideal for delegation to artificial intelligence.' When a computer can do the job of a single radiologist, the job of the radiologist expands—perhaps to monitoring multiple AI systems and using the results to make more comprehensive treatment plans. Less time drawing on X-rays, more time talking patients through options."
 
Referring providers adopt the technology - This is the less likely path, but the most dangerous for radiology/dermatology departments. Take melanoma as an example. If a primary care provider can use a machine learning tool to identify melanoma, he or she can directly refer to a general surgeon for surgical removal.

This would probably eliminate the need for any knowledge workers at all. The patient can take a picture of his or her mole, have it analyzed, and then get approval from the insurance company to have it taken off by a specialized mid-level. All we would really need is specialized barbers who, instead of spending years learning how to cut hair, spend years learning how to cut out a tumor.

A similar scenario is even more likely in radiology as providers routinely read their own images and use the radiologist's read as confirmatory.

This is already how it works. Given that some referring docs don't trust a report from a radiologist who's been practicing for 20 years, it's unclear to me how they're going to rely on an autonomous, computer-generated report that states:

"1. Acute appendicitis (likelihood 85% +/- 5%). *

*Please note that AIWizard interpretations cannot be considered a substitute for visual interpretation of the films; the presence of unexpected artifacts may render percentages inaccurate. AIW Corp does not accept legal responsibility for treatment based on this diagnostic read."
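For what it's worth, one way a vendor might produce an "85% +/- 5%" style number is to run the network several times with dropout left on (Monte Carlo dropout) and report the mean and spread of the predicted probability. A toy sketch, where the model, class index, and report wording are all assumptions of mine:

```python
# Toy sketch: turn repeated stochastic forward passes into an "85% +/- 5%" style line.
# `model` is any classifier returning class scores; `image` is a CxHxW tensor.
import torch

def report_line(model, image, positive_class=1, n_samples=30):
    model.train()  # leave dropout active so repeated passes differ
    with torch.no_grad():
        probs = torch.stack([
            model(image.unsqueeze(0)).softmax(dim=1)[0, positive_class]
            for _ in range(n_samples)
        ])
    mean, spread = probs.mean().item(), probs.std().item()
    return (f"1. Acute appendicitis (likelihood {mean:.0%} +/- {spread:.0%}).*\n"
            "*Automated interpretation; not a substitute for visual review of the films.")
```

Which, of course, does nothing to answer the liability question.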

It would make more sense for the theoretical magical computer reader to generate a report that the radiologist can check... but, as has been stated over and over on this forum, this already exists for mammography CAD and does not save anyone any time. The moment I see a breast CAD that works at 99+% is the moment I'll start looking around for another job.
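And just to put "99+%" in perspective: at screening prevalence, even excellent specificity still produces a pile of false positives for every true cancer. A quick back-of-the-envelope calculation (the prevalence figure is an assumption, roughly in line with screening mammography):

```python
# Back-of-the-envelope: false positives per true positive at screening prevalence.
# ~5 cancers per 1,000 screens is an assumed figure for illustration only.
prevalence = 0.005
sensitivity = 0.99
specificity = 0.99

per_10k = 10_000
cancers = prevalence * per_10k
true_pos = sensitivity * cancers
false_pos = (1 - specificity) * (per_10k - cancers)

print(f"Per {per_10k:,} screens: {true_pos:.0f} true positives, {false_pos:.0f} false positives")
print(f"PPV = {true_pos / (true_pos + false_pos):.0%}")
```

Even at 99%/99%, the positive calls are still mostly false alarms, which is exactly why today's CAD ends up costing time instead of saving it.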
 
It would make more sense for the theoretical magical computer reader to generate a report that the radiologist can check... but, as has been stated over and over on this forum, this already exists for mammography CAD and does not save anyone any time. The moment I see a breast CAD that works at 99+% is the moment I'll start looking around for another job.

There has been incremental progress in mammography CAD:

http://www.nature.com/articles/srep27327
http://www.auntminnieeurope.com/index.aspx?sec=ser&sub=def&pag=dis&ItemID=613330

My 2 cents: I regard the above papers as "guaranteed to get published", "low-hanging fruit", "rush to be first", etc. When convolutional neural networks took off in 2012, mammography CAD was an obvious target. These papers show promise, but they don't wow and amaze. The former used 44,090 images, while the latter used 1,200. These numbers are minuscule when it comes to deep learning. Despite this, they still beat the best-in-class algorithms. What we need is a pooled dataset of some 1,000,000+ images. Then you'll have your working CAD.
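The pooling itself is not the hard part on the software side. A sketch of what combining per-institution exports into one training set could look like (the site names and folder layout are made up; the real obstacles are harmonized labels, de-identification, and data-use agreements):

```python
# Sketch: pool per-institution mammography exports into one training set.
# Folder names are placeholders for hypothetical site exports.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # mammograms are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

sites = ["site_a/mammo", "site_b/mammo", "site_c/mammo"]
pooled = ConcatDataset([datasets.ImageFolder(s, transform=preprocess) for s in sites])
loader = DataLoader(pooled, batch_size=64, shuffle=True, num_workers=4)

print(f"Pooled training images: {len(pooled):,}")
```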

Edit: At my MGH interview they mentioned that Google was interested in partnering with them to gain access to their mammography dataset. That would be a fantastic partnership in terms of moving AI forward in radiology, though I'd like to see radiologists develop this technology rather than the monstrosity that is Google.
 
Article just came out: "MD Anderson Benches IBM Watson In Setback For Artificial Intelligence In Medicine"

https://www.forbes.com/sites/laurel...liam-survivor-founder-connector/#1595de153d13

Many in the AI community feel that Watson has done more harm than good. It has created false expectations for machine learning in medicine. To be clear, Watson was originally a rules-based system, not a deep-learning system. IBM started adding deep learning to Watson in 2015, but its core is still rules-based. Here's another article arguing that Watson is more hype than substance: https://mentalmodels4life.net/2016/08/07/ibm-watson-fake-it-till-you-make-it/
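The distinction matters in practice: a rules-based system encodes expert logic by hand, while a learned system fits its decision boundary to labeled examples. A deliberately trivial contrast (logistic regression stands in for the learned model; the thresholds and toy data are invented for illustration):

```python
# Toy contrast between a rules-based system and a learned model.

# Rules-based: hand-written thresholds an expert curated.
def rules_based_flag(white_cell_count, temperature_c):
    return white_cell_count > 11_000 and temperature_c > 38.0

# Learned: the decision boundary is fit from labeled examples instead of authored.
from sklearn.linear_model import LogisticRegression

X = [[12_000, 38.5], [6_000, 36.8], [14_000, 39.1], [7_500, 37.0]]  # toy training data
y = [1, 0, 1, 0]
model = LogisticRegression().fit(X, y)

print(rules_based_flag(12_500, 38.7))        # rule fires
print(model.predict([[12_500, 38.7]])[0])    # learned model flags the same case
```

Rules are transparent but brittle; learned models scale with data but inherit its gaps, which is part of why bolting deep learning onto a rules core has not delivered what the marketing promised.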
 