Will AI take away our jobs?


boanssi · Full Member · 10+ Year Member · Joined: Jul 18, 2012 · Messages: 100 · Reaction score: 2
The question has already been discussed in similar threads by colleagues from other specialties (e.g., anaesthesiology, radiology), and the consensus is that AI won't be taking our careers anytime soon. What do you think?

This article from JAMA just came out:
Machine Learning Screen for Diabetic Retinopathy and Other Eye Diseases

 
I don't really belong here, as I'm an ML postdoc; however, my wife is in medical school and I wanted to get some insight into how her profession views my field.

In my experience, it really comes down to the training data and how well you can structure the information. The paper you linked isn't that strong: its training signal comes from the current state of the art (human graders) rather than ground truth, and the model gets the same information as (or less than) those who labeled the images. In addition, a VGG net isn't the best suited for this task.
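One way to see the point about the training signal: a model trained to imitate human graders can, at best, match the graders, never the ground truth behind them. A toy simulation of that ceiling (all numbers invented for illustration):

```python
import random

random.seed(0)
N = 10_000
truth = [random.random() < 0.5 for _ in range(N)]            # unobserved ground truth
# graders are right ~90% of the time; their labels are the training signal
grader = [t if random.random() < 0.9 else not t for t in truth]
# suppose the model learns to reproduce its training labels perfectly
model = list(grader)

agree_grader = sum(m == g for m, g in zip(model, grader)) / N
agree_truth = sum(m == t for m, t in zip(model, truth)) / N
print(agree_grader, agree_truth)
```

Even a "perfect" model here inherits the graders' error rate, which is why a training signal tied to confirmed outcomes matters more than the architecture.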

The temporal architectures we work with are hovering around 1B parameters, more than enough to capture any signal in an image. Achieving parity will come down to how well we can leverage additional information (knowledge graphs, temporal information, and unsupervised signals) in a model, as well as just getting more training data. After talking with some MDs about this, the only difficult signal I don't know how to handle is the sensory information.

I took a look at the radiology thread, and there is little discussion of *why* a parametric model cannot replace certain core responsibilities. What's your intuition on this issue?

What do you mean by "sensory information"?
 

Iterative close observation through the combination of touch, smell, and sight (not of images). Any information passed on by a person will be lossy, and we are just now learning to map 2D images to 3D space via some clever domain-adaptation training. I don't work in robotics, so the best idea I have is using reinforcement learning (RL) to learn when to ask a physician/assistant for specific additional information about the patient. There is some promise here, as we've developed an RL model that almost achieves parity with physical therapists for musculoskeletal issues.
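To make the "when to ask" idea concrete, here's a minimal tabular Q-learning sketch, with all states, costs, and accuracies invented for illustration: the agent starts with no extra observations and repeatedly chooses between asking for one more piece of information (at a small cost) and committing to a diagnosis whose expected payoff grows with what it has gathered.

```python
import random

random.seed(1)
P_CORRECT = [0.5, 0.8, 0.95]   # assumed diagnostic accuracy with 0, 1, 2 extra observations
ASK, DIAG = 0, 1
ASK_COST, PAYOFF = -1.0, 10.0
Q = [[0.0, 0.0] for _ in P_CORRECT]
alpha, eps = 0.5, 0.2

for _ in range(5000):
    s = 0
    while True:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice((ASK, DIAG))
        else:
            a = max((ASK, DIAG), key=lambda x: Q[s][x])
        if a == DIAG:
            reward = PAYOFF * P_CORRECT[s]          # expected payoff of diagnosing now
            Q[s][a] += alpha * (reward - Q[s][a])
            break
        s2 = min(s + 1, len(P_CORRECT) - 1)
        target = ASK_COST + max(Q[s2])              # asking costs time but buys information
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

policy = [max((ASK, DIAG), key=lambda x: Q[s][x]) for s in range(len(P_CORRECT))]
print(policy)  # learns to ask twice, then diagnose
```

With these made-up numbers, asking is worth the cost until accuracy saturates; a real system would of course learn over far richer state than "how many things have I asked".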
 
If they can't even make an EHR that works well and doesn't severely slow down the clinic, how are they going to make an AI system that takes over the job of the physician?

Honestly, the most time-consuming part of being a doctor is collecting all the data (from the patient history, chart, images, and labs), integrating it, and determining what is accurate and what isn't, and unfortunately doing that properly requires someone with a lot of education. Once that is done, interpreting the data takes me five seconds. Then explaining it to the patient takes a lot of time as well.

I can imagine a computer interpreting the data from a single non-verbal source (like an image), but not collecting it, integrating it from multiple sources, or interpreting the change over time. Any single source of info will just lead to a long laundry list of possible diagnoses (it could help us recall all the possibilities, but not really take over our job).

Maybe AI will take over our jobs well after we're dead. But considering most video games are 1000x more complicated than my EHR and work far better, making medical applications must not be well incentivized for software companies.

All the talk about software companies revolutionizing healthcare with AI is just a PR stunt; they haven't contributed much to healthcare so far. Their primary focus is probably trying to get an AI program that can figure out the best stocks/investments so they can make unbelievable amounts of bank.
 

There are a few fallacies I wanted to address:
(1) People making EHRs typically have only an undergrad education, and they have no connection with the machine learning community (save for maybe IBM trying to get their version going with Watson, which isn't the best).
(2) Software companies are generally not the ones building machine learning models (save for those with big research labs, like Google, Microsoft, etc.). Comparing them is like comparing nurses to physicians in terms of job description, or Outlook to Microsoft Research's AI lab.
(3) Almost all current state-of-the-art models are a series of learned weights and non-linear transformations that learn to attend to information given some reward/loss signal. That means even the most complicated video game is a completely different kind of object from the AI task.

The whole idea behind machine learning is learning a signal from noise. So you're right that preprocessing image files, formatting files, etc., is not in the realm of ML for healthcare. However, evaluating how images/labs/charts/patient history interact with each other is within the domain of ML. And the beauty of that is you can then look at the gradients or internal activations of a model and see which information plays the greatest role in a decision. That would allow a non-ML person to identify why the model arrived at its conclusion, or how much one diagnosis overlaps with another.
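As a minimal sketch of the gradient idea (hand-set weights standing in for a trained model, and hypothetical feature names): for a logistic model the input gradient works out to p(1 - p) * w_k, so you can read off which feature moves the prediction most.

```python
import math

# hand-set weights standing in for a trained model; feature names are hypothetical
weights = {"hba1c": 1.5, "fundus_score": 3.0, "patient_age": 0.2}

def predict(x):
    """Probability of disease from a logistic model."""
    z = sum(weights[k] * x[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient(x):
    """d(prediction)/d(x_k) = p * (1 - p) * w_k for a logistic model."""
    p = predict(x)
    return {k: p * (1.0 - p) * weights[k] for k in weights}

x = {"hba1c": 0.4, "fundus_score": 0.9, "patient_age": 0.1}
grads = input_gradient(x)
top_feature = max(grads, key=lambda k: abs(grads[k]))
print(top_feature)
```

For deep networks the same quantity comes from backpropagating to the input (saliency maps); the logistic case just makes the arithmetic visible.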

And you're absolutely correct; I don't expect true AI (think movies) to take over healthcare in a binary fashion at all. What I hope to see is an interaction between physician and model to improve healthcare for the patient. Something like: patient presents with... f(<features, images, time series data>) -> {further action, possible diagnoses (ranked)}, along with why it arrived at those outcomes. These interactions (physician agrees/vetoes) would then be used to further improve the model, along with patient outcomes.
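A toy version of that f(...) -> ranked-diagnoses loop, with invented diagnosis names and scores; nudging a score after a veto stands in for what would really be a retraining signal:

```python
import math

def rank(scores):
    """Sort diagnoses by softmax probability, highest first."""
    m = max(scores.values())
    exps = {d: math.exp(s - m) for d, s in scores.items()}
    z = sum(exps.values())
    probs = {d: e / z for d, e in exps.items()}
    return sorted(probs, key=probs.get, reverse=True), probs

# hypothetical model scores for one patient
scores = {"diabetic_retinopathy": 2.0, "glaucoma": 1.0, "cataract": 0.5}
ranked, probs = rank(scores)
print(ranked[0])  # top-ranked diagnosis shown to the physician

def physician_veto(scores, diagnosis, lr=1.5):
    """Push a vetoed diagnosis down; a real system would fold this into retraining."""
    scores[diagnosis] -= lr

physician_veto(scores, ranked[0])   # physician disagrees with the top call
reranked, _ = rank(scores)
print(reranked[0])  # next candidate surfaces
```

The softmax probabilities double as the "why": they show how much one candidate diagnosis dominates or overlaps with another for this patient.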

I assure you, the primary focus of the big research labs doing healthcare is to truly improve the field. I can't comment on IBM Watson, since they're marketing their service, but the majority of this work is free research without a paywall. The main issue is the lack of overlap between MDs and ML PhDs needed to incorporate this into the current healthcare system.

Quick question: why is there no push to regulate EHRs? I'd argue that this is one of the biggest slowdowns, and I know there is some work on learning a latent EHR representation for exactly this reason.
 