If they can't even make an EHR that works well and doesn't severely slow down the clinic, how are they going to make an AI that takes over the job of the physician?
Honestly, the most time-consuming part of being a doctor is collecting all the data (from the patient history, chart, images, and labs), integrating it, and determining what is accurate and what isn't -- and unfortunately that requires someone with a lot of education to do properly. Once that's done, interpreting the data takes me five seconds. Then explaining it to the patient takes a lot of time as well.
I can imagine a computer interpreting data from a single non-verbal source (like an image), but not collecting it, integrating it from multiple sources, and interpreting the change over time. Any single source of info will just produce a long laundry list of possible diagnoses (it could help us recall all the possibilities, but not really take over our job).
Maybe AI will take over our jobs well after we're dead. But considering most video games are 1000x more complicated than my EHR and work far better, medical software must not be well incentivized for software companies.
All the talk about software companies revolutionizing healthcare with AI is just a PR stunt -- they haven't contributed much to healthcare so far. Their primary focus is probably trying to get an AI program that can figure out the best stocks/investments so that they can make unbelievable amounts of bank.
There are a few fallacies I wanted to address:
(1) The people making EHRs typically have only an undergrad education, and they have no connection to the machine learning community (save for maybe IBM trying to get their version going with Watson... which isn't the best).
(2) Software companies are not the ones building machine learning models (save for those with big research labs, like Google/Microsoft/etc.). Conflating the two is like comparing nurses to physicians in terms of job description, or Outlook to the Microsoft Research AI lab.
(3) Almost all current state-of-the-art models are a series of learned weights and non-linear transformations that learn to attend to information given some reward/loss signal. That means even the most complicated video game is a completely different kind of object from the AI task.
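To make point (3) concrete, here's a toy sketch of what "learned weights plus a non-linear transformation trained against a loss signal" means in practice -- a tiny logistic-regression loop on synthetic data, nothing medical or real about it:

```python
import numpy as np

# Toy model: learned weights + sigmoid non-linearity, trained by
# gradient descent on a log-loss signal. All data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # 200 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w > 0).astype(float)         # labels from a hidden rule

w = np.zeros(3)                            # learned weights start at zero
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # non-linear transformation (sigmoid)
    grad = X.T @ (p - y) / len(y)          # gradient of the loss signal
    w -= 0.5 * grad                        # nudge the weights against the loss

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
```

The point is that there's no hand-written game logic anywhere in there -- the behavior lives entirely in the weights, which is why comparing this to video game code is apples to oranges.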
The whole idea behind machine learning is learning a signal from noise. So you're right that preprocessing image files, formatting files, etc., is not in the realm of ML for healthcare. However, evaluating how images/labs/charts/patient history interact with each other is within the domain of ML. And the beauty of that is you can then look at the gradients or internal activations of a model and see what information plays the greatest role in a decision. That would let a non-ML person see why the model arrived at its conclusion, or how much one diagnosis overlaps with another.
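A minimal sketch of the gradient-inspection idea: for a linear-sigmoid model, the magnitude of the output's gradient with respect to each input feature ranks which features drove a given prediction. The weights and feature names below are invented purely for illustration:

```python
import numpy as np

# Hypothetical trained weights and feature names -- illustration only.
w = np.array([2.0, -1.0, 0.1])
feature_names = ["lab_value", "age", "noise"]

x = np.array([1.5, 0.5, 2.0])                 # one example input
p = 1.0 / (1.0 + np.exp(-(x @ w)))            # model output (probability)
saliency = np.abs(p * (1 - p) * w)            # |dp/dx_j| for each feature

# Features ranked by how strongly they influence this prediction
ranking = [feature_names[i] for i in np.argsort(-saliency)]
```

Real attribution methods (integrated gradients, attention maps, etc.) are more involved, but this is the core mechanic: the gradient tells you which inputs the decision was sensitive to.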
And you're absolutely correct; I don't expect true AI (think movies) to take over healthcare in a binary fashion at all. What I hope to see is an interaction between physician and model that improves healthcare for the patient: something like patient presents with... f(<features, images, time series data>) -> {further action, possible diagnoses (ranked)}, along with why it arrived at those outcomes. These interactions (physician agrees/vetoes) would then be used to further improve the model, along with patient outcomes.
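The f(<features>) -> ranked-diagnoses interaction could look something like the sketch below. The diagnosis names, weights, and feature vector are all made up for illustration; a real system would be far richer:

```python
import numpy as np

# Hypothetical diagnoses and per-diagnosis weights -- invented for illustration.
diagnoses = ["pneumonia", "CHF", "COPD"]
W = np.array([[1.2, -0.3, 0.8],   # one row of weights per diagnosis
              [0.1,  1.5, -0.2],
              [0.4,  0.2,  1.1]])

def rank_diagnoses(features):
    """Map a feature vector to diagnoses ranked by model probability."""
    scores = W @ features
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over diagnoses
    order = np.argsort(-probs)
    return [(diagnoses[i], float(probs[i])) for i in order]

ranked = rank_diagnoses(np.array([1.0, 0.2, 0.5]))
# The physician reviews the ranked list and agrees or vetoes; that
# feedback (plus patient outcomes) becomes new training signal.
```

The key design point is the loop: the model proposes, the physician disposes, and every agree/veto is logged as a label for the next round of training.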
I assure you, the primary focus of the big research labs doing healthcare work is to genuinely improve the field. I can't comment on IBM Watson since they're marketing a service, but the majority of this work is free research without a paywall. The main issue is the lack of overlap between MDs and ML PhDs needed to incorporate it into the current healthcare system.
Quick question: why is there no push to regulate EHRs? I'd argue this is one of the biggest slowdowns, and I know there is some work on learning a latent EHR representation for exactly this reason.
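For anyone curious what "learning a latent EHR representation" means, here's a toy sketch: compress noisy, high-dimensional records into a few latent factors with a linear autoencoder, solved in closed form via SVD (i.e., PCA). The data is synthetic; real EHR fields would need heavy preprocessing first:

```python
import numpy as np

# Synthetic "records": 20 raw fields generated from 2 hidden factors + noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))            # 2 true underlying factors
mixing = rng.normal(size=(2, 20))
records = latent @ mixing + 0.1 * rng.normal(size=(300, 20))

# Linear autoencoder in closed form: top-2 principal components.
centered = records - records.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ Vt[:2].T                   # 2-d latent code per record
recon = codes @ Vt[:2]                        # decode back to 20 fields

err = np.mean((centered - recon) ** 2)        # small: 2 factors explain most variance
```

Modern work uses deep autoencoders or sequence models rather than PCA, but the goal is the same: a compact, shared representation that downstream models can consume regardless of which vendor's EHR produced the raw fields.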