AI Docs within a decade

I'm all for "new approaches" in artificial intelligence and all for people trying; I just think it has limits, and that it won't 1) replace doctors or 2) take over the human race. And if you had proof either of those two things were going to happen, you'd have replied with it.

Until AI can replace the least skilled jobs, I have very little confidence it can replace the highest skilled. AI hasn't even successfully replaced the dog pooper scooper, let alone my job that took 4 years of undergrad, 4 years of medical school, 3 years of residency, a year of fellowship, and years of clinical and human skill refinement. So, when AI can figure out how to replace the job of cleaning up dog poo in my backyard (my other job), not only will you have convinced me and won the argument handily, but I'll be the first one to pay top dollar to buy that technology from you. Until then, I'm not going to worry in the least about being rendered jobless by an AI computer program.

Good news! The dog poo problem has already been solved, so rest easy. Unfortunately, there really isn't a monetary incentive to roll it out in a commercial form as the mechanical portion of it will be pretty pricey.

If I can convince you of one thing, it will be to stop calling this stuff artificial intelligence. You're absolutely right: AI is so damn difficult that we need a whole new paradigm shift to get there. However, we are getting very good at learning specific tasks. The functions that do this are stacks of nonlinear transformations, which works well when there is a consistent signal. For example, we've already reached parity with human translators by using these nonlinear functions, and the same in Go and other complex competitive environments that rely on heuristic reasoning. These are human-created problems, and yet we've set up a framework that reaches a better minimum on the problem than a human can.

I'm not saying these functions will replace you; I'm saying that those 7 years of education learning a signal can be done via gradient optimization. Now, these models need a lot more data than humans do, but given enough of it they do better on a specific task such as diagnosing a patient. This means that where you're unfamiliar with something, the function will have identified underlying issues not readily apparent through your very human bias.
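
To make "learning a signal via gradient optimization" concrete, here's a minimal toy sketch (synthetic data and plain NumPy, nothing clinical about it) of a logistic model recovering a hidden signal by gradient descent:

```python
# Toy sketch: "learning a signal" as gradient optimization.
# Synthetic binary outcome from two made-up features; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                   # synthetic "lab values"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # hidden signal to learn

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid prediction
    w -= lr * (X.T @ (p - y)) / len(y)           # gradient of the log loss
    b -= lr * np.mean(p - y)

print("learned weights:", w, "bias:", b)         # recovers the signal direction
```

Scale the parameters and data up by many orders of magnitude and the mechanism is the same.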

Lastly, those 'AI' programs you've seen are not AI, and are most likely just Fourier transforms with logistic regression.
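
That combination is easy to sketch. Below is a toy, assumed example (made-up 1-D signals, standard NumPy and scikit-learn calls) of classifying signals from their Fourier magnitudes with logistic regression:

```python
# Sketch of the "Fourier transform + logistic regression" pattern:
# classify 1-D signals by their frequency content. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, length = 400, 256
t = np.arange(length)
labels = rng.integers(0, 2, n)
# Class-1 signals carry a sinusoidal component; class-0 is pure noise.
signals = rng.normal(size=(n, length)) + \
    labels[:, None] * np.sin(2 * np.pi * 10 * t / length)

features = np.abs(np.fft.rfft(signals, axis=1))   # magnitude spectrum
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```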

If it helps, I don't believe any of these models will take over the human race. And I don't give a crap about the real life interactions (robots, etc) as that's more an engineering problem than it is a machine learning one.

It's interesting that you mention that machine learning requires a lot more data points than human learning. I think that's going to be a huge problem in the medical space, as clean medical data is extremely scarce. It may not seem that way given the huge volume of visit data, but health data is extremely messy, poorly accessible, highly regulated, and compartmentalized (between and within health systems).
 
Imagine the panic when computers first started to be used!

I remember stories when the da Vinci surgical device came out saying, "Surgeons will soon be replaced with operating robots."
And when CTs and MRIs became more prevalent: "Who needs a doctor to examine you and listen to your heart and lungs, when the machine just looks inside you and tells you what's wrong!?"

But we all know it's not that simple. That doesn't stop the click-bait pop-culture articles from creating a lot of hype and conversation, though. It's entertaining to talk about, I suppose, but not worth much (if any) worry in my head.
 
I love gloom and doom talk. It's way easier to engage with when FI. I don't think we are remotely close. We need a dog pooper-scooper, a maid for my house, and frankly something that can reliably fold my laundry. After that, I will worry about a machine communicating with people and their families in the ED.

Unless there are major advances in medical technology (like a scanner that can diagnose and treat a patient), we have nothing to worry about.

Imagine the panic when computers first started to be used!
I know. The doom and gloom stuff gets put on ignore and makes me chuckle, more and more, as the years go by. There's just something inherent in human nature, I think, where people want to believe apocalyptic predictions, for some reason.

Remember "Y 2 K" when all the world's computers were going to crash at 12:00 midnight Jan 1, 2000, and send the world back into the stone ages?
When nuclear holocaust was coming "any time" during the Reagan era?
Google "Asteroid hitting Earth" and there's plenty of apocalyptic articles about that, too.
Remember when they predicted the planet would be destroyed irreversibly "in 10 years"? Yep. That was 12 years ago.
And before that "a coming ice age" and "overpopulation"?
Remember when switching to ICD10 was going to be the end of the world, too?

Yeah. It wasn't a big deal. It's usually not.
 
If I can convince you of one thing, it will be to stop calling this stuff artificial intelligence. [...]
Either way, it's interesting stuff. I'm glad people are working on it.
 
It's interesting that you mention that machine learning requires a lot more data points than human learning. [...]

Spot on! We can get around the noise issue, but the number of data points is the critical issue for these current approaches to be successful. I'm not sure how familiar you are with embeddings, but there have been approaches to create a universal embedding from different EMRs, which have shown success in that they can encode and decode information accurately. Unfortunately, only certain labs have access to this data, meaning that research in this domain is substantially behind other areas.
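
For anyone unfamiliar with embeddings, here's a bare-bones autoencoder sketch of the encode/decode idea (random stand-in vectors instead of real EMR features, PyTorch assumed; actual universal-embedding work is far more involved):

```python
# Minimal autoencoder sketch: compress record vectors into a small
# latent "embedding" and decode them back. Toy stand-in data only.
import torch
import torch.nn as nn

records = torch.randn(512, 64)            # stand-in for featurized EMR rows

encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 8))
decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 64))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):
    z = encoder(records)                  # 8-dim embedding of each record
    recon = decoder(z)                    # decode back to the record space
    loss = nn.functional.mse_loss(recon, records)
    opt.zero_grad(); loss.backward(); opt.step()

print("reconstruction MSE:", loss.item())
```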

There is another side, Generative Adversarial Networks (GANs), which can generate synthetic patient information that's indistinguishable from the real thing (according to cardiologists). However, this obfuscates the original HIPAA-protected data: while it looks normal to a human, models trained on this synthetic information suffer a statistically significant decay in performance due to a slight distribution shift. And correcting that shift can allow a person to rediscover patient information contained in the information space of the GAN's parameters. Differential privacy really isn't my field; I just know that everything involving it has a trade-off.
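
For illustration only, here's a toy GAN on a made-up 1-D "measurement" distribution (hypothetical architecture; real medical GANs and their privacy analyses are much more complex):

```python
# Toy GAN sketch: generator learns to mimic a synthetic "vital sign"
# distribution (a shifted Gaussian). Illustrates the mechanism only.
import torch
import torch.nn as nn

torch.manual_seed(0)
real = torch.randn(1024, 1) * 0.5 + 2.0   # stand-in patient measurements
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(500):
    fake = G(torch.randn(1024, 4))
    # Discriminator: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(1024, 1)) + \
             bce(D(fake.detach()), torch.zeros(1024, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(1024, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("synthetic mean/std:", fake.mean().item(), fake.std().item())
```

Note that nothing in the samples themselves flags the privacy leak; as described above, that lives in the trained parameters.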
 
What containment problem?
Is this a real problem that exists, or an imagined problem?

The Containment Problem was probably best cinematically captured in the film Ex Machina. I recommend it.

While it is currently an imagined problem, it is one that must be solved before it exists if our solution is to have any possibility of succeeding.

(Roughly) The Containment Problem asks: how do we ensure that, if we create an artificial intelligence which exceeds our own intellectual abilities, this same AI doesn't destroy us?
 
... if we create an artificial intelligence which exceeds our own intellectual abilities, this same AI doesn't destroy us?
This is not possible. If we create an artificial intelligence "smarter than us," then we're actually smarter than we thought, and therefore smarter than the AI we just created. Otherwise we couldn't have been smart enough to create something that smart. Another way to look at it: a human race too stupid to create any technology, artificial intelligence included, without an off button can't possibly be intelligent enough to create something with intelligence that exceeds its own.
 
This is not possible. If we create an artificial intelligence "smarter than us," then we're actually smarter than we thought, and therefore smarter than the AI we just created. [...]

Therefore you must conclude we were created by an intelligence greater than ours, no?
 
Therefore you must conclude we were created by an intelligence greater than ours, no?
Yes.

And think of how absurd it would be to say the opposite: that we created God, an intelligence greater than ours. That's essentially what the AI-apocalypse crowd is saying we're going to do accidentally with AI, although it would be an "evil greater intelligence" that destroys us. It's patently absurd.

I find the possibility that a superior intelligence could create an inferior one very believable. God (superior) creating humans (inferior) fits this model.
I don't find the opposite concept, humans creating any being or technology superior to themselves (God, all-powerful AI, or otherwise), very believable.
 
And think of how absurd it would be to say the opposite: that we created God, an intelligence greater than ours. [...]

Cool. You go your way & I'll go mine.

I find Natural Selection to be the most compelling explanation for the origin of intelligence, and I think that possible unintended consequences of current computer science research should be taken seriously.
 
I find Natural Selection to be the most compelling explanation for the origin of intelligence
I also think natural selection happens, and explains a lot. But I don’t think it explains everything.

I think that possible unintended consequences of current computer science research should be taken seriously.
I agree completely, and I think this statement makes a lot of sense. But this is a much more sober presentation of the issue than the usual ones, which come off more like a low-budget science-fiction apocalypse movie.
 
"Artificial intelligence in medicine: not ready for prime time" Artificial intelligence in medicine: not ready for prime time


"An IBM Watson Health executive said doctors liked the program. However, the article quoted a physician user from Jupiter Hospital in Florida as saying, 'This product is a piece of s**t,' adding that it was unusable for most cases."
 
Thanks for raising these concerns! Hopefully I can address some of them:

- First, AI doesn't exist. "AI" is the kind of term used to describe common-sense intelligence, which involves highly effective transfer learning. What we're talking about is ML (machine learning).

- The report you referenced is in terms of complete automation. The reason it's 0.4% is the physical interaction with the patient. That's limited primarily by robotics rather than ML, as the number of sensors available is so limited (consider the number of sensors in the human body; there is no way for a computer to handle that without burning a hole into itself). Furthermore, I fully agree that ML's role in medicine is not to conduct physical interactions; it will be most effective at offloading the diagnostic process.

- Watson was always viewed as a complete joke within the research community. They did some clever stuff for Jeopardy!, but that's it. There is a reason IBM is losing all of its researchers, and I would not use its state as an evaluation of the progress of ML within the medical community (or really anywhere).

- The structural advantages of ML approaches over humans are the lack of bias and the absence of mental decline due to sleep, stress, etc. These models will be able to stay up to date on current studies and will have "seen" hundreds of thousands of patients during training, along with their outcomes. This means that within the realm of {lab reports, all electronic signals (images, graphs, values), history, description}, the model will most likely be more effective than a physician. However, UI will probably be an issue for a long time, meaning physical examinations will be hard to encode.

- I completely agree, physicians will be critical for physical interactions with the patient. I don't see this changing for a long time unless the current paradigm changes in our field. This also goes for emotional support.

- Your $200,000 software is most likely not in this area of work, so I wouldn't compare it. I'd argue it's similar to comparing NP vs. MD/DO.

- It's extremely hard to get this stuff into practice, as the FDA doesn't like a lot of the state-of-the-art (SoA) approaches.

The $200,000 piece of software was for feature detection in a digital image. It's hard to get more specific in this area of work without literally naming the existing solutions (all of which are mediocre).

You also make a huuuuuuge assumption regarding patient charts. Have you any experience in medicine? I am legitimately confused. Most of the time in the emergency department, you know what's in the chart? Nothing. Hospital systems are garbage at communicating. We can't even get the equivalent of a .doc format to work, and people are worried about machine learning, lol.

The reality is physicians may be replaced by machines one day, after 90+% of other jobs have been replaced. Fine by me. Only sucks if you’re the first to go.

Finally, while I hope that robots replace everything, I see more stagnation and slowing compared to the rapid progress of years past. We are running into the limits of physics, it seems, whether we like it or not: TSMC Details 5 nm Process Tech: Aggressive Scaling, But Thin Power and Performance Gains

It used to be that every 2 years we got a node shrink with perfect uniform scaling, lower production costs, and massive power and performance gains. Today the shrinks no longer really mean what they claim to be, and the power and performance gains are nothing like they used to be.

It would not surprise me at all if we got a quasi-futuristic state that never lives up to the futuristic dreams of nerds on the internet. Electric cars? Yes. Flying cars? Well... not really panning out.
 
The $200,000 piece of software was for feature detection in a digital image. [...]

Could you link the 200k software? "Feature detection" is as abstract as saying that something helps a patient. I have a strong suspicion that it's a company masquerading its product as something fancier.

I'm surprised, as the hospital we work with has charts for patients. Also, we don't need a consistent format; there's a lot of work in format-agnostic representations, for exactly the reason you describe. That allows multiple formats (even entirely handwritten notes) to be mapped onto the same manifold.
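
To sketch that "same manifold" idea (toy paired vectors standing in for two record formats; the CLIP-style contrastive loss is my assumption, not necessarily what that literature uses):

```python
# Toy sketch: two encoders map two different record formats into one
# shared embedding space; a contrastive loss pulls paired records
# together and pushes mismatched ones apart. Synthetic data only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
records_a = torch.randn(256, 32)             # format A (e.g., structured fields)
records_b = records_a @ torch.randn(32, 48)  # format B view of the same records

enc_a, enc_b = nn.Linear(32, 8), nn.Linear(48, 8)
opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters()], lr=1e-2)

for _ in range(300):
    za = F.normalize(enc_a(records_a), dim=1)
    zb = F.normalize(enc_b(records_b), dim=1)
    logits = za @ zb.T / 0.1                 # pairwise similarities
    targets = torch.arange(len(za))          # i-th A matches i-th B
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()

print("alignment loss:", loss.item())
```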

Lastly, I agree: robots are a long way off, for exactly the reasons you listed. However, models are not currently bound by that bottleneck, and there is no reason to think faster chips will solve our current research problems.
 