This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.
Good points but how about if we assume that AI is much much faster than BI at the interpretation aspect?
The rate limiting factor is still how fast the final signing radiologist can work.

 
Will this "AI scare" lead to radiologists breaking more into the clinical realm?
It didn’t lead them to become more clinical during the job market glut of ‘08-‘16. No reason to think a very unlikely threat of a glut would make them do so now
 
Visual skills have evolved in animals for much longer than logic or language skills. That is why a human being can learn to drive in a few days, compared to the years it takes to become a doctor or lawyer. But for machines, visual data is orders of magnitude heavier (compare a 3-hour 4K movie with 3 hours' worth of book text).

In the future, the contribution a radiologist can bring to the job will be a lot less than it is now, and it will shrink for specialists like oncologists, neurologists, and general physicians too. For a human brain, training to be a physician takes longer and requires harder work than training to be a gardener or driver; that is why physicians are paid a lot of money now. Now, or in the near future, it will be more cost-effective to train a machine to be a physician or an attorney. This paradox between the value of human learning and machine learning is hard to digest.

AI will keep getting better. We just need to rethink this transition and accept that human doctors will be less useful in the future than they are now. Fighting this and trying to stay important is a disservice to patients. Think of it as an individual reaching retirement: you pass the baton to the next generation and take an advisory role with a smaller contribution. Similarly, the whole profession has to think about transitioning from its current state... Of course everyone deserves a good living, and the onus is on society to change its current economic doctrines to facilitate this transition.
 
Visual skills have evolved in animals much longer than logic or language skills. […]
This remains sci-fi for the time being.
 
Visual skills have evolved in animals much longer than logic or language skills. […]
It's clear you have a profound misunderstanding of how you actually learn and practice medicine/radiology, as do most AI doomers. And I will be the first to give in when AI can compare lung nodules. For the love of God, do me this one favor, ChatGPT. That's all I want for Christmas.
 
It's clear you have a profound misunderstanding of how you actually learn and practice medicine/radiology as do most AI doomers. […]
It is not about AI doomers. View AI as just a tool to make our lives better.

Please note that AI is improving with technology scaling and better algorithms.

This is already an improvement over the AI of 15 years ago -> Diagnostic efficiency of artificial intelligence for pulmonary nodules based on CT scans

And please also watch the recent TED talk by Eric Topol ->

It is just about patients benefiting; any work is a service to other people. If AI could compare lung nodules better, it would be a Christmas gift for those who are affected. In fact, CNNs alone could be tuned for this; there is no need to rely on ChatGPT for imaging. Work is underway where a general-purpose AI like GPT-4 could use narrow, specialized AI tools or other software tools.

Humility is all we need to make US health care better. If hospitals and government regulation only allow human beings the final say on results, and doctors are still paid the same, that's OK. In the end, all that matters is patients getting better treatment at lower cost.
 
It is not about AI doomers. View AI as just a tool to help our lives better. […]

Your comments read like you have very little familiarity with healthcare in general, let alone radiology.
 
It is not about AI doomers. View AI as just a tool to help our lives better. […]

First, stop saying "we"; it's clear you're not a radiologist. Second, from the article: "AI had a significantly higher misdiagnosis rate and a markedly lower true negative rate". Clearly you just googled "AI lung nodules" and hyperlinked the first article. And the article had nothing to do with COMPARING lung nodules against previous studies, nor to my knowledge has there been any significant evidence of actually being able to compare studies using AI, which, by the way, is a HUGE hurdle for useful implementation that people seem to forget about. Finding lung nodules is not the hard part. Comparing them is. Third, TED talks are just BS pop science for normies to feel like they're learning something when they're really just wasting time on their phones. Using one as evidence for anything reinforces the fact that you're not a radiologist, probably not even a doctor. Fourth, this isn't about humility; this is about an overhyped toy with no real utility that's being pushed by tech doomers who know nothing about radiology but want to act like AI can solve every problem because they can't get venture capitalists to invest in their 15th Uber knockoff anymore.

AI is for cheating on college essays and eliminating the need for customer service reps by making it so difficult to get someone to remove the double charge from my credit card bill that I just give up and pay it anyway.
 
over hyped toy with no real utility that's being pushed by tech doomers

AI is for cheating on college essays and eliminating the need for customer service reps by making it so difficult to get someone to remove the double charge from my credit card bill that I just give up and pay it anyways

The above two phrases indicate that it will be hard for you to be open to finding optimal solutions. I do agree that I went overboard in saying that future human doctors will be less useful than they are today, without knowing what kinds of problems the future could unfold. I apologize for that. There are thousands of doctors, medical students, and premeds who really care about and focus on the wellbeing of patients and the health of society; I apologize to all of you.

I am just frustrated by the per-capita health spending of the US compared to most nations in the world. It is not just the cost of doctors; it is probably the whole health care system in the US.

GPT-4, Med-PaLM 2, and AlphaFold are no hype. I agree problems in AI still need to be solved; when they are solved, it will only get better than it is today. Computing power in PCs has increased something like 10,000 times over 20 years, and Internet throughput by an even higher order of magnitude. Commercial LLMs are less than 2 years old. Convolutional neural networks and GANs have only started to show promise in the last 5 years. We are in an AI bubble, like the dotcom bubble of 2000. And what happened in the 10 years after the dotcom bubble burst in 2000-2001? Billions of subscribers to social media, millions using online shopping and streaming video. If I had to go back and live in the 1990s as a kid without all of these, it would be hard for me to imagine.

I think there is no point arguing about all these changes. We'll live through this and change ourselves however is needed to stay relevant. The people who see this early might just be in a better position, that's all.
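The "10,000 times over 20 years" figure above can be sanity-checked against a Moore's-law-style doubling cadence; the 18-month doubling period here is an assumed round number, not something stated in the post:

```python
# Sanity check of the "10,000x over 20 years" computing-power claim
# against an assumed doubling every 18 months (Moore's-law-style cadence).
years = 20
doubling_period = 1.5  # years per doubling (assumed)
factor = 2 ** (years / doubling_period)
print(round(factor))  # 10321, consistent with "like 10,000 times"
```

So the claim is at least internally consistent with the usual rule of thumb, whatever one thinks of extrapolating it forward.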
 
The above two phrases indicates that it will be hard for you to be open in finding optimal solutions. […]
You have a fundamental lack of understanding of healthcare economics in America. We spend a bunch of money because a bunch of parasitic losers in administration cost the system greatly. Physicians cost the system something like 10%.

Healthcare is a straight-up undercover jobs program in America. You are cheering for AI finding some application in rads when it should be making a bunch of losers unemployed, if you really want to change things like you supposedly do.

You are getting this reaction because you have no idea what you are talking about and are coming into a forum for radiologists to tell them they are too fat and happy. It's shockingly tone-deaf.
 
Why waste time arguing with an AI chatbot troll?

 
Physicians cost the system something like 10%.

Yes, I think I saw 8% for 2023. Physician compensation is not the driver of cost, and if you believe otherwise you certainly have no idea of the macroeconomics at play. Completely agree that a lot of healthcare is a jobs program, and in many locations healthcare is the #1 employer. Going to be a tough cycle to break, and it likely won't until there is a catastrophic collapse because, you know... stuff :)
 
massive day for AI for those following. sora and gemini 1.5. wow. pretty insane. if this rate of progress keeps up, a lot of white collar work (and absolutely radiology included) could be threatened very, very soon. i stand by my 5 year prediction. buckle up.
 
massive day for AI for those following. sora and gemini 1.5. […]

Ok 👍
 
massive day for AI for those following. sora and gemini 1.5. […]
Lol okay...nothing will be different in 5 years. But sure, just throw out random numbers
 
I predict that in 5 years, he will still be saying job loss will happen within 5 years.

Twenty years ago this guy would be one of the homeless LA panhandlers with a marker and cardboard sign saying “the end is near.”
 
i'm in the same boat as you all. i hope/ i pray i am wildly wrong. but i can also appreciate how mind boggling this technology is... look where video generation was 2 years ago. if this same rate of progress applies to other areas, not a lot of jobs are safe.

it's natural to be defensive, especially when your livelihood is at stake.
 
i'm in the same boat as you all. i hope/i pray i am wildly wrong. […]

It’s been a common tactic among AI pushers to demonstrate a new gimmick, such as the ability to generate videos, without overcoming the limitations of the old gimmicks. The architectural principles behind these CNNs are not new; they’re just being applied to new arenas, which makes everyone go “Oooo” and garners a lot of press. But the things that made them suck three years ago still make them suck today.

Edit: I’m predicting it now. In the next 5 years we’re going to have music-making AI that creates some bomb jams at first listen, but on closer analysis they’ll be slightly off and not quite where we want them to be. Good enough for TikToks, not good enough to frankly replace musicians. This AI craze is going to be a very expensive demonstration of “real close just isn’t good enough.”
 
i'm in the same boat as you all. i hope/i pray i am wildly wrong. […]
This problem will not be unique to physicians. It will impact large numbers of software programmers, attorneys, and accountants as well. Society has to change by adapting to new socio-economic doctrines. The only thing that worries me is that we have two categories of people: (1) those who exaggerate, saying AI will kill us, and (2) those who underestimate AI as a phony trick and hold the delusion that human intellect has some special place in the universe. We need more people who are altruistic and who think through how, if AI keeps scaling up at the current rate, we are going to transition our society, and who derive answers from the basic questions of the purpose of life, happiness, and suffering.
 
This problem will not be unique to Physicians. […]
You don’t believe that human intellect is, as far as we know, unique in the universe?
 
You don’t believe that human intellect is, as far as we know, unique in the universe?
Yes, as far as we know. But please note that "as far as we know" has an inherent limitation, doesn't it? Also, a couple of reasons why I believe human intellect could be surpassed. (1) Even though we have amazing intellectual outcomes from people like Einstein or Spinoza, they were still limited to their fields. With so much information, in modern times we can only train a human brain in narrow specialties, such as telecom engineering or oncology. (2) Before Homo whatever-came-first, the intellect of Australopithecus, or before that of chimpanzees, was probably unique on planet Earth. Even with 100 times less connectivity than the human brain, GPT-4 can tackle a wide range of problems, from the USMLE to AP Calculus. In its current form GPT-4 is far less productive than many human beings, but both biological evolution and technological evolution give every indication that the glorification of human intelligence will be transient. What is really important is our joy, our suffering, and our curiosity to find meaning in life, and there are still plenty of mysteries to unravel in biology and the physical sciences.
 
Yes, as far as we know. […]
So you agree that human intellect has a special place in the universe?

You're living in a science fiction world, friend. Is it possible? Sure. Is it around the corner? Hell no.
 
Is it possible? Sure. Is it around the corner?
Technology growth is exponential. The first electronic calculator (with vacuum tubes), weighing around 33 pounds, came in 1963. Deep Blue beat the world chess champion in 1997, AlphaGo beat the world Go champion in 2016, and GPT-4 was released in 2023. If you look at the gaps between these significant developments, they are 34, 19, and 7 years... It is possible the next significant development will happen in 3 to 4 years. Also, the ChatGPT hype has prompted investors to jump in, so there is a lot of funding going on, like in the dotcom bubble. There will be a bubble burst, but a bubble also drives innovation faster. It's hard to predict, but a trajectory of results promising enough that several large corporations are pouring in billions is something. It's not a bad thing to be prepared and have some contingency plan in place. I think following the results of Med-PaLM 2 and GPT-4, watching the talk by Dr. Eric Topol, and reading the book by Dr. Isaac Kohane could help. Instead of burying our heads in the sand, it is better to observe the developments and see how we can adapt.
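The shrinking intervals between the milestones cited above are simple to check (the years are the post's own, taken at face value; note the first gap is 34 years, not 32 as originally written):

```python
# Intervals between the AI milestones cited in the post above.
milestones = {
    "electronic calculator": 1963,
    "Deep Blue beats chess champion": 1997,
    "AlphaGo beats Go champion": 2016,
    "GPT-4 release": 2023,
}
years = list(milestones.values())
gaps = [b - a for a, b in zip(years, years[1:])]
print(gaps)  # [34, 19, 7]
```

Whether three cherry-picked intervals define a trend is, of course, the point under dispute in this thread.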
 
Technology growth is exponential. […]
I agree that we should look forward and try to anticipate/drive how this changes our field.

As for the rest, I’ll believe it when I see it.
 
Technology growth is exponential

Until it isn’t. A predictive model for growth is only trusted as far as we expect the premise underlying the model to apply… and we don’t have a premise for a model of exponential growth in this circumstance.

We actually have a counter-premise. Semiconductor chips are hitting a wall in computational power per unit volume because our transistors and circuits are only atoms across. They can’t get smaller. There’s an upper limit to the computational power density we can reach with classical machines, and we’re approaching that limit. Which means our CNNs only become more powerful by dumping more and more expensive resources into them. Moreover, quantum computing, if it ever arrives, solves the CNN problem less efficiently than classical machines do.

Finally: AI still can’t do math. AI still can’t get the fingers right. Doesn’t matter how we train it. Doesn’t matter how we design the network. And we can’t make the networks smaller, faster, or better anymore. We’re just finding new gimmicks to apply them to.

It’s not just AI doomsaying. It’s incorrect AI doomsaying.
 
Until it isn’t. […]
I also don't believe quantum computing will have any practical use in AI for many years to come. You are right about semiconductor technology node size, but currently Nvidia's H100 is on a 4-nanometer process, some pilot production is already at 2 nm, and 1 nm is in R&D. Also, with advanced packaging they can stack one die over another. Unlike the human brain, which has to be housed inside a small enclosure, these computing resources can reside in large datacenter rooms, and we can increase the number of connected nodes through daisy-chaining. [Please confirm all of this with your ECE professor as well.] So with all these additional improvements there is room to scale up the hardware and connectivity. Yes, GPT-4 is limited in its math and reasoning, and it is in its infancy. Recently Google published AlphaGeometry, which solved International Olympiad-level math proofs. There is still a long, long way to go for AI to be a genius at math proofs and at solving problems with higher levels of abstraction. But I believe coding, physicians' diagnosis/prognosis, matching a trial situation to a legal code or previous ruling, etc., could be achieved at or above human level in a few years.
 
Yeah man, according to your exponential model, in 50 years we’ll all be sentient gas clouds in space.

Still gonna be reading plain films tho :rolleyes:.

The point was always that efficiency growth is not exponential. While there is some efficiency still to be squeezed out, there’s growing rhetoric among computational scientists, which I’m sure is reflected in all our anecdotal experiences, that AI really hasn’t been getting much better over the past several years.

But I believe Coding, Diagnosis/Prognosis of Physicians, Aligning trial situation to a law code or previous ruling etc., could be achieved on par or better than human level in few years.

I don’t. It’s akin to saying we’ll get faster-than-light travel soon because look at how fast we went from the first flight to landing on the moon.

Maybe I should reframe: regardless of whether AI can feasibly overcome human abilities in all things long term without a monstrous infrastructure/cost investment, the implementation will be so slow it won’t make a functional difference to career physicians anyway, if only because the tedious process of verifying the safety of these things (assuming they work, which they don’t currently, and there is good reason to believe they won’t soon) takes decades.

Put another way, supercomputers have been able to solve geometry problems for a while. The problem isn’t whether they can solve it; the problem is whether it’s cheaper. I could build a multimillion-dollar supercomputer, or I could just, you know, hire a mathematician salaried at $90k/yr. I don’t question whether particular problems can be solved with enough resource investment; I question the financial feasibility of the approach. And despite planned or R&D’d supercomputer builds, the best data point for future expense estimates is recent expenses, barring an upheaval in how components are designed, and we don’t have such an upheaval on the horizon save QC, which I can personally guarantee will not help with the CNN problem.
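The supercomputer-versus-mathematician comparison above is essentially a break-even question. A rough sketch, using an assumed $5M build cost since the post only says "multimillion":

```python
# Back-of-envelope break-even for the cost comparison above.
# Figures are illustrative, not real procurement data.
supercomputer_cost = 5_000_000   # "multimillion dollar" build, assumed $5M
mathematician_salary = 90_000    # $/year, from the post
breakeven_years = supercomputer_cost / mathematician_salary
print(round(breakeven_years, 1))  # 55.6 years of salary per build
```

The point of the sketch is that the economics, not the raw capability, decide which approach wins; operating costs would only widen the gap.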
 
Anytime you project indefinite continuation of exponential growth, you are likely to be wrong. There are plenty of investors who have generated a 50% or greater return in a year. Project that over 20 years with any reasonable amount of starting capital and you own the entire US economy. Most fields show exponential growth until they run into a wall due to technical limitations.

Take things most people care about more than AI, like housing, food, gas, or energy. Despite all our technology, building a house costs more now (even adjusted for inflation) than it did 20 years ago. Food and fuel costs are similar. Despite big breakthroughs in making solar and wind cheap and efficient, we still pay more per kWh. At some point you run into the fact that your limiting factors are resources and labor. Compute/AI may run into a similar resource wall, where greater things can be achieved, but only at significant cost due to the compute resources required.
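The compounding claim above (50% per year sustained for 20 years) is easy to make concrete; the $1M starting capital is an assumed illustrative figure, not from the post:

```python
# Compounding a 50%/year return for 20 years, per the argument above.
capital = 1_000_000        # assumed "reasonable starting capital", $1M
rate = 0.50                # 50% annual return
final = capital * (1 + rate) ** 20
print(f"${final:,.0f}")    # $3,325,256,730 -- about $3.3B from $1M
```

Nobody sustains that, which is exactly the argument: exponential curves flatten when they hit resource walls.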
 
Yeah man according to your exponential model, in 50 years we’ll all be sentient gasclouds in space.

Still gonna be reading plainfilms tho :rolleyes:.

The point was always efficiency growth is not exponential. While there is some efficiency to still be squeezed out, there’s growing rhetoric among computational scientists, which I’m sure is reflected in all our anecdotal experiences, that AI really hasn’t been getting much better the past several years.



I don’t. It’s akin to saying we’ll get faster than light travel soon because look at how fast we went from the first flight to landing on the moon.

Maybe I should reframe: regardless of the feasibility of AI longterm overcoming human abilities in all things without monstrous infrastructure / cost investment; the implementation will be so slow it won’t make a functional difference to career physicians anyway. If only because the tedious process of verifying the safety of these things—assuming they work, which they don’t currently, and there is good reason to believe they won’t soon—takes decades.

Put another way, supercomputers have been able to solve geometry problems for a while. The question isn’t whether they can solve them; it’s whether doing so is cheaper. I could build a multimillion-dollar supercomputer, or I could just, you know, hire a mathematician salaried at $90k/yr. I don’t question whether particular problems can be solved with enough resource investment; I question the financial feasibility of the approach. And despite planned or R&D-stage supercomputer builds, the best data points for estimating future expenses are recent expenses, barring an upheaval in how components are designed. We don’t have such an upheaval on the horizon, save quantum computing, which I can personally guarantee will not help with the CNN problem.
The current state of AI (Med-PaLM 2 and GPT-4) has already made significant headlines by passing the USMLE, showing better clinical outcomes, etc. Projections based on the current roadmap, with technology scaling, die stacking, daisy chaining, and other architecture and algorithm improvements, point to a 50-100x improvement in 5-7 years, which is a pretty substantial improvement relative to the current state. Beyond that, nobody knows what will happen; it could improve or stagnate.
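For scale, the 50-100x figure implies a sustained annual improvement rate that can be back-computed (a sketch; the 50x/100x factors and the 5-7-year horizon are taken from the post above, not independent data):

```python
# Back out the implied annual improvement rate for "50-100x in 5-7 years".
def annual_rate(total_factor: float, years: int) -> float:
    """Constant annual growth rate that yields `total_factor` after `years`."""
    return total_factor ** (1 / years) - 1

for factor in (50, 100):
    for years in (5, 7):
        print(f"{factor}x over {years} yr -> {annual_rate(factor, years):.0%}/yr")
```

The implied rates range from roughly 75%/yr (50x over 7 years) to about 150%/yr (100x over 5 years), i.e. roughly doubling every year, which is steeper than classic Moore's-law doubling every 18-24 months.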
 
Anytime you project the indefinite continuation of exponential growth, you are likely to be wrong. Plenty of investors have generated a 50% or greater return in a single year. Project that forward for 20 years with any reasonable amount of starting capital and you would own the entire US economy. Most fields show exponential growth until they run into a wall imposed by technical limitations.

Take the things most people care about more than AI, like housing, food, gas, or energy. Despite all our technology, building a house costs more now (even adjusted for inflation) than it did 20 years ago. Food and fuel costs are similar. Despite big breakthroughs in making solar and wind cheap and efficient, we still pay more per kWh. At some point you run into the fact that your limiting factors are resources and labor. Compute/AI may hit a similar resource wall, where greater things can be achieved, but only at significant cost due to the compute resources required.
Agreed. My statements are based on the current roadmap, which is laid out for the next 5 to 7 years; nobody knows what will happen after that. Given the scaling that happened over the past 10 years, a projection of even 50-100x over the next 5 years is extremely conservative.
 
If you're so certain AI is going to destroy radiology, why don't you go pick up some extra moonlighting shifts to make some money before the collapse of our field, instead of spending that time fearmongering about things that are not going to happen?
 
If you're so certain AI is going to destroy radiology, why don't you go pick up some extra moonlighting shifts to make some money before the collapse of our field, instead of spending that time fearmongering about things that are not going to happen?

He’s not in radiology.
 
If you're so certain AI is going to destroy radiology, why don't you go pick up some extra moonlighting shifts to make some money before the collapse of our field, instead of spending that time fearmongering about things that are not going to happen?
I never said AI is going to destroy radiology. I just stated facts about AI's growth, its current roadmap, and its impact on various career paths, including physicians, programmers, and lawyers. Every career path realigns around the latest available tools and the supply and demand for that work at some point. More constructive comments would address which tasks can be augmented by current AI and how the transition to better AI could be managed so that it benefits both patients and current physicians: avoiding oversupply, and getting trained or retrained in what is most appropriate. There are some good comments and questions from @DoctwoB, @Cognovi, and a few others. It's just a debate, and in-depth, valid points backed by facts could help us all.
 
I never said AI is going to destroy radiology. I just stated facts about AI's growth, its current roadmap, and its impact on various career paths, including physicians, programmers, and lawyers. Every career path realigns around the latest available tools and the supply and demand for that work at some point. More constructive comments would address which tasks can be augmented by current AI and how the transition to better AI could be managed so that it benefits both patients and current physicians: avoiding oversupply, and getting trained or retrained in what is most appropriate. There are some good comments and questions from @DoctwoB, @Cognovi, and a few others. It's just a debate, and in-depth, valid points backed by facts could help us all.

I think people are frustrated with you because you're coming onto a board of healthcare professionals and professional trainees as what I presume is an undergrad STEM major: you clearly know very little about the complexities of incorporating new technology into healthcare, nothing about the process by which new healthcare technologies are vetted before entering the clinical space, and nothing about how real-world implementations of technologies differ from the published, idealized use cases (a symptom of the reproducibility crisis).

You’re smart and enthusiastic, and I can appreciate that, but you’re a bit naive, and you stubbornly persist when people who know far more about the innumerable intangible complexities of a healthcare environment, and who are therefore much more qualified than you to judge expected clinical relevance, tell you to slow your roll. Your response has repeatedly been “look how fast things are going in this NEW area,” with clear disregard for and indifference to the legitimate criticisms others have already levied against your points.

It makes it unpleasant to discuss on this board with you.
 
I never said AI is going to destroy radiology. I just stated facts about AI's growth, its current roadmap, and its impact on various career paths, including physicians, programmers, and lawyers. Every career path realigns around the latest available tools and the supply and demand for that work at some point. More constructive comments would address which tasks can be augmented by current AI and how the transition to better AI could be managed so that it benefits both patients and current physicians: avoiding oversupply, and getting trained or retrained in what is most appropriate. There are some good comments and questions from @DoctwoB, @Cognovi, and a few others. It's just a debate, and in-depth, valid points backed by facts could help us all.
Dude. You come into a radiology forum with absolutely no background in radiology to lecture people who have done at least eight years of higher education plus even more years of specialized training and practice. And then you cite TED Talks and bull**** marketing stunts as your evidence for the AI stuff. Not only do you know nothing about radiology, you clearly don't know much about anything else. That's why people are annoyed with you. I could have a more constructive conversation with my dead grandma; at least she wouldn't cite ****ing TED Talks. You've got to be trolling at this point.
 