Because the AI hype is so overblown
The Uber death should show how absurd it is to think AI will replace radiologists. For the past five years, it has been accepted as fact that autonomous driving is safer than human drivers. But at the first sign of a tech failure, testing the product was banned and multiple companies ceased testing on their own. The public’s faith is destroyed, and the timeframe for autonomous driving was probably set back five years. And this was in a scenario where the pedestrian was doing something extremely dangerous. How’s that gonna work when the tech fails in the light of day?
How do you think the FDA is gonna react when early AI rollouts lead to a patient’s death? You can’t blame the patient for being reckless in this scenario.
And this is completely ignoring the fact that AI is light years away from being considered equal to radiologists at the simplest of tasks.
Because AI can potentially make our jobs easier, not eliminate them. Also, for people with an engineering background, this is the best time to start training, so that by the time they finish residency they can start making meaningful contributions to the field as its development infrastructure becomes more structured.
Furthermore, autonomous vehicles face a far greater obstacle to commercial adoption than automation of radiology does, because a car either has a human driver, or it doesn't.
Dermatology.
For anyone following AI developments in healthcare, the FDA just approved the first device that uses ML to screen for diabetic retinopathy without an optometrist reading the image: FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems
The significance is that there is no ocular expert examining the image; it’s a sign that the FDA is willing to bypass “expert providers” if automation is shown to be superior.
Curious if you actually read the article?
That article has nothing to do with replacing anyone because the machine is “superior”...? It specifically talks about being for populations who are not able to see their eye doctor as often as they’re supposed to, and if the image is deemed “more than mild diabetic retinopathy” they still go see their eye doctor who does the same exam and determines if treatment is necessary. The article also says it’s designed for clinics with healthcare professionals who aren’t used to dealing with eye diseases - aka an FM doctor/midlevel in BFE who can use a machine to screen his/her patients and ship them to an eye doctor if needed. It’s an automated screening tool, no different than the wonderful EKG machines. The fearmongering on here is ridiculous sometimes.
It specifically talks about being for populations who are not able to see their eye doctor as often as they’re supposed to, and if the image is deemed “more than mild diabetic retinopathy” they still go see their eye doctor who does the same exam and determines if treatment is necessary.
The article also says it’s designed for clinics with healthcare professionals who aren’t used to dealing with eye diseases - aka an FM doctor/midlevel in BFE who can use a machine to screen his/her patients and ship them to an eye doctor if needed.
Would love for AI to spit out unintelligible ICU chest film reports...
The point of this thread is fearmongering, which apparently you’re up for spreading. My analytical skills are just fine, but thank you for your passive aggressiveness.

Your analytical thinking skills are lacking if you do not see this development as a significant milestone on the road to the automation of medicine. None of the reasons you gave above as supposedly neutralizing the impact of this development stand up to scrutiny.
Previously, 100% of patients wanting to get their eyes checked had to see a trained provider. Now, only the subset who get tested by the automated system and get an abnormal result will be seen by a trained provider. This represents a loss of business to trained providers equal to whatever fraction of eye exams return “normal” results. What percentage of eye exams are normal? I’d estimate the majority. In other words, if this system and its upcoming iterations gain widespread adoption, they will absorb a large chunk, perhaps the majority, of the eye-screening business currently going to trained providers. If you cannot see that FDA approval of this technology for clinical use, as of this year, is a major development with troubling implications for the future, then you should think harder.
I'm not even sure what point you're trying to make in the above. The theme of this thread is that automation poses a threat to radiologists and, by extension, other medical professionals. The threat is that, thanks to technology, there will no longer be a reliance on trained professionals to provide the services they have been trained for. Above you say, in effect, that the automated system is designed to perform eye screenings without the trained eye professionals who were previously needed to perform them. They can now be done by untrained individuals thanks to automation. You are absolutely right that this is the point of the system, but I can't for the life of me understand why you think this nullifies the notion that this development is a threat. Your analysis confirms the threat; it doesn't negate it.
Overall, your thinking on this topic is mushy and incoherent.
The point of this thread is fearmongering...
The screening will be done at places that don’t have access to care (aka patients who aren’t seeing an eye professional anyway; that’s business these providers were already not getting, not what you’re describing), leading to a referral to someone who knows wtf is going on and can either choose to act on it or not. Let me say it again for you: these patients aren’t seeing eye doctors...
All AI is going to do is lessen the workload on multiple specialties (including radiology but not limited to it). No one is being replaced, and if they are, it won’t be radiology first. You can go on seeing the glass as half empty though 👍
is there a limit to how many times I can “like” this post?
radiology is DEFINITELY not going to be the first to go.
why so much fear about algorithms replacing a consultancy specialty that requires 5+ years of training, when there seems to be less worry about the job security of the docs relying on those rads consults, who train for equal or less time in their own respective fields?
i think a much more vulnerable specialty is the generalist / internist / family med. the field can easily go the way of anesthesiology, with practices made up of more NPs / PAs / DNPs being overseen by fewer and fewer MDs. think midlevels seeing pts (which already happens), MDs overseeing them and the AI algorithms, and then referring out whenever something’s not “classic”.
“does pt have diabetes / hypertension / CKD / hyperthyroidism / CHF / sepsis?” and “how to treat it?” is much simpler to codify than, “does postoperative patient have abscess / bowel obstruction / ischemia / pneumonia / leak?” on imaging...and “if we’re not sure, what’s the next best step???”
I'm not really sure how to feel about #1 on this article... (Tech in Asia - Connecting Asia's startup ecosystem)
They're actually marketing AI image analytics to nurses and/or general health practitioners, and they even mention less reliance on "limited and expensive specialists". I'm still a med student, so I don't know if this hype has any merit, but how will something like this impact radiology as it stands now?
Well... Is there a push for AI encroachment on these fields? Sure, they *could* be the victims of AI, but the fact that a hunter *could* more easily kill a cat than a buck doesn't mean that the cat has more to fear. If we are the primary target, we are in the greatest danger.
Exactly. It doesn’t mean jack that radiology is the one being targeted. They still have to make it work.
Cancer, across all comers, has received the vast, vast majority of medical research funding. But here we are, and so is every form of cancer.
Meanwhile, a random VA doctor who does some research on the side discovers the cure for hepatitis C, and now hepatobiliary docs worldwide are going to lose a huge chunk of their patient population.
3. If what I expect happens, which is that over our lifetime AI will augment the efficiency of many medical jobs rather than replace them, then it’s not like radiologists will be fired left and right. What will happen is just what happens in other fields with oversupply: a tighter job market for new grads, practices not replacing people who retire, etc. Even in fields with blatant oversupply like path or rad onc, it’s not like people are getting fired; it’s just tougher for new folks to find jobs. That may be a problem for grads in 15 years, not people matching now.
Agree with the points made above that express skepticism about AI. I also think a driving force behind the rise in applications is increased interest in the procedural side of radiology (both inside and outside IR) and possibly decreased interest in other specialties. For example, internal medicine (3,737, down from 3,837) and pediatrics (1,934, down from 2,056) saw modest decreases in US senior applicants.
It's important to remember that students can only choose from a limited number of specialties, so competitiveness reflects students comparing specialties against each other. Other specialties have their own sources of uncertainty, like changing regulations, new reimbursement policies, and emerging procedures and lab tests, in addition to AI. AI is, after all, applicable to any specialty. For example, AI could be very relevant to psychiatry: computers can perform psychometric tests and administer psychotherapy.
You think we’re going to have computer therapists?? That’s literally one of the few jobs that depends on human interaction and the therapeutic alliance. Even if the robotic AI were perfect, it would require a huge change on the part of humans to accept it and respond to it accordingly.
Just thought I’d pitch in. I’ve got 2 months left as a rads resident. IBM Watson came and gave a presentation at my institution a couple weeks ago. They marketed to us all the latest and greatest stuff they’re working on. I was shocked at how introductory the current projects seemed. We all left feeling like our jobs were secure for many decades to come. Watson Health has 7,000 employees, and anybody actually on the team knows that they’re not even close to starting to work on getting Watson to render diagnoses. They freely admit that. The level of stuff they’re working on is like 50,000 steps below “Watson, read this CT.”
A couple points: the first is that machine learning requires massive datasets, which theoretically exist in radiology. However, what the machine will be doing is comparing the images to the radiologist's read in order to "learn" what said diagnosis looks like. That means, at best, it is learning to become as accurate as a radiologist. Where are the massive datasets of images with a back-of-the-book "correct diagnosis"?
This is absolutely false, and presents a common misconception in machine learning, particularly deep learning. Training an AI on 1,000 images does not mean the model will only be as good as the best radiologist performing the training reads. Deep learning is incredibly synergistic, and can produce models that are superior to the gold-standard training set itself! That is, a well-trained model can actually find the incorrect reads in its own training set.
Suppose you have 500 chest x-rays with right-sided pneumothoraces and 500 normal chest x-rays. For simplicity, we can just label each of the pneumothorax x-rays as "right pneumo" and each of the normals as "normal." Also suppose we purposefully mislabel one of the reads (e.g., make a right-sided pneumothorax read "normal").
The raw images and reads become the input used to train the model. No information is provided as to why each image was called right pneumo or normal. The algorithm must learn what makes an image that contains a pneumo different from images that do not. And herein lies the true power of AI: the radiologist may be trained to look for a white line above the lung field with decreased contrast or black above it, but maybe there is something else that clues one into a pneumo. Maybe the right lung is 10% smaller than the left, maybe the right upper lobe is 12.3% smaller than the right lower lobe, maybe the patient's diaphragm on the right is slightly deflated (or inflated)... None of these features (or "signs") are looked for when evaluating for a pneumothorax. After the model is trained, we might apply it to the training set itself to see if any cases were misclassified; this would often lead us to find the one we purposefully mislabeled.
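A minimal sketch of that last step, with scikit-learn standing in for a deep model and random vectors standing in for the chest x-rays (everything here is synthetic and illustrative):

```python
# Toy label-noise detection: train on noisy labels, then re-score the
# training set and flag reads the model confidently disagrees with.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))               # stand-ins for 1,000 x-rays
signal = X[:, :8].sum(axis=1)                 # hidden "pneumothorax signal"
y = (signal > 0).astype(int)                  # 1 = right pneumo, 0 = normal

y_noisy = y.copy()
bad = int(np.argmax(signal))                  # an obvious pneumo...
y_noisy[bad] = 0                              # ...purposefully read as "normal"

model = LogisticRegression(max_iter=1000).fit(X, y_noisy)

# Confident disagreement between model output and label flags suspect reads.
p = model.predict_proba(X)[:, 1]
suspects = np.flatnonzero(np.abs(p - y_noisy) > 0.9)
print("suspected mislabels:", suspects, "| planted:", bad)
```

On real films the classifier would be a CNN, but the re-scoring trick is the same: the model fits the bulk of the data, so the planted mislabel stands out.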
This synergy and ability to consider the whole image (i.e., without looking for or relying on specific "signs") is one of the reasons deep learning can, and should, replace radiologist reads for limited-domain questions (e.g., "Is there a pneumothorax?", "Is there a fracture?", "What is the bone age?"). These pixel-level features may be minuscule and unnamed, yet their aggregate probability often provides a more accurate read.
This is absolutely false, and presents a common misconception in machine learning, particularly deep learning. Training an AI on 1,000 images does not mean the model will only be as good as the best radiologist performing the training reads. Deep learning is incredibly synergistic, and can produce models that are superior to the gold-standard training set itself! That is, a well-trained model can actually find the incorrect reads in its own training set.
maybe what we should say is that we should train on a large enough data set to accurately recapitulate the distribution and variability present in the population you intend to use your classification model on.
One question I have: if we train the model to flag 'incorrect reads', does that come at a cost to correctly classifying when noise is present in the image? I.e., can fitting the model to this new objective lead to false negatives?
Another major shortcoming that is seldom discussed in ML/AI, but remains a major problem, is class imbalance. The performance of all models suffers when disease prevalence approaches real-world levels (<1:100). I am not aware of a study that has tested clinician/radiologist performance vs. AI on a study set that incorporates the true prevalence of a disease.
Do you think you'd do better on a True/False test if you knew beforehand that, out of 100 questions, 50 must be True and 50 must be False? You would absolutely weight your likelihood of answering "T" or "F" accordingly. The same goes if there were 99 False and 1 True: you would be much less likely to label an answer "True."
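To put rough numbers on that (the sensitivity/specificity figures are purely illustrative; the math is just Bayes' rule):

```python
# How the same "95% accurate" test looks at study-set vs. real-world prevalence.
# PPV = sens*prev / (sens*prev + (1 - spec)*(1 - prev))
sens, spec = 0.95, 0.95

for prev in (0.50, 0.01):                     # balanced study set vs. <1:100 reality
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}")
# prevalence 50%: PPV = 95.0%
# prevalence 1%: PPV = 16.1%
```

Same model, same ROC curve; at realistic prevalence most positive calls are false alarms.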
While I ultimately believe AI is going to be incredibly helpful, I am concerned the hype and claims made are damaging and will delay its implementation. We have to be transparent about what AI can realistically accomplish right now and in the near term, if possible.
The advantage is that you don't need to train entirely on your exact population. Instead, you only need the general task and a small local collection to shift the model's position on the loss manifold and take advantage of the new distribution. In terms of 'incorrect reads', that does not come at a cost. Most approaches in fact use data perturbation to regularize the model during training, so the 'incorrect reads' are simply the cases where the model's confidence is significantly high and the predicted label is not the target label.
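For the curious, that "shift" is usually plain fine-tuning; a sketch with PyTorch/torchvision (assumed installed; random tensors stand in for the small local collection):

```python
# Fine-tuning sketch: keep a generically pretrained backbone frozen and
# nudge only a new head with a small local dataset.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                    # preserve the general features
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: pneumo vs. normal

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 3, 224, 224)               # stand-in for the local films
y = torch.randint(0, 2, (16,))
for _ in range(5):                             # a few steps shift the model
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```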
That aside, I'm just as frustrated as you. The medical community compares the function model(256 pixels) to the output of doctor(same image + outside information). I'm reasonably confident that if even a somewhat recent medical history were included in the dataset, the model would outperform us.
I work in NLP, not CV, but we've already had significant enough results in conducting differential diagnosis that I'd much rather have a model+MD combo.
Now integrate that NLP program that extracts the EMR with the radiologist's workflow and you're in business.
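A purely hypothetical sketch of what that integration could look like (the cue list and weights below are invented for illustration, not from any real product):

```python
# Pull a few history cues out of the EMR note with simple NLP and let them
# shift the image model's pneumothorax probability in log-odds space.
import math
import re

CUES = {r"trauma|mvc|rib fracture": 1.0,       # invented cue -> weight pairs
        r"prior pneumothorax": 1.5}

def combined_prob(image_prob: float, note: str) -> float:
    """Adjust the image model's probability using cues found in the note."""
    logit = math.log(image_prob / (1 - image_prob))
    for pattern, weight in CUES.items():
        if re.search(pattern, note.lower()):
            logit += weight
    return 1 / (1 + math.exp(-logit))

# History raises suspicion: 30% from pixels alone -> ~54% with the note.
print(combined_prob(0.30, "s/p MVC with rib fractures"))
```

A real system would learn those weights rather than hand-code them, but the point stands: the note moves the read.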
Honest question, whose responsibility should this fall under? In my field, an integration of recent research is outside of our domain as it does not contribute anything new. My opinion is that since these techniques can significantly improve patient care/reduce ER load, wouldn't most medical professionals push to integrate this as fast as possible for the sake of the patient?
I'm not going to work for free for the sake of the patient, and I'm not going to push for something that will have me stocking shelves at Walmart for the sake of the patient, either.