Discussion in 'Radiology' started by new2018, Mar 27, 2018.
Because the AI hype is so overblown.
The Uber death should show how absurd it is to think AI will replace radiologists. For the past five years, autonomous driving has been treated as settled fact to be safer than human drivers. But at the first sign of a tech failure, testing of the product was banned and multiple companies ceased testing on their own. The public’s faith is destroyed, and the timeline for autonomous driving was probably set back five years. And this was a scenario where the pedestrian was doing something extremely dangerous. How’s that gonna work when the tech fails in the light of day?
How do you think the FDA is gonna react when early AI rollouts lead to a patient’s death? You can’t blame the patient for being reckless in this scenario.
And this is completely ignoring the fact that AI is light years away from being considered equal to radiologists at the simplest of tasks.
Because AI can potentially make our jobs easier, not eliminate them. Also, for people with an engineering background, this is the best time to start training, so that by the time they finish residency they can start making meaningful contributions to the field as its development infrastructure becomes more structured.
I think you are greatly overestimating the extent to which the Uber accident will delay autonomous vehicles. Companies deliberately chose to invest tens of billions of dollars into this technology, and regulators deliberately gave the green light to testing. Both the companies and the regulators knew well ahead of time that accidents and deaths were inevitably going to be part of the process, yet chose to proceed anyway. Testing will continue apace once the recency effect fades, since this was an eventuality factored into the business plan before a single dime was spent on the tech. They might tighten the hiring bar for their human backup drivers to something higher than "mentally unstable ex-felon" as a result, though.
Furthermore, autonomous vehicles face a far greater obstacle to commercial adoption than automation of radiology does, because a car either has a human driver, or it doesn't. That's a huge leap to make. On the other hand, you don't have to make that leap for image recognition to disrupt radiology, since you can merely augment radiologist efficiency rather than cut them out entirely. Instead of having 20 radiologists to cover a hospital, you might need only 10 augmented by AI. A vehicle on the other hand has but 1 driver: it's either entirely human or entirely AI, and you have to get both the tech and the public acceptance to the point where you can make that bold, binary leap.
If medical image recognition is developed and adopted in a significant way, it will indeed make radiologists' jobs much easier, to the huge detriment of radiologists. Also, a very, very tiny fraction of radiology applicants have engineering backgrounds and the intention of making a career out of developing image recognition technology. The vast majority just want to do traditional radiology, so the OP is very correct to question why radiology is getting more competitive despite the threat of AI. Whether or not AI is going to make a large impact in the short to medium term is anyone's guess, but the uncertainty is there, and logically speaking, uncertainty should be factored into the price of a stock, whether the stock is an actual equity or a medical specialty.
True. Can’t argue with that logic
The push for automation and AI in radiology today is driven primarily by computer scientists and entrepreneurs who, unfortunately, don't understand radiology. There are far more radiologists with some background in computer science/engineering than vice versa, yet all the noise comes from the latter group. I myself have an engineering degree and briefly worked as an engineer prior to medical school. Unfortunately, these non-radiologists do not seem to understand that the field is more than just pattern recognition; it involves human cognition. While I don't doubt that one day AI will get very close to that point, it will not happen in our lifetimes or anywhere close, and by that time many if not most other jobs and specialties, both in and out of medicine, will have had significant parts taken over by AI. This seems conceptually hard for non-radiologists to grasp: because radiology is computer-based, most people assume it will be the easiest specialty to automate, yet it involves far less purely algorithmic thinking than many other specialties.
If AI can be shown to do the radiologist's job as well as or better than a human, then I am all for it taking over where it can. It will reduce costs and drive up efficiency tremendously. I have no interest in maintaining human control of any field merely out of self-interest in preserving careers; that would fall under the broken-window fallacy of artificially propping up the job market. The problem is that we are nowhere near the technological capability of doing that, despite what the non-radiologists hope for and imagine.
Would love for AI to spit out unintelligible ICU chest film reports and do a few PICCs/para/thora so I can get a real lunch break!
I always tell new residents to learn to think as a radiologist first rather than search for image patterns, and that is the difference between a smart radiologist and an average one. But I also think that AI will definitely change the way we practice medicine, maybe not today, but it will come faster than we think.
Yes, it won't be something specific to radiology, but let's face it: radiology is the most technical field in medicine, and the infrastructure is already there (digital images, extensive networks, servers, PACS). It is a perfect starting point for AI in medicine, so radiology will be cannibalised first. AI will make our job much easier but will definitely decrease the number of radiologists needed, at least for conventional reading; maybe it will open other career options. We (all medical specialties) must adapt.
I'm going into radiology, but currently doing general surgery internship. My mindset has shifted somewhat over this past year regarding the role of AI in radiology. I also work as a software engineer training ML models for healthcare and have a reasonable idea about what AI can accomplish.
I believe AI/ML is capable of doing some tasks better than a radiologist within the domain of pattern recognition. We are in a renaissance of pattern recognition; it's not like the '80s, it's not like the '90s; it's new, it's state-of-the-art, and it's superhuman. Here are some examples of these tasks:
a. Is the central venous catheter in the right atrium?
b. Is the nasogastric tube in the stomach?
c. What is the bone age of this patient?
d. Is this lung lesion pre-cancerous?
e. Is there free air under the right hemi-diaphragm?
Each of these results in a critical decision that drives patient care but is ultimately binary in nature: Should we advance the catheter/tube? Does the patient need an endocrine workup? Should we biopsy the lung? Does the patient need an emergent ex-lap? I believe that highly optimized machine learning systems can read these radiographs better than any individual radiologist, if we constrain the system to answer a single question. Of course, the radiograph may contain findings beyond the scope of the asked question. Because of this limitation, computers will not replace radiologists anytime soon.
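To make the "single constrained question" framing concrete, here is a toy sketch (every name and threshold here is hypothetical, not from any real system): a model emits a probability for one binary question, and anything near the decision boundary is deferred to a human reader rather than answered, which is exactly why such a system augments rather than replaces the radiologist.

```python
# Toy sketch: one constrained binary question with an abstain band.
# All names/thresholds are illustrative assumptions.

def answer_single_question(p_positive: float,
                           decision_threshold: float = 0.5,
                           abstain_band: float = 0.15) -> str:
    """Map a model's probability for one constrained question
    (e.g. 'Is the NG tube in the stomach?') to an action."""
    if abs(p_positive - decision_threshold) < abstain_band:
        return "refer_to_radiologist"  # uncertain: a human reads the film
    return "yes" if p_positive >= decision_threshold else "no"

# A confident model answers directly; a borderline one defers.
print(answer_single_question(0.97))  # -> yes
print(answer_single_question(0.55))  # -> refer_to_radiologist
print(answer_single_question(0.08))  # -> no
```

The abstain band is the design choice that matters: the system only answers when it is far from the boundary, and everything else still lands on a radiologist's worklist.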
However, I think that radiologists should be aware that AI/ML tools will not stay in the reading room. For example, at least once a week I have to confirm placement of a line/tube/catheter, or verify that a chest tube hasn't caused a pneumothorax when put to water-seal. About 75% of the time, I don't call the reading room and just rely on my own ability to read images. The times when I'm in doubt, I consult my chief resident or fellow (if they are around). If we're still not certain, we call radiology. If there were a freely available tool that could answer our question, we would likely try it first, if only to save time during a busy day of surgery.
You are still in training. In the real world, 99% of lines are read by the referring docs.
If portable chest X-rays are where AI will lead its charge into the field, then I will be right here holding the gates wide open for them.
The experience is very different depending on the practice set up.
In some hospitals, IR does the lion's share of central catheter placements.
In some hospitals, there is a PICC team and they rely on radiology report to confirm the lines.
Right now, cardiac nucs software is semi-automatic, and its interpretation sucks. And that is a 0-or-1 question: in a cardiac stress test, they pretty much only look for changes in myocardial signal after stress compared with rest imaging. Reading a CT of the abdomen and pelvis is a totally different beast.
To be honest, nobody can predict the future. It may or may not happen. But most of the time, the future is not what even experts predict.
Name a medical field and I will give you good reasons why it could go down the drain.
There's data in these scans with clinical implications that humans can't HOPE to comprehend without some sort of new processing/software/AI.
Fortunately, for our jobs anyway, I truly don't think that in our lifetimes, AI will be able to comprehend it either.
I think that, in our lifetime, AI will augment our jobs, not take our jobs.
Self-teaching is a human trait.
If AI can learn the human way, it will be prone to make the same human mistakes we do.
Mid-levels. It's much cheaper for an established derm practice to hire one rather than a new dermatologist... Last time I went to a dermatologist I was seen by their NP until they found out I was an MD. I was also asked by the receptionist if it was OK for the "medical" student (an NP student) to observe.
Agree with above.
1- Mid-levels already do a lot in Derm.
2- Let's say AI becomes so sophisticated that it is capable of interpreting a CXR. Don't you think the same AI will also be capable of looking at a skin mole and characterizing it? Especially since it has the advantage of magnifying the lesion. A lot of work is being done on technology that can decrease the need for skin biopsy. It is not yet good enough, but who knows what happens in the future.
3- Cosmetic derm: Already a lot of family doctors are doing it.
4- Dermatopathology: If AI becomes capable of reading MRI, it will also be capable of doing dermatopath.
5- Mohs surgery: First of all, it used to pay extremely well in the past and still pays well now, but that is just a matter of changes in reimbursement.
Second: If AI becomes very capable, this process will become semi-automatic and the reimbursement will definitely go down.
Overall, don't think too much about the future of a field. It is just a matter of a new technology that a field can become totally different in a decade. Do what you like and the rest will come.
For anyone following AI developments in healthcare, the FDA just approved the first device that uses ML to screen for diabetic retinopathy without an optometrist reading the image: FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems
The significance is that there is no ocular expert examining the image; it's a sign that the FDA is willing to bypass "expert providers" if automation is shown to be superior.
Curious if you actually read the article?
That article has nothing to do with replacing anyone because the machine is “superior”...? It specifically talks about populations who are not able to see their eye doctor as often as they’re supposed to, and if the image is deemed “more than mild diabetic retinopathy,” they still go see their eye doctor, who does the same exam and determines if treatment is necessary. The article also says it’s designed for clinics with healthcare professionals who aren’t used to dealing with eye diseases, aka an FM doctor/midlevel in BFE who can use a machine to screen his/her patients and ship them to an eye doctor if needed. It’s an automated screening tool, no different than the wonderful EKG machines. The fearmongering on here is ridiculous sometimes.
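For what it's worth, the triage logic being argued about here reduces to something very simple. A sketch (the grade labels and cutoff are illustrative assumptions, not the device's actual output labels): anything graded beyond "mild" is referred to an eye-care professional; everything else is rescreened later.

```python
# Hypothetical sketch of a screen-and-refer workflow: the machine only
# triages; the definitive exam and treatment decision stay with a human.
SEVERITY = ["none", "mild", "moderate", "severe", "proliferative"]

def triage(detected_grade: str) -> str:
    """Refer anything worse than mild retinopathy; otherwise rescreen."""
    if SEVERITY.index(detected_grade) > SEVERITY.index("mild"):
        return "refer to eye-care professional"
    return "rescreen later"

print(triage("mild"))      # -> rescreen later
print(triage("moderate"))  # -> refer to eye-care professional
```

Framed this way, the device creates referrals rather than eliminating them: every positive screen still ends in a human exam.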
Your analytical thinking skills are lacking if you do not see this development as a significant milestone on the road to the automation of medicine. None of the reasons you gave above as supposedly neutralizing the impact of this development stands up to scrutiny.
Previously, 100% of patients wanting to get their eyes checked had to see a trained provider. Now, only that subset who get tested by the automated system and get an abnormal result will be seen by a trained provider. This represents a loss of business to trained providers equal to whatever fraction of eye exams return "normal" results. What percentage of eye exams are normal? I'd estimate the majority. In other words, if this system and its upcoming iterations gain widespread adoption, they will absorb a large chunk, perhaps the majority, of eye screening business currently going to trained providers of eye screening services. If you cannot see that the FDA approval of this technology for clinical use as of the current year is a major development with troubling implications for the future then you should think harder.
I'm not even sure what point you're trying to make in the above. The theme of this thread is that automation poses a threat to radiologists and by extension other medical professionals. The threat is that thanks to technology, there will no longer be a reliance on the services of trained professionals to provide the services these professionals have been trained for. Above you say, in effect, that the automated system is designed to perform eye screenings without the need for trained eye professionals who were previously needed to perform such screenings. They can now be done by untrained individuals thanks to automation. You are absolutely right that this is the point of the system, but I can't for the life of me understand why you think this supposedly nullifies the notion that this development is a threat? Your analysis confirms the threat, not negates it.
Overall, your thinking on this topic is mushy and incoherent.
That’s what elderly radiology attendings are for...
Yeah but those are expensive. Come to think of it, so will AI.
The point of this thread is fearmongering, which apparently you’re up for spreading. My analytical skills are just fine, but thank you for your passive aggressiveness.
I’ll gladly reiterate my last point in a simpler fashion so that you may understand it. The screening will be done at places that don’t have access to care (aka patients who aren’t seeing an eye professional anyway; that’s actually business already lost for these providers, not what you’re describing), leading to a referral to someone who knows wtf is going on and can either choose to act on it or not. Let me say it again for you: these patients aren’t seeing eye doctors, and now they will be referred by a doctor who either wouldn’t normally screen them or just didn’t because he/she wasn’t comfortable doing so. That will actually increase business for these eye doctors, seeing as these patients weren’t seeing them on the recommended schedule anyway. Did you catch it that time? It would serve you well to actually click on the article and read it, considering I basically quoted it when I said any patient the machine screens as more than mild diabetic retinopathy is still referred to an eye doctor for an evaluation of the eye (OMG, they still get to do their oh-so-valuable eye exam and aren’t being replaced by a machine!!!!) to determine whether treatment is or isn’t necessary. It is a SCREENING TOOL. This is no different than the FM doctor who is actually confident in eye screenings with his/her diabetic patients, who sees something odd and then refers the patient to the eye doctor, who then looks at the eye again. Except now it can be done in the lobby and save the FM doctor 5 minutes to discuss/examine something else. But then again, you just want to cherry-pick to try to make a ridiculous argument that may scare people away from a particular field.
This, again, is no different than an EKG machine reading AFib on a patient who actually doesn’t have it, but the FM doctor refers them to cardiology still because they aren’t comfortable saying yes or no. Said cardiologist sees EKG, may order another to confirm no AFib (OMG he/she gets to screen for disease even though the machine screened for disease), and sends the patient home. It doesn’t decrease anyone’s need in the system, all it provides is a screening tool for less knowledgeable providers so that something big isn’t missed.
All AI is going to do is lessen the workload on multiple specialties (including radiology but not limited to it). No one is being replaced, and if they are, it won’t be radiology first. You can go on seeing the glass as half empty though
is there a limit to how many times I can “like” this post?
radiology is DEFINITELY not going to be the first to go.
why so much fear about algorithms replacing a consultancy specialty that requires 5+ years of training, when it seems there’s less worry about the job security of the docs relying on the rads consults, who train for equal or less time in their own respective fields?
i think a much more vulnerable specialty is the generalist / internist / family med doc. the field can easily go the way of anesthesiology, with practices made up of more NPs / PAs / DNPs being overseen by fewer and fewer MDs. think mid-levels seeing pts (which already happens), and MDs overseeing them and the AI algorithms, and then referring out whenever something’s not “classic”.
“does pt have diabetes / hypertension / CKD / hyperthyroidism / CHF / sepsis?” and “how to treat it?” is much simpler to codify than, “does postoperative patient have abscess / bowel obstruction / ischemia / pneumonia / leak?” on imaging...and “if we’re not sure, what’s the next best step???”
Case in point from David Bluemke's recent article in RSNA Radiology. "Radiology in 2018: Are You Working with AI or Being Replaced by AI?" May 2018-
"What can we glean from the FDA approach regarding AI applications? If an AI algorithm can read CT scans, can it also write medical prescriptions? Perhaps I could bypass my internist when I have the flu and instead see Dr Watson. So far, the computer is only licensed to read CT scans.
That may change. IBM Watson for Genomics was tested against 1018 cancer diagnoses that had targeted DNA sequencing of tumor and normal tissue in a study from University of North Carolina School of Medicine. There was 99% agreement with treatment plans from human oncologists (4). And, Watson found treatment options that human doctors missed in 30% of cases. In a different study, Watson analyzed 638 treatment recommendations for breast cancer. The concordance of Watson with treatment recommendations by oncologists was 93%. That study was done at a hospital in India, not the United States. Can Watson compete with Harvard-trained oncologists?"
Well... Is there a push for AI encroachment on these fields? Sure, they *could* be the victims of AI, but the fact that a hunter *could* more easily kill a cat than a buck doesn't mean that the cat has more to fear. If we are the primary target, we are in the greatest danger.