Discussion in 'Radiology' started by new2018, Mar 27, 2018.
Because the AI hype is so overblown.
The Uber death should show how absurd it is to think AI will replace radiologists. For the past 5 years, it has been accepted as fact that autonomous driving is safer than human driving. But at the first sign of a tech failure, testing the product was banned and multiple companies ceased testing on their own. The public's faith is destroyed, and the timeframe for autonomous driving was probably set back 5 years. And this was a scenario where the pedestrian was doing something extremely dangerous. How's that gonna work when the tech fails in the light of day?
How do you think the FDA is gonna react when early AI rollouts lead to a patient's death? You can't blame the patient for being reckless in this scenario.
And this is completely ignoring the fact that AI is light years away from being considered equal to radiologists at the simplest of tasks.
Because AI can potentially make our jobs easier, not eliminate them. Also, for people with engineering backgrounds, this is the best time to start training, so that by the time they finish residency they can make meaningful contributions to the field as its development infrastructure becomes more structured.
I think you are greatly overestimating the extent to which the Uber accident will delay autonomous vehicles. Companies deliberately chose to invest tens of billions of dollars into this technology, and regulators deliberately gave the green light to testing. Both the companies and the regulators knew well ahead of time that accidents and deaths were inevitably going to be part of the process, yet chose to proceed anyway. Testing will continue apace once the recency effect fades, since this was an eventuality factored into the business plan before a single dime was spent on the tech. They might tighten up the hiring bar for their human backup drivers to something higher than "mentally unstable ex-felon" as a result, though.
Furthermore, autonomous vehicles face a far greater obstacle to commercial adoption than automation of radiology does, because a car either has a human driver, or it doesn't. That's a huge leap to make. On the other hand, you don't have to make that leap for image recognition to disrupt radiology, since you can merely augment radiologist efficiency rather than cut them out entirely. Instead of having 20 radiologists to cover a hospital, you might need only 10 augmented by AI. A vehicle on the other hand has but 1 driver: it's either entirely human or entirely AI, and you have to get both the tech and the public acceptance to the point where you can make that bold, binary leap.
If medical image recognition is developed and adopted in a significant way, it will indeed make radiologists' jobs much easier, to the huge detriment of radiologists. Also, a very, very tiny fraction of radiology applicants have engineering backgrounds and the intention of making a career out of developing image recognition technology. The vast majority just want to do traditional radiology, so the OP is very correct to question why radiology is getting more competitive despite the threat of AI. Whether or not AI is going to make a large impact in the short to medium term is anyone's guess, but the uncertainty is there, and logically speaking, uncertainty should be factored into the price of a stock, whether the stock is an actual equity or a medical specialty.
True. Can’t argue with that logic
The push for automation and AI in radiology today is driven primarily by computer scientists and entrepreneurs who unfortunately don't understand radiology. There are far more radiologists with some background in computer science/engineering than vice versa, yet all the noise comes from the latter group. I myself have an engineering degree and briefly worked as an engineer prior to medical school. Unfortunately, these non-radiologists do not seem to understand that the field is more than just pattern recognition; it involves human cognition. While I don't doubt that one day AI will get very close to that point, it will not happen anywhere close to within our lifetimes, and by that time many if not most other jobs and specialties, both in and out of medicine, will have significant parts taken over by AI. It is conceptually hard for non-radiologists to grasp: because radiology is computer-based, most people assume it will be the easiest field to automate, yet it involves far less algorithmic thinking than many other specialties.
If AI can be shown to do the radiologist's job as well as or better than a human, then I am all for it taking over where it can. It will reduce costs and drive up efficiency tremendously. I have no interest in maintaining human control of any field merely out of self-interest in preserving careers; that would fall under the broken window fallacy of artificially propping up the job market. The problem is that we are nowhere near the technological capability of doing that, despite what the non-radiologists hope for and imagine.
Would love for AI to spit out unintelligible ICU chest film reports and do a few PICCs/para/thora so I can get a real lunch break!
I always tell new residents to learn to think as a radiologist first rather than search for image patterns; that is the difference between a smart radiologist and an average one. But I also think that AI will definitely change the way we practice medicine, maybe not today, but faster than we think.
Yes, it won't be something specific to radiology, but let's face it: radiology is the most technical field in medicine, and the infrastructure is already there (digital images, extensive networks, servers, PACS). It is a perfect starting point for AI in medicine. Radiology will be cannibalized first. It will make our job much easier but will definitely decrease the number of radiologists needed, at least for conventional reading. Maybe it will open other career options. We (all medical specialties) must adapt.
I'm going into radiology, but currently doing general surgery internship. My mindset has shifted somewhat over this past year regarding the role of AI in radiology. I also work as a software engineer training ML models for healthcare and have a reasonable idea about what AI can accomplish.
I believe AI/ML is capable of doing some tasks better than a radiologist within the domain of pattern recognition. We are in a renaissance of pattern recognition; it's not like the 80s, it's not like the 90s; it's new, it's state-of-the-art, and it's superhuman. Here are some examples of these tasks:
a. Is the central venous catheter in the right atrium?
b. Is the nasogastric tube in the stomach?
c. What is the bone age of this patient?
d. Is this lung lesion pre-cancerous?
e. Is there free air under the right hemi-diaphragm?
Each of these results in a critical decision that drives patient care, but is ultimately binary in nature: Should we advance the catheter/tube? Does the patient need an endocrine workup? Should we biopsy the lung? Does the patient need an emergent ex-lap? I believe that highly optimized machine learning systems can read these radiographs better than any individual radiologist, if we constrain the system to answer a single question. Of course, the radiograph may contain findings beyond the scope of the asked question. Because of this limitation, computers will not replace radiologists anytime soon.
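The "single constrained question" framing above can be sketched in a few lines. This is a purely illustrative toy: the proxy feature (tip distance below the diaphragm) and the 2 cm threshold are assumptions for the sake of the example, not clinical criteria; a real system would learn its decision boundary from pixels rather than a hand-picked number.

```python
# Illustrative toy only: one narrow clinical question ("is the NG tube
# in the stomach?") reduced to a binary decision over a single made-up
# proxy feature. Real systems learn such features from the image itself;
# this just shows the constrained, binary framing of the task.

def ng_tube_in_stomach(tip_below_diaphragm_cm: float,
                       threshold_cm: float = 2.0) -> bool:
    """Binary answer to exactly one question, nothing more."""
    return tip_below_diaphragm_cm >= threshold_cm

# Each answer maps directly onto a binary care decision:
# advance the tube, or leave it.
advance_tube = not ng_tube_in_stomach(0.5)
```

The point of the constraint is the narrowing of scope: the function above says nothing about a coincident pneumothorax on the same film, which is exactly the limitation the post describes.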
However, I think that radiologists should be aware that AI/ML tools will not stay in the reading room. For example, at least once a week I have to confirm placement of a line/tube/catheter, or verify that a chest tube hasn't caused a pneumothorax when put to water seal. About 75% of the time, I don't call the reading room and just rely on my own ability to read images. When I'm in doubt, I consult my chief resident or fellow (if they are around). If we're still not certain, we call radiology. If there were a freely available tool that could answer our question, we would likely try it first, if only to save time during a busy day of surgery.
You are still in training. In the real world, 99% of lines are read by the referring docs.
If portable chest x-rays are where AI will charge the field, then I will be right here holding the gates wide open for them.
The experience is very different depending on the practice set up.
In some hospitals, IR does the lion's share of central catheters.
In some hospitals, there is a PICC team and they rely on radiology report to confirm the lines.
Right now, cardiac nucs software is semi-automatic, and its interpretation sucks. And we are talking about a 0 or 1: in a cardiac stress test, you are pretty much only looking for changes in myocardial signal after stress compared to rest imaging. Reading a CT abdomen and pelvis is a totally different beast.
To be honest, nobody can predict the future. It may or may not happen. But most of the time, the future is not what even experts predict.
Name a medical field and I will give you good reasons why it can go down the drain.
There's data in these scans with clinical implications that humans can't HOPE to comprehend without some sort of new processing/software/AI.
Fortunately, for our jobs anyway, I truly don't think that in our lifetimes, AI will be able to comprehend it either.
I think that, in our lifetime, AI will augment our jobs, not take our jobs.
Self-teaching is a human trait.
If AI learns the human way, it will be prone to the same human mistakes we make.
Mid-levels. Much cheaper for an established derm practice to hire one rather than a new dermatologist... Last time I went to a dermatologist, I was seen by their NP until they found out I was an MD. I was also asked by the receptionist if it was OK for the "medical" student (NP student) to observe.
Agree with above.
1- Mid-levels already do a lot in Derm.
2- Let's say AI becomes so sophisticated that it is capable of interpreting a CXR. Don't you think the same AI will also be capable of looking at a skin mole and characterizing it? Especially since it has the advantage of magnifying the lesion. A lot of work is being done on technology that can decrease the need for skin biopsy. It is not yet good enough, but who knows what happens in the future.
3- Cosmetic derm: Already a lot of family doctors are doing it.
4- Dermatopathology: If AI becomes capable of reading MRI, it will also be capable of doing dermatopath.
5- Mohs surgery: First of all, it used to pay very well in the past; it pays merely well now, and that is just a matter of a change in reimbursement.
Second: If AI becomes very capable, this process will become semi-automatic and the reimbursement will definitely go down.
Overall, don't think too much about the future of a field. All it takes is one new technology for a field to become totally different in a decade. Do what you like and the rest will come.
For anyone following AI developments in healthcare, the FDA just approved the first device that uses ML to screen for diabetic retinopathy without an optometrist reading the image: FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems
The significance is that there is no ocular expert examining the image; it's a sign that the FDA is willing to bypass "expert providers" if automation is shown to be superior.
Curious if you actually read the article?
That article has nothing to do with replacing anyone because the machine is “superior”...? It specifically talks about being for populations who are not able to see their eye doctor as often as they’re supposed to, and if the image is deemed “more than mild diabetic retinopathy” they still go see their eye doctor who does the same exam and determines if treatment is necessary. The article also says it’s designed for clinics with healthcare professionals that aren’t used to dealing with eye diseases- aka a FM doctor/midlevel in BFE that can use a machine to screen his/her patients and ship them to an eye doctor if needed. It’s an automated screening tool, no different than the wonderful EKG machines. The fearmongering on here is ridiculous sometimes.
Your analytical thinking skills are lacking if you do not see this development as a significant milestone on the road to the automation of medicine. None of the reasons you gave above as supposedly neutralizing the impact of this development pass muster.
Previously, 100% of patients wanting to get their eyes checked had to see a trained provider. Now, only that subset who get tested by the automated system and get an abnormal result will be seen by a trained provider. This represents a loss of business to trained providers equal to whatever fraction of eye exams return "normal" results. What percentage of eye exams are normal? I'd estimate the majority. In other words, if this system and its upcoming iterations gain widespread adoption, they will absorb a large chunk, perhaps the majority, of eye screening business currently going to trained providers of eye screening services. If you cannot see that the FDA approval of this technology for clinical use as of the current year is a major development with troubling implications for the future then you should think harder.
I'm not even sure what point you're trying to make above. The theme of this thread is that automation poses a threat to radiologists and, by extension, other medical professionals. The threat is that, thanks to technology, there will no longer be a reliance on the services of trained professionals to provide the services these professionals have been trained for. Above you say, in effect, that the automated system is designed to perform eye screenings without the need for the trained eye professionals who were previously needed to perform such screenings; they can now be done by untrained individuals thanks to automation. You are absolutely right that this is the point of the system, but I can't for the life of me understand why you think this nullifies the notion that this development is a threat. Your analysis confirms the threat rather than negating it.
Overall, your thinking on this topic is mushy and incoherent.
That’s what elderly radiology attendings are for...
Yeah but those are expensive. Come to think of it, so will AI.
The point of this thread is fearmongering, which apparently you’re up for spreading. My analytical skills are just fine, but thank you for your passive aggressiveness.
I'll gladly reiterate my last point in a simpler fashion so that you may understand it. The screening will be done at places that don't have access to care (aka the patients aren't seeing an eye professional anyway; that's business already lost for these providers, not the new loss you're describing), leading to a referral to someone who knows wtf is going on and can either choose to act on it or not. Let me say it again for you: these patients aren't seeing eye doctors, and now they will be referred by a doctor who either wouldn't normally screen for them or just didn't because he/she wasn't comfortable doing so. That will actually increase business for these eye doctors, seeing as these patients weren't seeing them on the recommended schedule anyway. Did you catch it that time? It would serve you well to actually click on the article and read it, considering I basically quoted it when I said any patient the machine screens as more than mild diabetic retinopathy is still referred to an eye doctor for an evaluation of the eye (OMG, they're still getting to do their oh-so-valuable eye screening and not being replaced by a machine!!!!) to determine whether treatment is or isn't necessary. It is a SCREENING TOOL. This is no different than the FM doctor who is actually confident in eye screenings with his/her diabetic patients, sees something odd, and then refers the patient to the eye doctor, who then looks at the eye again. However, now it can be done in the lobby and save the FM doctor 5 minutes to discuss/examine something else. But then again, you just want to cherry-pick to try to make a ridiculous argument that may scare people away from a particular field.
This, again, is no different than an EKG machine reading AFib on a patient who actually doesn’t have it, but the FM doctor refers them to cardiology still because they aren’t comfortable saying yes or no. Said cardiologist sees EKG, may order another to confirm no AFib (OMG he/she gets to screen for disease even though the machine screened for disease), and sends the patient home. It doesn’t decrease anyone’s need in the system, all it provides is a screening tool for less knowledgeable providers so that something big isn’t missed.
All AI is going to do is lessen the workload on multiple specialties (including radiology but not limited to it). No one is being replaced, and if they are, it won’t be radiology first. You can go on seeing the glass as half empty though
is there a limit to how many times I can “like” this post?
radiology is DEFINITELY not going to be the first to go.
why so much fear about algorithms replacing a consultancy specialty that requires 5+ years of training, when it seems there’s less worry about the job security of those docs—relying on the rads consults— who train for equal or less time in their own respective field?
i think a much more vulnerable specialty is the generalist / internist / family meds. the field can easily go the way of anesthesiology, with practices made up of more NP’s / PAs / DNPs being overseen by fewer and fewer MDs. think mid levels seeing pts (which already happens), and MDs overseeing them and the AI algorithms, and then referring out whenever something’s not “classic”.
“does pt have diabetes / hypertension / CKD / hyperthyroidism / CHF / sepsis?” and “how to treat it?” is much simpler to codify than, “does postoperative patient have abscess / bowel obstruction / ischemia / pneumonia / leak?” on imaging...and “if we’re not sure, what’s the next best step???”
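The "simpler to codify" contrast above can be made concrete. As a hedged sketch: the numeric cutoffs below are the standard ADA diagnostic thresholds for diabetes (HbA1c >= 6.5%, fasting glucose >= 126 mg/dL), while the function and variable names are purely illustrative; real diagnosis involves repeat confirmation and clinical context.

```python
# Minimal sketch of how a guideline criterion codifies into a rule.
# Thresholds are the standard ADA diabetes cutoffs; everything else
# (names, structure) is illustrative.

def meets_diabetes_criteria(hba1c_pct: float,
                            fasting_glucose_mgdl: float) -> bool:
    """Crisp numeric rule: trivially machine-checkable."""
    return hba1c_pct >= 6.5 or fasting_glucose_mgdl >= 126.0

flagged = meets_diabetes_criteria(hba1c_pct=7.1, fasting_glucose_mgdl=110.0)

# Contrast: "does this post-op CT show an anastomotic leak?" has no
# comparably crisp numeric criterion to encode, which is the point
# the post is making about imaging questions.
```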
Case in point from David Bluemke's recent article in RSNA Radiology. "Radiology in 2018: Are You Working with AI or Being Replaced by AI?" May 2018-
"What can we glean from the FDA approach regarding AI applications? If an AI algorithm can read CT scans, can it also write medical prescriptions? Perhaps I could bypass my internist when I have the flu and instead see Dr Watson. So far, the computer is only licensed to read CT scans.
That may change. IBM Watson for Genomics was tested against 1018 cancer diagnoses that had targeted DNA sequencing of tumor and normal tissue in a study from University of North Carolina School of Medicine. There was 99% agreement with treatment plans from human oncologists (4). And, Watson found treatment options that human doctors missed in 30% of cases. In a different study, Watson analyzed 638 treatment recommendations for breast cancer. The concordance of Watson with treatment recommendations by oncologists was 93%. That study was done at a hospital in India, not the United States. Can Watson compete with Harvard-trained oncologists?"
Well... Is there a push for AI encroachment on these fields? Sure, they *could* be the victims of AI, but the fact that a hunter *could* more easily kill a cat than a buck doesn't mean that the cat has more to fear. If we are the primary target, we are in the greatest danger.
I'm not really sure how to feel about #1 on this article... (Tech in Asia - Connecting Asia's startup ecosystem)
They're actually marketing AI Image Analytics to Nurses and/or General Health Practitioners and even mentioned less reliance on "limited and expensive specialists". I'm still a medstudent so I don't know if this hype has any merit but how will something like this impact radiology as it stands now?
It is not that easy. Just marketing something to NPs or family physicians doesn't mean it will work. For example, if you make a new medication, EVEN if you market it to NPs or doctors, they cannot prescribe it and it cannot be sold in pharmacies without approval.
Sure, I hear ya. We—radiologists and medical imaging—are in their sights. But, what does “greatest danger” actually look like, chronologically and on a practical level?
Not to pick an argument with you/anyone in particular, but rather just to “argue a case” for how i/one can be comfortable choosing rads as a career...as this is all educated-guess forecasting:
Let’s say, hypothetically, imaging AI—for all modalities, to address every clinical question—already exists. How long before it actually comes to get YOUR job?
First, AI efficacy/non-inferiority must be proven. At least initially, this will be modality-, indication-, and diagnosis-specific, like "AI can diagnose acute chole by ultrasound in young noncirrhotic patients with no surgical or cancer history" as well as a radiologist can. I don't know how you'd eventually prove equivalence in terms of outright replacing a radiologist... maybe with thousands of head-to-head trials comparing every single modality/diagnosis/history/indication/patient population imaginable? Whatever; it will take time. I dunno how many years.
Second, FDA approval. Again, not sure how this shakes out precisely, but it’s not gonna be quick. Again, # of years?
Then, insurance companies and CMS have to decide to reimburse for it (and these financials have to be favorable in order to stimulate investment and implementation). Who knows, and again this is likely case by case rather than blanket replacement.
Then, the cultural shift—which, in medicine, is slow. Medical centers and practices need to begin investing and implementing the AI product(s). Then AI needs to garner favor among clinicians to promote widespread adoption...and finally it needs to come to your particular practice/center. Definitely years/decades.
...And AI-as-radiologist technology doesn’t exist yet, so, this is not a timeline I’m particularly concerned about. Will AI completely replace The Radiologist as a profession for humans? In my or a medical student’s lifetime?...Doubtful. But even if it does, it won’t occur at a pace that our field can’t keep up with in terms of changing, advancing, evolving, where human input will be required.
Final points of reassurance:
-Look how long it took for EMRs/PACS to reach widespread use from the time of the invention of the computer. I did internship less than a handful of years ago (at a non-dysfunctional, academic, tertiary hospital in a major city) where paper charts were still being used. woah.
-Many many times discussing cases or reviewing images with referring clinicians, I’m reminded of how their imaging knowledge is quite “varied”, to put it lightly. I’ve only been in this game a short while, but I’ve already witnessed countless scenarios where clinicians order the wrong study, or don’t know what to order. Or where surgeons want to review a negative/bland study on a patient they’re “concerned about”, and the discussion goes: but what about the gallbladder? Fine. And the stomach? Fine. And the pancreas? Fine. The bowel? Nothing special. The kidneys? Yes, those are kidneys... And these things happen at veteran attending levels.
To summarize, and to address the next post: although tech can advance quickly, medicine is slow to change, and clinicians need more from radiologists than just reports on what organs look like. So no, RadInterest, I'm not too worried about the opera singer wanting to market his AI software to nurses. Good luck to him though.
Exactly. It doesn’t mean jack that radiology is the one being targeted. They still have to make it work.
Cancer of all comers has received the vast vast majority of medical research funding. But here we are and so is every form of cancer.
Meanwhile, a random VA doctor who does some research on the side discovers the cure for hepatitis C, and now hepatobiliary docs worldwide are going to lose a huge chunk of their patient population.
Raymond Schinazi is a PhD organic chemist. He is a virologist who has contributed to the discovery of other antivirals.
There are a ton of virologist PhDs who have done amazing things
The point is that it’s one guy compared to widespread multifaceted efforts including mega corporations. Money getting pumped into something doesn’t automatically make it happen
A couple points: 1. Machine learning requires massive datasets, which theoretically exist in radiology. However, what the model will be doing is comparing the images to the radiologist's read in order to "learn" what a given diagnosis looks like. That means, at best, it is learning to become as accurate as a radiologist. Where are the massive datasets of images with a back-of-the-book "correct diagnosis"?
2. Reads are dependent on the clinical scenario, and what is put in the "reason for exam," as you know, is often woefully inadequate. The same hydronephrosis that could represent a stone or UPJ obstruction may be physiologic in a patient with an ileal conduit. A CT robot that reads stranding around the colon and then spits out a 20-item list of things it could represent isn't really that helpful.
3. If what I expect happens, which is that over our lifetime AI will augment the efficiency of, rather than replace, many medical jobs, then it's not like radiologists will be fired left and right. What will happen is just what happens in other fields with oversupply: a tighter job market for new grads, practices not replacing people who retire, etc. Even in fields with blatant oversupply like path or rad-onc, it's not like people are getting fired; it's just tougher for new folks to find jobs. That may be a problem for grads in 15 years, not for people matching now.
I think this is the worst case scenario and I am not confident we will even reach this. After reading about machine learning/AI, I think the best it will do is improve our diagnostic accuracy and efficiency. Detection will be better, for example, when it comes to pulmonary nodules and their measurements, masses, contour abnormalities, etc.
There are still A LOT of issues to solve before AI is as good as what the computer scientists and others predict, and I am very doubtful that the predicted promises of this technology will be met.
Frankly, as I have learned more about deep learning and AI in radiology and progressed further into radiology residency, I have become less concerned about its potential future impact as a threat. Radiology is far more nuanced than people outside the field think it is. And I was one of those people during intern year last year.
Random musing: is there a way to claim one's reads as one's own intellectual property, so that they can't be gobbled up by AI supercomputers?
What you can do is produce a report based less on structured reporting and more on an essay-type report: short, succinct, and addressing the clinical question.
I refuse to break down my report into little chunks to help machine learning.
I like structured reports. To each their own.
Agree with the points made above that express skepticism about AI. I also think a driving force behind more applications is increased interest in the procedural side of radiology (both inside and outside IR) and possibly decreased interest in other specialties. For example, internal medicine (3,737, down from 3,837) and pediatrics (1,934, down from 2,056) saw modest decreases in US senior applicants.
It's important to remember that students can only choose from a limited number of specialties, so competitiveness reflects students comparing different specialties. Other specialties have their own sources of uncertainty like changing regulations, new reimbursement policies, emerging procedures and lab tests, etc. in addition to AI. AI is, after all, applicable to any specialty. For example, AI could be very relevant to psychiatry--computers can perform psychometric tests and administer psychotherapy.
No matter how good AI might get, the American public is not going to allow it for at least decades. Look at self driving cars, they have hundreds of thousands of hours on the road but when there is one accident an entire city kicks them out. Automation is replacing a lot of jobs in this country, but medicine will not be one of them, at least not in my generation. Worry about whether you would like the field
You think we're going to have computer therapists?? That's literally one of the few jobs that depends on human interaction and the therapeutic alliance. Even if the robotic AI were perfect, it would require a huge change on the part of humans to accept it and respond to it accordingly.
Psychotherapy apps are actually a booming area of research. Example: Smartphone Cognitive Behavioral Therapy as an Adjunct to Pharmacotherapy for Refractory Depression: Randomized Controlled Trial
Just thought I’d pitch in. I’ve got 2 months left as a rads resident. IBM Watson came and gave a presentation at my institution a couple weeks ago. They marketed to us all the latest and greatest stuff they’re working on. I was shocked at how introductory current projects seemed. We all left feeling like our jobs were secure for many decades to come. Watson Health has 7,000 employees, and anybody actually on the team knows that they’re not even close to even starting to work on getting Watson to render diagnoses. They freely admit that. The level of stuff they’re working on is like 50,000 steps below “Watson, read this CT.”
Yes. The AI stuff currently out there that is anywhere near replacing radiologist input handles very narrow tasks:
-bone age of hand radiograph
-segment the muscle and fat on one slice of an abdominal CT
-circle the tip of a PICC on a chest radiograph
-breast density on mammogram
This is absolutely false, and presents a common misconception in machine learning, particularly deep learning. Training an AI on 1,000 images does not mean the model will only be as good as the best radiologist performing the training reads. Deep learning is incredibly synergistic, and can produce models that are superior to the gold-standard training set itself! That is, a well-trained model can actually find the incorrect reads in its own training set.
Suppose you have 500 chest xrays with right-sided pneumothoraces and 500 normal chest xrays. For simplicity, we can just label each of the pneumo xrays as "right pneumo" and each of the normals as "normal." Also suppose we purposefully mislabel one of the reads (e.g., label a right-sided pneumothorax as normal).
The raw images and labels become the input that trains the model. No information is provided as to why each image was called right pneumo or normal; the algorithm must learn what makes an image that contains a pneumo different from one that does not. And herein lies the true power of AI: the radiologist is trained to look for a thin white line with absent lung markings above it, but maybe there is something else that clues one into a pneumo. Maybe the right lung is 10% smaller than the left, maybe the right upper lobe is 12.3% smaller than the right lower lobe, maybe the patient's right hemidiaphragm is slightly depressed (or elevated)... none of these features (or "signs") is looked for when evaluating for a pneumothorax. After the model is trained, we might apply it to the training set itself to see if any images were misclassified; often this would lead us to find the one we purposefully mislabeled.
This synergy, and the ability to consider the whole image (i.e., without relying on specific "signs"), is one of the reasons deep learning can, and should, replace radiologist reads for limited-domain questions (e.g., "Is there a pneumothorax?", "Is there a fracture?", "What is the bone age?"). These pixel-level features may be minuscule and unnamed, yet in aggregate they often provide a more accurate read.
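The "find the mislabeled read in your own training set" idea above can be demonstrated on synthetic data. This is a sketch under stated simplifications: 2-D feature vectors stand in for images, and a tiny logistic regression stands in for a deep network, but the mechanism is the same one described in the post: train, score the training set, and flag examples where the model confidently disagrees with their label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "500 pneumo / 500 normal" xrays: each "image"
# is a 2-feature vector (think: pleural-line contrast, lung-field ratio,
# both invented for illustration). Class 1 = right pneumothorax.
n = 200
X0 = rng.normal(loc=[-1.0, -1.0], scale=0.4, size=(n, 2))  # normals
X1 = rng.normal(loc=[+1.0, +1.0], scale=0.4, size=(n, 2))  # pneumos
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n, dtype=float)

# Purposefully mislabel one pneumothorax as "normal".
mislabeled_idx = n + 7
y[mislabeled_idx] = 0.0

# Tiny logistic regression trained by gradient descent on cross-entropy.
Xb = np.hstack([X, np.ones((2 * n, 1))])   # append bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Score the training set itself and flag examples where the model
# confidently disagrees with the provided label.
p = 1.0 / (1.0 + np.exp(-Xb @ w))
suspect = np.where(np.abs(p - y) > 0.8)[0]
```

Because the two classes are well separated, essentially the only training example the model confidently disagrees with is the one we mislabeled, which illustrates how a model can outperform its own gold-standard training set on this narrow task.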
AI may or may not be able to do this. It might be validated in a huge multicenter trial with thousands of images and make national news, but the question of clinical implementation comes down to answering: so what?
It's a matter of workflow implementation, cost, and assuming medicolegal responsibility. It makes sense that AI may supplement radiologists, but more likely than not it will be expensive, require local manpower, and produce many nondiagnostic reads, overcalls, and undercalls on poor-quality or nonstandard radiographs due to patient condition, habitus, or poor technique.
Let's say an AI company develops a program or a cloud-based interface that identifies pneumos and other abnormalities on chest xray. How much is this service going to cost annually, and what will the local PACS interface and manpower requirements be?
Let's say a hospital purchases it and tries to operate it outside of radiology. Radiologists are likely unhappy due to losing professional fees on it (honestly they may be happy to be rid of chest xrays, but they would understand the future implications). Technologists want to call the radiologist to ask whether they should repeat an exposure: "don't ask me, call Watson. I won't read this." Now people use an AI-based interface to interpret their chest xrays, even in a super limited setting like post bronch, thoracentesis, or surgical central line/port placement to assess for pneumo and catheter position. How many of these are going to be flagged as no pneumo but abnormal, nondiagnostic, or with some other type of flag? The ordering physician (maybe an intern or midlevel provider) calls radiology to sort out this flag. "Whoa, whoa, did you nominate this to be read by us? This was read by Watson." Point being, it would horribly slow down workflow. What is it going to do with an incidental lytic lesion of the proximal humerus that needs to be worked up? It sees them? Great! What about distal clavicular osteolysis or a prominent benign glenohumeral subchondral cyst - will those nothing findings be flagged as abnormal and needing interpretation? Is the ordering provider willing to take medicolegal responsibility for this?
Let's say a hospital considers purchasing this product for radiology. First, as above, the rate-limiting step will be cost - there are a million AI startup companies, and they aren't going to offer their services for free. Great, maybe this helps and pushes studies the AI interprets as having critical findings (e.g., pneumo or malpositioned line) to the front of the worklist. Still, the AI system flags calcified lung nodules, things exterior to the patient, and poor-technique studies as abnormal or indeterminate. Does the radiologist have to issue a discrepancy in their dictation? Surely some record of the AI interpretation will need to be retained for medicolegal purposes. How happy and receptive are radiologists going to be when they have to issue disagreements on almost certainly benign findings flagged by the AI, or when an almost certainly benign finding flagged by the AI was in fact not benign and the radiologist faces litigation - "Oh Dr. JoshSt, I see you disagreed with the AI interpretation that this calcified lung nodule may represent an abnormality. Are you aware of the study of So-and-so et al. showing that AI significantly outperformed radiologists at identifying pulmonary nodules that ended up being underlying lung cancer?"
AI may supplement radiologists, but there is so much hype on the research side and poor understanding of how these programs will or will not integrate clinically. There is no doubt that machine and deep learning studies will help us understand imaging appearances and disease processes, but I question the feasibility of purchasing these soon-to-be very expensive programs to supplement radiologists, and I see it as far-fetched that they will replace radiologists.
Radiology as a field is on top of the AI trend. RSNA just named the editor of their to-be-premiered AI journal: RSNA Publications Online. From that link:
About Radiology: Artificial Intelligence
Held to the same high editorial standards as Radiology, Radiology: Artificial Intelligence, a new RSNA journal to be launched in early 2019, will highlight the emerging applications of machine learning and artificial intelligence in the field of imaging across multiple disciplines.
Maybe what we should say is that we should train on a data set large enough to accurately recapitulate the distribution and variability present in the population you intend to use your classification model on.
One question I have is: if we train the model to classify 'incorrect reads,' does that come at a cost to correctly classifying images when noise is present? I.e., can fitting the model to this new objective lead to false negatives?
Another major shortcoming that is seldom discussed in ML/AI, but remains a major problem, is class imbalance. Performance of all models suffers when disease prevalence approaches real-world levels (<1:100). I am not aware of a study that has tested clinician/radiologist performance vs AI on a study set that incorporates the true prevalence of a disease.
Do you think you'd do better on a True/False test if you knew beforehand that out of 100 questions, 50 must be True and 50 must be False? You would absolutely weight your likelihood of answering "T" or "F" accordingly. The same goes if there were 99 False and 1 True - you would be much less likely to label an answer "True."
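The arithmetic behind this is just Bayes' rule. A quick sketch with hypothetical accuracy numbers shows how the positive predictive value of the very same model collapses when it moves from a balanced study set to real-world prevalence:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule:
    P(disease | positive test) = TP / (TP + FP)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# Hypothetical model: 95% sensitivity, 95% specificity.
# On a balanced 50/50 study set it looks great...
print(round(ppv(0.95, 0.95, 0.50), 2))  # 0.95

# ...but at 1% prevalence most of its positive calls are false alarms.
print(round(ppv(0.95, 0.95, 0.01), 2))  # 0.16
```

Same model, same ROC curve - only the prevalence changed, and suddenly five out of six "positive" flags are wrong. That is the gap between a published validation study and a real worklist.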
While I ultimately believe AI is going to be incredibly helpful, I am concerned the hype and claims made are damaging and will delay its implementation. We have to be transparent about what AI can realistically accomplish right now and in the near term, if possible.
The advantage is that you don't need to train entirely on your exact population. Instead, you only need a model pretrained on the general task and a small collection of local examples to shift the model's position on the loss manifold and take advantage of the structure of the new distribution. In terms of 'incorrect reads', that does not come at a cost. Most approaches in fact use data perturbation to regularize the model during training, so the 'incorrect reads' are simply the cases where the model's confidence is high and the predicted label is not the target label.
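As a rough illustration of that fine-tuning idea - a hypothetical one-feature logistic model with made-up numbers; in a real deep network you'd freeze most layers rather than a single weight - keep the pretrained weight frozen and nudge only the bias with a handful of examples from the new population:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Pretrained" logistic model p = sigmoid(w*x + b) from the general task.
w, b = 2.0, 0.0

# A handful of (feature, label) examples from the new population, whose
# feature distribution is shifted relative to the pretraining data.
local = [(1.5, 1), (2.0, 1), (0.5, 0), (1.1, 0), (2.4, 1)]

# Fine-tune only the bias by gradient descent on the cross-entropy loss;
# the pretrained weight w stays frozen.
for _ in range(200):
    grad = sum(sigmoid(w * x + b) - y for x, y in local) / len(local)
    b -= 0.5 * grad

# The shifted decision boundary should now separate the local examples.
print(b, all((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in local))
```

A few local examples moved the decision boundary to the new population without touching the pretrained weight - the same logic, scaled up, is why a model pretrained on one hospital's images can be adapted to another with comparatively little data.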
That aside, I'm just as frustrated as you. The medical community compares the function model(256 pixels) to the output of doctor(same image + outside information). I'm reasonably confident that if a somewhat recent medical history were included in the dataset, the model would outperform us.
I work in NLP, not CV, but we've already had significant results in conducting a differential diagnosis that makes me much rather have a model+MD combo.
Now integrate that NLP program that extracts the EMR with the radiologist's workflow and you're in business.
Honest question, whose responsibility should this fall under? In my field, an integration of recent research is outside of our domain as it does not contribute anything new. My opinion is that since these techniques can significantly improve patient care/reduce ER load, wouldn't most medical professionals push to integrate this as fast as possible for the sake of the patient?
I'm not going to work for free for the sake of the patient, and I'm not going to push for something that will have me stocking shelves at Wal Mart for the sake of the patient, either.
The goal of medicine is not to profit off sick people. Radiology is a part of medicine.
In practice, we end up doing a lot of free work for patients, whether we like it or not. If you're not comfortable with that, you may want to choose another profession.
What I object to is a shifting of the money flow. The real goal of AI is not to create cheaper health care for "the sake of the patient," but to divert the money to corporate and tech stakeholders. You'll be stocking shelves for the sake of the non-profit's profit margin.