Zeke Emanuel says Gas, Rads, and Path will be "displaced" by machines in new NEJM article


Carbocation1
Full Member · 10+ Year Member
Joined: Nov 23, 2012 · Messages: 692 · Reaction score: 322
Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
Ziad Obermeyer, M.D., and Ezekiel J. Emanuel, M.D., Ph.D.

N Engl J Med 2016; 375:1216-1219. September 29, 2016. DOI: 10.1056/NEJMp1606181

By now, it’s almost old news: big data will transform medicine. It’s essential to remember, however, that data by themselves are useless. To be useful, data must be analyzed, interpreted, and acted on. Thus, it is algorithms — not data sets — that will prove transformative. We believe, therefore, that attention has to shift to new statistical tools from the field of machine learning that will be critical for anyone practicing medicine in the 21st century.

First, it’s important to understand what machine learning is not. Most computer-based algorithms in medicine are “expert systems” — rule sets encoding knowledge on a given topic, which are applied to draw conclusions about specific clinical scenarios, such as detecting drug interactions or judging the appropriateness of obtaining imaging. Expert systems work the way an ideal medical student would: they take general principles about medicine and apply them to new patients.
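(Illustrative aside, not part of the article: a minimal sketch of that expert-system style, with a couple of hand-written rules applied to a new patient's medication list. The interacting pairs and the example patient are invented.)

```python
# Minimal expert-system sketch: hand-coded rules applied to a new patient.
# The interaction pairs and the example patient are invented, not a real formulary.
INTERACTING_PAIRS = {
    frozenset({"warfarin", "ibuprofen"}),         # bleeding risk
    frozenset({"lisinopril", "spironolactone"}),  # hyperkalemia risk
}

def check_interactions(med_list):
    """Return every known interacting pair found in a patient's medication list."""
    meds = [m.lower() for m in med_list]
    flagged = []
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            if frozenset({a, b}) in INTERACTING_PAIRS:
                flagged.append((a, b))
    return flagged

print(check_interactions(["Warfarin", "Metformin", "Ibuprofen"]))
# [('warfarin', 'ibuprofen')]
```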

Machine learning, conversely, approaches problems as a doctor progressing through residency might: by learning rules from data. Starting with patient-level observations, algorithms sift through vast numbers of variables, looking for combinations that reliably predict outcomes. In one sense, this process is similar to that of traditional regression models: there are outcomes, covariates, and statistical functions linking the two. But where machine learning shines is in handling enormous numbers of predictors — sometimes, remarkably, more predictors than observations — and combining them in nonlinear and highly interactive ways.1 This capacity allows us to use new kinds of data, whose sheer volume or complexity would previously have made analyzing them unimaginable.
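(Illustrative aside, not from the article: a minimal sketch of that contrast on synthetic data, assuming scikit-learn is available. A model that tolerates more predictors than observations is fit alongside an ordinary logistic regression; all data and numbers are made up.)

```python
# Illustrative sketch (not from the article), assuming scikit-learn is installed:
# synthetic data with more candidate predictors than observations, fit with an
# ordinary logistic regression and with a nonlinear ensemble model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 500 "patients", 2,000 candidate predictors, only 30 of them truly informative.
X, y = make_classification(n_samples=500, n_features=2000, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=2000),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, "held-out AUC:", round(auc, 3))
```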

Consider a chest radiograph. Some radiographic features might predict an important outcome, such as death. In a standard statistical model, we might use the radiograph’s interpretation — “normal,” “atelectasis,” “effusion” — as a variable. But instead, why not let the data speak for themselves? Leveraging dramatic advances in computational power, digital pixel matrixes underlying radiographs become millions of individual variables. Algorithms then go to work, clustering pixels into lines and shapes and ultimately learning contours of fracture lines, parenchymal opacities, and more. Even traditional insurance claims data can take on a new life: diagnostic codes trace an intricate, dynamic picture of patients’ medical histories, far richer than the static variables for coexisting conditions used in standard statistical models.
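(Illustrative aside, not from the article: a sketch of the pixels-as-variables idea using scikit-learn's bundled 8x8 digit images as a stand-in for radiographs; each pixel simply becomes one predictor column.)

```python
# Sketch of "pixels as variables" (illustration only): each pixel of a small image
# becomes one predictor. scikit-learn's bundled 8x8 digit images stand in for
# radiographs here.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X = digits.images.reshape(len(digits.images), -1)  # flatten each 8x8 image -> 64 pixel variables
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```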

Of course, letting the data speak for themselves can be problematic. Algorithms might “overfit” predictions to spurious correlations in the data, or multiple collinear, correlated predictors could produce unstable estimates. Either possibility can lead to overly optimistic estimates of the accuracy of a model and exaggerated claims about real-world performance. These concerns are serious and must be addressed by testing models on truly independent validation data sets, from different populations or periods that played no role in model development. In this way, problems in the model-fitting stage, whatever their cause, will show up as poor performance in the validation stage. This principle is so important that in many data-science competitions, validation data are released only after teams upload their final algorithms built on another publicly available data set.
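(Illustrative aside, not from the article: a small synthetic demonstration of why truly independent validation matters. A flexible model fit to pure noise looks nearly perfect in-sample and falls to chance on held-out data.)

```python
# Synthetic illustration of why independent validation matters: a flexible model
# fit to pure noise looks excellent on its own training data and falls to chance
# on data that played no role in model fitting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))      # 500 noise predictors, no real signal
y = rng.integers(0, 2, size=200)     # random binary outcome

X_train, y_train = X[:100], y[:100]
X_valid, y_valid = X[100:], y[100:]  # truly independent of model development

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("apparent (training) accuracy:   ", model.score(X_train, y_train))  # ~1.0
print("independent validation accuracy:", model.score(X_valid, y_valid))  # ~0.5
```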

Another key issue is the quantity and quality of input data. Machine learning algorithms are highly data hungry, often requiring millions of observations to reach acceptable performance levels.2 In addition, biases in data collection can substantially affect both performance and generalizability. Lactate might be a good predictor of the risk of death, for example, but only a small, nonrepresentative sample of patients have their lactate levels checked. Private companies spend enormous resources to amass high-quality, unbiased data to feed their algorithms, and existing data in electronic health records (EHRs) or claims databases need careful curation and processing to become usable.

Finally, machine learning does not solve any of the fundamental problems of causal inference in observational data sets. Algorithms may be good at predicting outcomes, but predictors are not causes.3 The usual commonsense caveats about confusing correlation with causation apply; indeed, they become even more important as researchers begin including millions of variables in statistical models.

Machine learning has become ubiquitous and indispensable for solving complex problems in most sciences. In astronomy, algorithms sift through millions of images from telescope surveys to classify galaxies and find supernovas. In biomedicine, machine learning can predict protein structure and function from genetic sequences and discern optimal diets from patients’ clinical and microbiome profiles. The same methods will open up vast new possibilities in medicine. A striking example: algorithms can read cortical activity directly from the brain, transmitting signals from a paralyzed human’s motor cortex to hand muscles and restoring motor control.4 These advances would have been unimaginable without machine learning to process real-time, high-resolution physiological data.

Increasingly, the ability to transform data into knowledge will disrupt at least three areas of medicine. First, machine learning will dramatically improve the ability of health professionals to establish a prognosis. Current prognostic models (e.g., the Acute Physiology and Chronic Health Evaluation [APACHE] score and the Sequential Organ Failure Assessment [SOFA] score) are restricted to only a handful of variables, because humans must enter and tally the scores. But data could instead be drawn directly from EHRs or claims databases, allowing models to use thousands of rich predictor variables. Does doing so lead to better predictions? Early evidence from our own ongoing work, using machine learning to predict death in patients with metastatic cancer, provides some indication: we can precisely identify large patient subgroups with mortality rates approaching 100% and others with rates as low as 10%. Predictions are driven by fine-grained information cutting across multiple organ systems: infections, uncontrolled symptoms, wheelchair use, and more. Better estimates could transform advance care planning for patients with serious illnesses, who face many agonizing decisions that depend on duration of survival. We predict that prognostic algorithms will come into use in the next 5 years — although prospective validation will take several more years of data collection.

Second, machine learning will displace much of the work of radiologists and anatomical pathologists. These physicians focus largely on interpreting digitized images, which can easily be fed directly to algorithms instead. Massive imaging data sets, combined with recent advances in computer vision, will drive rapid improvements in performance, and machine accuracy will soon exceed that of humans. Indeed, radiology is already partway there: algorithms can replace a second radiologist reading mammograms5 and will soon exceed human accuracy. The patient-safety movement will increasingly advocate the use of algorithms over humans — after all, algorithms need no sleep, and their vigilance is the same at 2 a.m. as at 9 a.m. Algorithms will also monitor and interpret streaming physiological data, replacing aspects of anesthesiology and critical care. The time scale for these disruptions is years, not decades.

Third, machine learning will improve diagnostic accuracy. A recent Institute of Medicine report highlighted the alarming frequency of diagnostic errors and the lack of interventions to reduce them. Algorithms will soon generate differential diagnoses, suggest high-value tests, and reduce overuse of testing. This disruption will happen more slowly, over the next decade, for three reasons: first, the standard for diagnosis is unclear in many conditions (e.g., sepsis, rheumatoid arthritis) — unlike binary judgments in radiology or pathology (e.g., malignant or benign) — making it harder to train algorithms. Second, high-value EHR data are often stored in unstructured formats that are inaccessible to algorithms without layers of preprocessing. Finally, models need to be built and validated individually for each diagnosis.

Clinical medicine has always required doctors to handle enormous amounts of data, from macro-level physiology and behavior to laboratory and imaging studies and, increasingly, “omic” data. The ability to manage this complexity has always set good doctors apart from the rest. Machine learning will become an indispensable tool for clinicians seeking to truly understand their patients. As patients’ conditions and medical technologies become more complex, the role of machine learning will grow, and clinical medicine will be challenged to grow with it. As in other industries, this challenge will create winners and losers in medicine. But we are optimistic that patients, whose lives and medical histories shape the algorithms, will emerge as the biggest winners as machine learning transforms clinical medicine.

 
I'd like to see how well their cancer prognostication model does against an actual oncologist.

Like, I'm sure their model is way better than the current scoring models for prognostication, but there's something to be said about actually seeing the patient, and there's a lot of data there that can't be abstracted into a variable for the computer to eat. With just a bit of experience, one can look at two patients with identical numbers on paper and identify the one more likely to die sooner.
 
  • Like
Reactions: 1 user
Is it just me or has Zeke built a career as a physician seeking to destroy/fundamentally transform medicine?
 
  • Like
Reactions: 11 users
Additionally, I'd believe that eventually we might get a machine capable of reading a CXR, but it's actually more difficult than a layperson would think. There's a fair bit of judgement that has to be made regarding how much to read into a subtle abnormality (overcalling vs undercalling).

Many medical problems are probably more amenable to major machine assistance before imaging in general is. i.e. You could conceivably come up with a one-size-fits-all algorithm for treatment of DM (perhaps one of the most complicated common diseases to approach given the sheer # of equally valid possible approaches to treatment) pretty easily. You'd need a bunch of variables to take into account comorbidities/insurance/whether the patient is a truck driver/even patient preference, but they're all discrete variables and a machine could probably spit out recommendations. Hell, the machine might even do a better job than the doc at suggesting some things, because memory is fallible and it could certainly keep track of some of the economic stuff better than I can. I have no clue which GLP1 insurance X has on formulary this week, and that can frequently lead to delays in care.
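(A rough sketch of what that kind of rule-based recommender over discrete inputs could look like. The rules and the formulary lookup are invented for illustration and are not clinical guidance.)

```python
# Rough sketch of a rule-based recommender over discrete inputs. The rules and
# the formulary lookup are invented for illustration and are NOT clinical guidance.
HYPOTHETICAL_FORMULARY = {
    "insurer_x": {"metformin", "semaglutide"},
    "insurer_y": {"metformin", "glipizide"},
}

def suggest_next_agent(a1c, egfr, insurer, commercial_driver=False):
    """Return a candidate add-on agent for type 2 diabetes from discrete inputs."""
    covered = set(HYPOTHETICAL_FORMULARY.get(insurer, set()))
    if a1c < 7.0:
        return "continue current regimen"
    if egfr < 30:
        covered.discard("metformin")   # invented rule: avoid metformin in advanced CKD
    if commercial_driver:
        covered.discard("glipizide")   # invented rule: prefer low hypoglycemia risk
    return sorted(covered)[0] if covered else "refer for individualized review"

print(suggest_next_agent(a1c=8.4, egfr=55, insurer="insurer_x", commercial_driver=True))
```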

Of course... then you'd have to convince the patient of the regimen you want them on, listen to their concerns, and educate them on the specifics. That's a lot harder than deciding to go up on drug X, and is why I'm fairly certain physicians won't be out of work anytime soon.

The best way is something that we're already trending towards: machine-assisted physicians. The algorithm can help you manage your patient, can trigger reminders of XYZ (actual clinically relevant reminders, not the billing BS most reminders these days are), and you interpret that and apply it to your patient. The hybrid approach will get better and better as time goes on, but it's still important to understand what you're doing and why you're doing it.
 
  • Like
Reactions: 6 users
We haven't yet made a machine that can reliably interpret an ECG and he thinks we'll have them oversee anesthesia anytime in the next 20 years? I'll take that bet right now.
 
  • Like
Reactions: 21 users
Didn't finish reading the article (and honestly, I don't bother with Emanuel's BS anymore, as I think he's a senile old fart who has no idea what he's talking about), but wanted to make a point.

We haven't yet made a machine that can reliably interpret an ECG and he thinks we'll have them oversee anesthesia anytime in the next 20 years? I'll take that bet right now.

This is what I was going to say, but in relation to machines reading CXRs. Right now any machine-read EKG, if not stone-cold normal, is "omg, anterior MI cannot be excluded, posterior MI cannot be excluded, ? LVH" etc., even when somebody with knowledge of how EKGs work would tell you that it's normal.

If machines read CXRs, it's going to lead to so much overcalling of every little thing that many CXRs are going to turn into chest CTs looking for "possible nodule in left upper lobe, recommend CT".

Most physicians can read their own EKGs, but most don't review their own imaging, so it's going to lead to a lot of extra imaging studies.
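(Illustrative arithmetic only, with made-up numbers: at screening volume, even a modest drop in specificity from an overcalling automated reader translates into a large absolute number of follow-up CTs.)

```python
# Back-of-the-envelope arithmetic with made-up numbers: at screening volume, even a
# modest drop in specificity from an overcalling reader means many more follow-up CTs.
n_cxr = 100_000              # chest x-rays per year
prevalence = 0.01            # true actionable findings
sensitivity = 0.98
specificity_human = 0.97
specificity_machine = 0.90   # an "overcalling" automated reader

def follow_up_cts(specificity):
    true_pos = n_cxr * prevalence * sensitivity
    false_pos = n_cxr * (1 - prevalence) * (1 - specificity)
    return int(true_pos + false_pos)

print("follow-up CTs, human reader:  ", follow_up_cts(specificity_human))
print("follow-up CTs, overcalling AI:", follow_up_cts(specificity_machine))
```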
 
  • Like
Reactions: 1 users
He's pretty accurate in his analysis; however, I agree with all of you that his timeline is egregiously optimistic. What's more likely is that radiologists, critical care physicians, and pathologists will utilize these technologic breakthroughs to improve their patient care. I am sure Emanuel worded this carefully so as to garner more public attention to the matter.

It's natural to feel upset or uncomfortable when this topic is brought up. We have all spent so much time studying this material and trying to become experts, and the thought of a machine coming in and replacing us is quite distressing. While the next generation should be concerned, it's my belief that the absolute replacement of human work in the aforementioned fields is decades away. What's more likely is that these breakthroughs will significantly alter the job market over the next 20-30 years.

Regardless, resisting this type of inevitable change is futile. You're much better off accepting it, investigating how it works, predicting how it will affect your practice, and adjusting before everyone else does.

My 0.02
 
  • Like
Reactions: 3 users
"Second, machine learning will displace much of the work of radiologists and anatomical pathologists. These physicians focus largely on interpreting digitized images, which can easily be fed directly to algorithms instead." said a person who probably hasn't looked though a microscope since med school. No anatomic pathologists I know uses digital images for diagnosis. We still use glass slides.
 
  • Like
Reactions: 4 users
He's pretty accurate in his analysis; however, I agree with all of you that his timeline is egregiously optimistic. What's more likely is that radiologists, critical care physicians, and pathologists will utilize these technologic breakthroughs to improve their patient care. I am sure Emanuel worded this carefully so as to garner more public attention to the matter.

It's natural to feel upset or uncomfortable when this topic is brought up. We have all spent so much time studying this material and trying to become experts, and the thought of a machine coming in and replacing us is quite distressing. While the next generation should be concerned, it's my belief that the absolute replacement of human work in the aforementioned fields is decades away. What's more likely is that these breakthroughs will significantly alter the job market over the next 20-30 years.

Regardless, resisting this type of inevitable change is futile. You're much better off accepting it, investigating how it works, predicting how it will affect your practice, and adjusting before everyone else does.

My 0.02
Except people have been saying this same thing for decades now about how X was going to replace Y in medicine. Not a single one has happened yet, so I'm not too worried about it.
 
  • Like
Reactions: 1 user
You would think that Zeke wouldn't show his face in public, much less prognosticate about the future in the NEJM, given the meltdown of the Obamacare exchanges which he designed.
 
  • Like
Reactions: 2 users
Who the eff takes this guy seriously? I think this nonsensical **** he's saying is a sort of "Hey! Look at me! I'm still relevant!"
 
Yeah, this is true. Instead, we just outsource it. No need to invent technology for pattern recognition when you can pay someone halfway around the world a fraction of the cost.

http://www.nytimes.com/2003/11/16/business/who-s-reading-your-x-ray.html?_r=0
Meh, usually for overnight stuff only. That was a thing when I was in residency; it's been 6 years and it's not really expanding much.

Besides, there's value in being able to call the radiologist down the hall to discuss a case.
 
Meh, usually for overnight stuff only. That was a thing when I was in residency; it's been 6 years and it's not really expanding much.

Besides, there's value in being able to call the radiologist down the hall to discuss a case.


Definitely something to be said for in-house reads. We have nearly every single external film read by in-house people.
 
Yeah, this is true. Instead, we just outsource it. No need to invent technology for pattern recognition when you can pay someone halfway around the world a fraction of the cost.

http://www.nytimes.com/2003/11/16/business/who-s-reading-your-x-ray.html?_r=0

I think that having US-trained radiologists in other countries read the images is fine.

I think that having US-trained radiologists sign off on 4x the films from an Indian-trained radiologist without maintaining due process is a really bad idea.

Mainly because anytime something goes wrong a US jury is going to make the hospital or the radiology outsourcing company pay out the ass.
 
  • Like
Reactions: 1 user
I'd like to see a machine knock out Zeke Emanuel and for him to never be heard from again...And seriously, **** my fellow Republicans who nominated Donald ****ing Trump so that this monster returns to the White House in some form probably to inflict even more damage.
 
  • Like
Reactions: 2 users
Never even heard of this Zeke doucher in my life til this thread. I'd like to just plant one on this sucker.

Just one sweet knuckle sammich in his left mandibular condyle. Idc if it even hurts or does anything (i got wimpy ass hands).
 
Algorithms are useful for making stupid people stupider and can even make experts stupid.

We don't even have EMRs that can easily open charts from another hospital. I'm still calling other residents for records. If we are still using pagers and fax machines, I don't really see machines replacing us when they can't even help us with our work. So far, electronics have only served to increase the amount of useless busywork. Would be nice to see them help out with diagnostics once in a while.
 
  • Like
Reactions: 3 users
I've seen a fair amount of automation take over in pathology, but that doesn't really concern the pathologist so much, because they are the ones making sure that what the machines read out is correct; for that they have to do manual review. I know that there are certain slide review technologies out there that will definitely help, but nothing can replace manual review. This Zeke guy needs to go work in healthcare (oh wait, he does...or does he?) before he throws out these exaggerated comments.
 
Algorithms are useful for making stupid people stupider and can even make experts stupid.

We don't even have EMRs that can easily open charts from another hospital. I'm still calling other residents for records. If we are still using pagers and fax machines, I don't really see machines replacing us when they can't even help us with our work. So far, electronics have only served to increase the amount of useless busywork. Would be nice to see them help out with diagnostics once in a while.
That's because EMRs are not designed to help physicians communicate in the slightest.

EMRs are designed to assist A) Billing B) Billing C) Billing D) Billing E) Data collection for hospital QI/research F) Decrease adverse effects with order entry (and decrease costs/liability by making sure physicians do every component of the order themselves) G) Billing.

The completely unintuitive ICD10 coding system facilitates the billing and data collection efforts. But for someone actually interested in taking care of that individual patient, the listed diagnosis is absolutely worthless compared to a line or two of freetext telling you what the physician actually thinks is going on. It will also never go away (see A-D, G). By far the best historic notes for me to understand (and I've looked at a lot of old notes in the process of chart review) are just dictated records that tell you what the physician is considering and planning. Handwritten records are awful, and the current notes are nearly uninterpretable unless you take the time to chart via both the ICD10/problem-list system and freetext a real assessment (which I do, but which takes time).

On the other hand, I'm strongly in favor of well-done CPOE (because the old system, with potentially vague chicken scratches and a clerk with a high school education interpreting them, is silly), but that also requires a very robust system for understanding synonyms (many lab tests and panels can have a half dozen different names), the ability to write custom orders, a very, very, very light touch with pop-up alerts, and an absolute minimization of the number of clicks required for any given order (that means a strong system of reasonable defaults).
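(Illustrative sketch of the synonym point, with invented mappings: however the clinician types the order, it should resolve to one canonical orderable.)

```python
# Sketch of the synonym problem: many names for one orderable should resolve to a
# single canonical order. The mappings below are invented examples.
ORDER_SYNONYMS = {
    "cbc": "complete blood count",
    "complete blood count": "complete blood count",
    "cmp": "comprehensive metabolic panel",
    "chem 14": "comprehensive metabolic panel",
    "bmp": "basic metabolic panel",
    "chem 7": "basic metabolic panel",
    "basic metabolic panel": "basic metabolic panel",
}

def resolve_order(free_text):
    """Map whatever the clinician typed onto a canonical orderable, if known."""
    return ORDER_SYNONYMS.get(free_text.strip().lower())

for typed in ("CBC", "Chem 7", "troponin"):
    print(typed, "->", resolve_order(typed))
```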
 
  • Like
Reactions: 4 users
I'd like to see a machine intubate your patients. And reading a tumor slide? I don't know...

If a machine can read a CT scan or MRI, why couldn't it scan a slide for features of cancer cells?
 
If a machine can read a CT scan or MRI, why couldn't it scan a slide for features of cancer cells?

A cancer slide is not benign vs malignant. It needs to be evaluated carefully in order to grade and classify the malignancy and then evaluate for ancillary findings.

So what if a machine can recognize florid invasive ductal carcinoma? So can a pigeon and a 5 year old. Now show them flat epithelial atypia and see what happens.
 
  • Like
Reactions: 3 users
A cancer slide is not benign vs malignant. It needs to be evaluated carefully in order to grade and classify the malignancy and then evaluate for ancillary findings.

So what if a machine can recognize florid invasive ductal carcinoma? So can a pigeon and a 5 year old. Now show them flat epithelial atypia and see what happens.

You're a pathologist (I assume by the username) and obviously know more than me, but really I can't see why computer algorithms couldn't be programmed to do all of that. Certainly they can look at features of cells, mitosis/hpf, which types of stains react to the sample, etc. If the human eye can do it, a computer should be able to do it eventually (maybe not tomorrow, but think 2045).
 
If a machine can read a CT scan or MRI, why couldn't it scan a slide for features of cancer cells?

Um. Maybe because machines can't read CT or MRI.

You guys realize that when machines become so sophisticated that they can displace physician jobs, we will basically be at a full-scale takeover by machines. A la Skynet in Terminator. Silly med students.
 
  • Like
Reactions: 4 users
Um. Maybe because machines can't read CT or MRI.

You guys realize that when machines become so sophisticated that they can displace physician jobs, we will basically be at a full-scale takeover by machines. A la Skynet in Terminator. Silly med students.

Yeah seriously, 90% of other professionals would be out of a job by then too.
 
  • Like
Reactions: 1 users
ITT: people who know medicine but not computers complaining about people who know computers but not medicine.

We've taught computers to drive. They'll learn other things too, just probably not in the time frame suggested by this guy.
 
  • Like
Reactions: 1 user
I know more about computers than they know about medicine.
 
  • Like
Reactions: 1 users
ITT: people who know medicine but not computers complaining about people who know computers but not medicine.

We've taught computers to drive. They'll learn other things too, just probably not in the time frame suggested by this guy.

We've taught bonobos how to communicate with humans. Let's have them do our jobs as well.
 
You're a pathologist (I assume by the username) and obviously know more than me, but really I can't see why computer algorithms couldn't be programmed to do all of that. Certainly they can look at features of cells, mitosis/hpf, which types of stains react to the sample, etc. If the human eye can do it, a computer should be able to do it eventually (maybe not tomorrow, but think 2045).

They cannot replace pathology (or any diagnostic specialty) because although diagnoses may appear objective to a patient or medical student, they are, in fact, quite subjective.
 
  • Like
Reactions: 1 users
ITT: people who know medicine but not computers complaining about people who know computers but not medicine.

We've taught computers to drive. They'll learn other things too, just probably not in the time frame suggested by this guy.

Yet self-driving cars are frequently perplexed by a four-way stop sign. I think our odds are looking OK.
 
  • Like
Reactions: 1 user
ITT: people who know medicine but not computers complaining about people who know computers but not medicine.

We've taught computers to drive. They'll learn other things too, just probably not in the time frame suggested by this guy.
I'm not sure what impression you're getting from this thread, or if you just wanted to act superior. It seems like most people are saying exactly what you say: probably someday computers will do some of these things, but not in the near future as stated in the article.
 
They cannot replace pathology (or any diagnostic specialty) because although diagnoses may appear objective to a patient or medical student, they are, in fact, quite subjective.

If there is no objective basis to your findings then you are making **** up.

If there is an objective basis to your findings then a machine can, eventually, be taught to do what you do.

I agree that Zeke doesn't have the slightest f-ing clue when that will happen, or which fields it will happen to first.

I disagree with the above posters, who think it can't happen in 20 years. In our generation we've seen dozens of technologies go from science fiction to commonplace in less than a decade.

But most of all I disagree with anyone who thinks it can't happen to their profession at all. Any profession can become suddenly obsolete because of technology, and no one is invulnerable. It's a strong argument for developing a wide range of marketable skills.
 
Last edited:
  • Like
Reactions: 3 users
If there is no objective basis to your findings then you are making **** up.

If there is an objective basis to your findings then a machine can, eventually, be taught to do what you do.

There is an objective basis for pathology, but a shocking amount of subjectivity for a lot of difficult findings.

In my own field, for example, thyroid cytology (i.e., reading FNA biopsies) has clear buckets that samples are assigned to. If adequate (which is determined by a simple count), the sample is placed somewhere in Bethesda II-VI. Most of the time the clearly benign are clearly benign (II) and the clearly malignant are clearly malignant (VI), but there's a lot of gray area in between.

The problem is that if you got the best thyroid cytologists in the world and had them read the same slides, you'd find up to 20% discordance. One of them puts it as Bethesda III and another puts it as Bethesda IV, which can have real-world management consequences (repeat a biopsy vs. send the patient for surgery, for example). Even if you put these elite cytologists in the same room and have them confer, you'll still get up to 5% where they can't agree. The scariest part is that they've even done the experiment where they take the same slide and send it to the same cytologist a month later... you'll get a different answer some proportion of the time.
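(Illustrative toy example of how that kind of discordance is usually quantified, with invented ratings for ten hypothetical slides; raw agreement and Cohen's kappa are computed with scikit-learn.)

```python
# Toy illustration of quantifying rater discordance like the 20% figure above.
# Ratings are invented Bethesda categories for ten hypothetical FNA slides.
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 2, 3, 4, 6, 2, 3, 5, 2, 6]
rater_b = [2, 2, 4, 4, 6, 2, 3, 6, 2, 6]   # disagrees on two of the ten slides

raw_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print("raw agreement:", raw_agreement)                        # 0.8 -> 20% discordance
print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))
```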

A lot of these cases are *hard*, and there really isn't a clear right answer. Based on the current state of AI, I don't think that an algorithm is going to be much better. Even knowing what I know about inconsistency, I'd rather send a biopsy sample for one of my patients to one of those elite cytologists than trust in a black box.
 
  • Like
Reactions: 1 user
There is an objective basis for pathology, but a shocking amount of subjectivity for a lot of difficult findings.

In my own field, for example, thyroid cytology (i.e., reading FNA biopsies) has clear buckets that samples are assigned to. If adequate (which is determined by a simple count), the sample is placed somewhere in Bethesda II-VI. Most of the time the clearly benign are clearly benign (II) and the clearly malignant are clearly malignant (VI), but there's a lot of gray area in between.

The problem is that if you got the best thyroid cytologists in the world and had them read the same slides, you'd find up to 20% discordance. One of them puts it as Bethesda III and another puts it as Bethesda IV, which can have real-world management consequences (repeat a biopsy vs. send the patient for surgery, for example). Even if you put these elite cytologists in the same room and have them confer, you'll still get up to 5% where they can't agree. The scariest part is that they've even done the experiment where they take the same slide and send it to the same cytologist a month later... you'll get a different answer some proportion of the time.

A lot of these cases are *hard*, and there really isn't a clear right answer. Based on the current state of AI, I don't think that an algorithm is going to be much better. Even knowing what I know about inconsistency, I'd rather send a biopsy sample for one of my patients to one of those elite cytologists than trust in a black box.

This isn't a case for the superiority of humans to machines, though. If two elite cytologists can come up with different answers for the same slide, that means either that there isn't a clear right answer or that humans are incapable of telling the difference. A machine is just as capable as a human of guessing which of two equally probable answers is correct. I would only trust the elite cytologist more if I thought that he more consistently came up with the objectively correct answer than a machine. When that confidence goes, so does the profession.
 
I'd really like to see a machine code an operative patient who randomly goes into V-fib on the operating table. Or care for them post-op in the ICU. Rads and Path, maybe. Less likely in anesthesia. People forget that anesthesiologists aren't just putting you to sleep. They're keeping you alive.
 
  • Like
Reactions: 1 user
"Second, machine learning will displace much of the work of radiologists and anatomical pathologists. These physicians focus largely on interpreting digitized images, which can easily be fed directly to algorithms instead." said a person who probably hasn't looked though a microscope since med school. No anatomic pathologists I know uses digital images for diagnosis. We still use glass slides.

If digital slide AI worked, hospitals would get a tech to run it and pay for the equipment upgrade to get the overall cost savings from eliminating several physicians.

Yeah, this is true. Instead, we just outsource it. No need to invent technology for pattern recognition when you can pay someone halfway around the world a fraction of the cost.

Who's Reading Your X-Ray?

"any Indian radiologist reading scans from Massachusetts General would have to be licensed in that state and be certified by the hospital, so patient care would not suffer."
"A big obstacle to such services' growth is the requirement of most American states that radiologists be licensed in order to analyze scans of patients treated in those states. Moreover, radiologists need to have credentials at each hospital where they practice. As a result, it takes time and administrative work to set up each new account."
"Wipro's radiologists are not licensed in any state or approved by any hospital, Mr. Kurien said. That makes them ineligible, by themselves, to do even preliminary readings for American hospitals. Instead, he said, they receive scans electronically and provide interpretations to Wipro-employed licensed radiologists in the United States, who in turn consult with the client radiologist."

What exactly are the rules on being able to practice telemedicine, particularly for being allowed to do preliminary vs. final reads?
 
Lol imagine if the machine interprets the EKG as V fib/tach when in reality it's just artifact from the surgeon....

And imagine the machine emptying the Foley bag... or estimating blood loss from the amount pouring off the drapes... And what is the machine going to do? Have every medication stored inside it, and many units of every type of blood product?
 
  • Like
Reactions: 1 user
This isn't a case for the superiority of humans to machines, though. If two elite cytologists can come up with different answers for the same slide, that means either that there isn't a clear right answer or that humans are incapable of telling the difference. A machine is just as capable as a human of guessing which of two equally probable answers is correct. I would only trust the elite cytologist more if I thought that he more consistently came up with the objectively correct answer than a machine. When that confidence goes, so does the profession.
The problem is the first time the machine calls it incorrectly and the patient has an adverse outcome, who exactly is going to get sued?

That is, a pathologist might have made the same mistake. But people are much more understanding of that.
 
I see that future corporate leeches are already at work, plotting to exploit cheap medical labor from developing countries in order to have those services delivered to the American people at depressed prices to benefit the top few.

It's good that US physicians are aware of these schemes. The best defense is a good offense, and offense can only come when you're aware of the opposition's schemes beforehand.
 
Radiologists are already using AI and computer-aided detection. It helps to remove the more mundane aspects of the job so they can focus on describing the primary disease finding and consulting. I want to be a radiologist, so I hope AI continues to go this way for radiology. I don’t have any objective data to support assertions about AI having the ability to replace certain physicians in 20 vs 50 years or whenever...can this even be predicted? I think a lot of other stuff will be automated before diagnosis—for example, how about fingerprint scanners on the workstations instead of having to type passwords every minute?
 
The problem is the first time the machine calls it incorrectly and the patient has an adverse outcome, who exactly is going to get sued?

This is a great point. Think about surgeries where there is some implantable device, like lap bands or synthetic urethral slings. As soon as somebody figures out you can sue the billion-dollar medical equipment company, lawyers start sniffing out every person who's ever had a complication and getting them to sue. The same isn't done for all the people with horrid complications of Roux-en-Y bypass or autologous slings, because there's not that huge pot to milk. As soon as one machine makes an error, somebody is going to prove in court that the company had some liability, and suddenly every patient who dies or has a complication that was managed by the machine will be taking the company to court, whether or not the overall complication rate is actually better or worse.
 
If there is no objective basis to your findings then you are making **** up.

If there is an objective basis to your findings then a machine can, eventually, be taught to do what you do.

I agree that Zeke doesn't have the slightest f-ing clue when that will happen, or which fields it will happen to first.

I disagree with the above posters, who think it can't happen in 20 years. In our generation we've seen dozens of technologies go from science fiction to commonplace in less than a decade.

But most of all I disagree with anyone who thinks it can't happen to their profession at all. Any profession can become suddenly obsolete because of technology, and no one is invulnerable. It's a strong argument for developing a wide range of marketable skills.

Idk, tell a psychotic patient with delusions that machines are monitoring them that they'll be getting their meds or psychotherapy from a piece of technology and see how well that goes...
 
A lot of nay-saying regarding the ability of machines vs. actually thinking about how to adapt. I'm sure truck drivers and car drivers are saying the same thing. If a computer can drive a car safely in the real world, I have no doubt in my mind a computer can probably read an x-ray or a CT safely as well. It's really not a question of if, it's a question of when. I mean, we can all attack the messenger, but in reality we should probably be looking at ways to adapt to the coming changes and look out for our own. Easy pathways to retrain physicians who may be displaced are probably a good start. We can delay as well, but that only works for so long. I personally wanted to be in radiology for a long time, but AI developments have modified my goals. Rads and Path are probably the first ones to be drastically impacted, with surgical specialties being safe for the longest time.
 
Idk, tell a psychotic patient with delusions that machines are monitoring them that they'll be getting their meds or psychotherapy from a piece of technology and see how well that goes...

Not that people aren't trying to do machine-run risk assessment in Psych. REACHVET, which the VA has been rolling out, has received a ton of (mostly good) publicity. In practice it's been great for telling us which patients have recently been admitted. :rolleyes:

Also, as a Chicagoan, this is me making a wanking gesture whenever the Emanuel family is mentioned.
 
  • Like
Reactions: 1 user
A spreadsheet of the probability of computerization of 702 jobs from a 2013 paper:



Physicians and surgeons are ranked as the 15th least probable, at 0.42%. Not sure what the timeline is. Perhaps by the time physicians/surgeons are computerized, every other job will be computerized as well?

Heck, even a job as simple as working as a cashier hasn't been widely automated yet. I think we've got time.
 
  • Like
Reactions: 1 user