Google Artificial Intelligence Pioneer Says Close Radiology Programs

Will deep learning artificial intelligence replace radiologists within 5 years?

  • yes

    Votes: 7 10.3%
  • no

    Votes: 61 89.7%

  • Total voters
    68

jkdoctor

Probationary Status
5+ Year Member
Apr 29, 2013
990
925
Status
Fellow [Any Field]
Dr. Hinton speaks about radiology:

There are some estimates that five percent of all AI talent within the private sector is currently employed by Google. Perhaps no one among that rich talent pool has as deep a set of perspectives as Geoff Hinton. He has been involved in AI research since the early 1970s, which means he got involved before the field was really defined. He also did so before the confluence of talent, capital, bandwidth, and unstructured data in need of structuring came together to put AI at the center of the innovation roadmap in Silicon Valley and beyond.

A British-born academic, Hinton is considered a pioneer in the branch of machine learning referred to as deep learning. As he mentions in my extended interview with him, we are on the cusp of some transformative innovation in the field of AI, and as someone who splits his time between Google and his post at the University of Toronto, he personifies the value at the intersection of the theory and the practice of AI.

Deep Learning Pioneer Geoff Hinton Helps Shape Google's Drive To Put AI Everywhere
 

maxxor

10+ Year Member
Apr 11, 2009
868
643
Status
Attending Physician
Dr. Hinton speaks about radiology:

[…] Hinton is considered a pioneer in the branch of machine learning referred to as deep learning. […] we are on the cusp of some transformative innovation in the field of AI […]

Deep Learning Pioneer Geoff Hinton Helps Shape Google's Drive To Put AI Everywhere
Notice how all the self-driving car stories have been walked back in the past few months?
 
  • Like
Reactions: Puff-of-Snow Sign

808s&heartbreak

2+ Year Member
Mar 24, 2017
27
21
Status
Resident [Any Field]
...and Steve Jobs thought he could cure his (actually curable) pancreatic cancer with homeopathy, and Linus Pauling, who won two Nobel Prizes, thought he could cure diseases with megadoses of vitamin C. The point is, just because you are good at one field doesn't mean you can accurately predict other fields. He said that more than a year ago; we have four more years to go. The clock is ticking. Just so you know, nothing has changed.

He is brilliant for sure, but his statement about radiology is a major blunder. He doesn't understand the extent of the job or the healthcare landscape. Hubris is the genius's worst enemy.

Most people who know radiology and healthcare delivery do not know machine learning, and vice versa. Do you know what Silicon Valley and Donald Trump have in common? Both think they can just come in and easily save healthcare, and both got nowhere. "Who knew healthcare is so complicated?" Answer: any doctor could have told you that.

As someone who knows both radiology and machine learning, I can confidently say it will not happen anytime soon. Machine learning has a niche in supporting radiologists, and I actually hope it fills it, but it will not replace the radiologist or any clinician. What I think and hope will happen in the future is a decreased number of radiologists needed, with radiologists serving as both information specialists and consultants to clinicians - an elite breed with diverse and specialized skills. That is what we ought to be. We are radiologists - we have to know everything.
 
  • Like
Reactions: fun8stuff
OP
J

jkdoctor

Probationary Status
5+ Year Member
Apr 29, 2013
990
925
Status
Fellow [Any Field]
Is IBM Ready to Dominate Radiology With AI?
AUGUST 19, 2017 BY NANALYZE

Advances in artificial intelligence (AI) are moving so fast that our team of MBAs can barely keep up. Everywhere you turn, startups are burning through venture capital funds in a mad rush to land-grab as much "big data" as possible, which will be used to produce the best artificial intelligence algorithms possible. Back in February of this year, we were all proud of ourselves for putting together what we thought at the time was a comprehensive list of "9 Artificial Intelligence Startups in Medical Imaging". Just 7 months later, we see that there are now more than 40 medical imaging startups developing AI algorithms for use in radiology.
That slide from Signify Research shows how we are already seeing the AI algorithms beginning to specialize in particular areas. We've written before about some of the better-funded companies on the list, and now we want to start looking at the rest of them. Before doing that, though, we wanted to think about what a winning company in the AI medical imaging space might look like. One thing we know for sure is that the best AI algorithms will be the ones with the best data. This means that each of these startups should have exclusive access to certain "big data" that will distinguish them from their 39 other competitors. The best startups will be spending money to secure access to data, and one good example of that comes not from a startup, but from a beaten-down value stock that pays a nice +4.29% yield.
Is IBM Ready to Dominate Radiology With AI? - Nanalyze
 
OP
J

jkdoctor

Probationary Status
5+ Year Member
Apr 29, 2013
990
925
Status
Fellow [Any Field]
Google’s A.I. Program Rattles Chinese Go Master as It Wins Match
By PAUL MOZUR | MAY 25, 2017

HONG KONG — It’s all over for humanity — at least in the game of Go.

For the second game in a row, a Google computer program called AlphaGo beat the world’s best player of what many consider the world’s most sophisticated board game. AlphaGo is scheduled to play its human opponent, the 19-year-old Chinese prodigy Ke Jie, one more time on Saturday in the best-of-three contest.

But with a score of 2-0 heading into that final game, and earlier victories against other opponents already on the books, AlphaGo has proved its superiority.

Discussing the contest afterward, Mr. Ke said a very human element got the better of him: his emotions. In the middle of the game, when he thought he might have had a chance at winning, he got too keyed up, he said.

“I was very excited. I could feel my heart bumping,” Mr. Ke said after the contest, which took place in Wuzhen, near Shanghai. “Maybe because I was too excited I made some stupid moves.”

“Maybe that’s the weakest part of human beings,” he added.

AlphaGo’s victory on Thursday simply reinforced the progress and power of artificial intelligence to handle specific but highly complex tasks. Because of the sheer number of possible moves in Go, computer scientists thought until recently that it would be a decade before a machine could play better than a human master.
Google’s A.I. Program Rattles Chinese Go Master as It Wins Match
 
  • Like
Reactions: AI Coming

hantah

7+ Year Member
May 10, 2012
11
16
Status
Resident [Any Field]
Dr. Hinton speaks about radiology:

[…] Hinton is considered a pioneer in the branch of machine learning referred to as deep learning. […] we are on the cusp of some transformative innovation in the field of AI […]

Deep Learning Pioneer Geoff Hinton Helps Shape Google's Drive To Put AI Everywhere
He is totally off base and has no clue about radiology. I write deep learning algorithms all the time for radiology problems and we are simply nowhere close. There's a reason why radiologists are MDs and not high school students trained to find image patterns. There are very specific niche problems in radiology that are amenable to replacement with AI, but these algorithms simply do not have the robustness of human eyes when faced with unexpected situations. Many problems in radiology (and medicine) are not solvable in data-driven ways, because some are driven by consensus opinions and some by very few examples of catastrophic cases that were emphasized in training. I would say that, unless there is another major breakthrough in both algorithms and data (size/quality), the replaceable components account for at most 10-20%.
 

Tiger100

2+ Year Member
Mar 11, 2017
192
164
Status
Attending Physician
Now, all joking aside, let's talk about AI.

- Will AI replace radiologists in the next 20, 50 or 100 years?

I don't think it will replace them, but whether it will decrease the demand for radiologists, nobody knows. It may happen.

BUT

If AI reaches that point, then our lives won't be the same. An AI capable of replacing a radiologist will be able to do a lot of other things, including replacing 90% of jobs in the US and many physician jobs.

Let me give you an example.
Let's say AI can diagnose a lung mass. Now why can't it biopsy it? Then, after a few years, why can't it do a lobectomy? Why can't it put in the orders for chemotherapy? Why can't it do an endoscopy or ERCP? If it can diagnose appendicitis, why can't it do an appendectomy? I'm not saying in the first stage, but it will be able to do it eventually.

The following is from Wikipedia; I read about it a while back:

According to CNN, there was a recent study by surgeons at the Children's National Medical Center in Washington which successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.

What we do in radiology is more complicated than most of medicine. I can't predict the future. But if AI gets to the point that it can replace my job, it will definitely replace the surgeon's job, the dermatologist's job, the internist's job, and so on.
If AI can be that accurate, then a dermatology service can easily be run by a PA and an AI machine that can look at skin lesions and make diagnoses. An oncology service can be run by a PA and an AI algorithm that can order the chemotherapy and even give medication for its complications. And more importantly, 90% of Americans will be jobless.

In summary, don't worry about it.
 

medgator

Senior Member
Lifetime Donor
15+ Year Member
Sep 20, 2004
5,633
2,458
Status
Attending Physician
AI can't be sued... A radiologist will still need to sign off at the end of the day
 

Tiger100

2+ Year Member
Mar 11, 2017
192
164
Status
Attending Physician
So is Google going to be placing my central lines in IR?
That's the point. If AI can interpret a neck CTA or a carotid or jugular vein ultrasound, then it can also be attached to a robot that can stick a needle into the vein, pass a wire, pass a catheter, and then stitch it up.

And if AI is capable of that, it is probably capable of replacing 90% of the jobs on the planet. Rather than referring the patient to a specialist, a PA can just use AI to get the answer.

Humans may reach that point sometime in the future. But our lives will be something totally different.
 

Mad Jack

Critically Caring
5+ Year Member
Jul 27, 2013
35,666
65,418
4th Dimension
That's the point. If AI can interpret a neck CTA or a carotid or jugular vein ultrasound, then it can also be attached to a robot that can stick a needle into the vein, pass a wire, pass a catheter, and then stitch it up.

And if AI is capable of that, it is probably capable of replacing 90% of the jobs on the planet. Rather than referring the patient to a specialist, a PA can just use AI to get the answer.

Humans may reach that point sometime in the future. But our lives will be something totally different.
Functional robotics at that level is pretty far off. Like, if you watch the video of that robot doing the intestinal work, it's slow and requires a lot of assistance, in addition to only doing one very small part of the operation. It's like saying I can train my cat to fetch, therefore cats will soon be able to run a McDonald's.
 

Tiger100

2+ Year Member
Mar 11, 2017
192
164
Status
Attending Physician
Functional robotics at that level is pretty far off. Like, if you watch the video of that robot doing the intestinal work, it's slow and requires a lot of assistance, in addition to only doing one very small part of the operation. It's like saying I can train my cat to fetch, therefore cats will soon be able to run a McDonald's.
I think we are both talking about the same thing.

All of these automated systems are very primitive. If we reach a point where they can do a very complex, highly intellectual task like interpreting a CT, then they will be capable of doing many things that we cannot even think of right now, including running a McDonald's.

My whole point is that I don't even think about it.
 
  • Like
Reactions: Mad Jack

redoitall

7+ Year Member
Mar 30, 2012
395
61
The Big State, USA
Status
Medical Student
I am an intern going into rads. I used to develop text data mining algos and published in as big a journal as it gets. Although it would seem easy enough to make a computer read and recognize the meaning of a sentence, let me assure you that computers are simply incapable of doing so. Sure, they can do the basic things, but when things are implied or suggested and so on, computers are nowhere near what humans can do.
And figuring out what a picture says? That is at an entirely different level.
As an intern, occasionally, we see radiological findings that are suspicious. I mean, we can pick them up. But when these findings have implications for management, there is this strong feeling and urge to pick up the phone, call the radiologist to double-check, and ask questions. I don't think I will be calling a computer anytime soon...
That said, I don't deny rads will change in the near future. I mean, 15 years ago, films were still thrown on light walls to be read. Go figure what rads will be 15 years from now. It is certainly a field that offers many opportunities for technological development, and radiologists will have to adapt. They have done so nicely so far. Would you go back to the time when you read films on a light wall???
Things will change, but this is more exciting than worrisome, at least for me.
 

DrfluffyMD

Membership Revoked
Removed
2+ Year Member
Dec 15, 2016
1,454
1,489
Status
Resident [Any Field]
Most of the OP's posts belong in the DO and pathology forums. What brought you here? Application time?
 

Dr. Mantis Tobogan

You can take it from me, because I'm a doctor.
Jul 27, 2017
15
7
Philadelphia, PA
Status
Medical Student
Most of the OP's posts belong in the DO and pathology forums. What brought you here? Application time?
If I remember correctly, OP is a doomsdayer over in the DO forums, and I can only imagine that's what he is in the path forum.

The common theme leads me to think he's Chicken Little.
 
  • Like
Reactions: Spikebd

johnnydrama

I'm no Superman
10+ Year Member
Jun 14, 2006
12,987
5,346
Status
Watson isn't a threat, it's a PR campaign. IBM made the mistake of over-promising and under-delivering on AI.

The real stuff is mainly coming from Google and Facebook.
 

Naijaba

10+ Year Member
Apr 2, 2007
1,057
108
Status
Pre-Medical
Oh, another AI thread; I get to pontificate again. ;)

Watson isn't a threat, it's a PR campaign. IBM made the mistake of over-promising and under-delivering on AI.

The real stuff is mainly coming from Google and Facebook.
Absolutely right, Watson was a marketing ploy.

He is totally off base and has no clue about radiology. I write deep learning algorithms all the time for radiology problems and we are simply nowhere close. There's a reason why radiologists are MDs and not high school students trained to find image patterns. There are very specific niche problems in radiology that are amenable to replacement with AI, but these algorithms simply do not have the robustness of human eyes when faced with unexpected situations. Many problems in radiology (and medicine) are not solvable in data-driven ways, because some are driven by consensus opinions and some by very few examples of catastrophic cases that were emphasized in training. I would say that, unless there is another major breakthrough in both algorithms and data (size/quality), the replaceable components account for at most 10-20%.
I disagree. I'm just an R1, but there aren't many MDs who have undergraduate and master's degrees in computer science; hopefully my advice has some merit.

There's a misconception amongst radiologists about how Google/Facebook/Baidu are attacking this problem. Sure, the short-term goal is simple high-level diagnoses (pneumothorax, pneumonia, rib fracture, etc.). This is very much possible with the current state of technology, but it is not the product that Google would release. There's disbelief that deep learning can do more than object detection. But allow me to play devil's advocate...

Suppose you have a general surgery patient who has an abscess, and the referring general surgeon wants to know if there's a window to aspirate that abscess. This scenario happened yesterday at my hospital.

Google will make an AI that answers the referring physician's questions.

That's what I believe they're going to do. It'll be a cognitive search-engine for referring physicians. The referring physician provides the image and the question. I've only been a prelim-surg intern for 3 months, but I've heard the same questions asked over and over:

Is there a residual pneumothorax? Can we remove the chest tube?
The patient has a post-operative fever, rule out pneumonia.
The patient has sudden-onset shortness of breath, rule out PE.
Does the patient have active extravasation?
What is this thing [clicks on screen]?
Is this abscess cavity [clicks on screen] in communication with this abscess cavity [clicks different location on screen]?


I don't think that the human radiologist will be replaced overnight, but I think such a tool would reduce the dependence of referring physicians on the radiologist's report.

Brief tangent: Google's search index is in excess of 30 trillion webpages. How on earth does it search so many pages in a fraction of a second? The secret is a global ordering across possible results. If you have an ordered set of N websites, it takes only about log2(N) = log2(30,000,000,000,000) ≈ 45 comparisons to binary-search your way to any result. Google spends all of its money (and computing power) calculating that global order. The computational cost of searching the sorted index is minimal compared to the cost of calculating the index itself. Right now Google is in the process of building such an index across radiology reports and images.
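
To make that arithmetic concrete, here is a minimal Python sketch of binary search over a sorted index (toy terms of my own; obviously not Google's actual pipeline):

Code:
import bisect
import math

# The expensive, offline step: build the sorted index once.
index = sorted(["aorta", "effusion", "fracture", "nodule", "pneumothorax"])

def in_index(term):
    # Binary search: about log2(N) comparisons to locate any entry.
    i = bisect.bisect_left(index, term)
    return i < len(index) and index[i] == term

print(in_index("nodule"))        # True
print(in_index("cardiomegaly"))  # False

# For N = 30 trillion entries, that is only about 45 comparisons:
print(math.ceil(math.log2(30_000_000_000_000)))  # 45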

Edit: I didn't mean to call out hantah so explicitly, but Google operates at such a different scale compared to what we can do on our desktop PCs in terms of deep learning. They use some 1 million servers to perform machine learning...

Edit 2: There's always a legal piece to this conversation (e.g. can you ever remove the radiologist from the equation?), but consider this: Suppose Google releases such a search engine for free, and the referring physician doesn't use it, will they not be criticized in court for not at least trying such a readily available tool?
 
Last edited:

Tiger100

2+ Year Member
Mar 11, 2017
192
164
Status
Attending Physician
Oh, another AI thread; I get to pontificate again. ;)

[…] Google will make an AI that answers the referring physician's questions. […] I've only been a prelim-surg intern for 3 months, but I've heard the same questions asked over and over […] I don't think that the human radiologist will be replaced overnight, but I think such a tool would reduce the dependence of referring physicians on the radiologist's report. […]
I think your statement is somewhat of an oversimplification.

I read CTAs for PE many times a day, especially from the ED and inpatient services, and you are right: all of them come with the same question, "rule out PE."

Do you know what the results are? Most of the time, negative. BUT most of these patients have some other pathology that can account for the symptoms, including but not limited to aortic dissection, pneumonia, pneumothorax, large pericardial effusion, rib fracture, osseous metastases, or even cholecystitis. Some other patients have pathologies that may not be the cause of the symptoms but are very important, like lung cancer, mediastinal lymphadenopathy, adrenal nodules, thyroid nodules, aortic aneurysm, or axillary lymphadenopathy. And at least 2-3 times during my career I have diagnosed breast cancer on a chest CT.

From my understanding, deep learning can be good at answering zero-or-one questions: whether a certain pathology is present or not. But even in that case, most CTAs are suboptimal. It is ultimately the radiologist's judgment to call something PE, call it artifact, or call it a limited study. Let's be honest: some referring physicians are good at looking for one pathology. In fact, a surgeon does not need my opinion to see whether an abscess can be drained or not. He is as good. But the same surgeon easily misses the osteomyelitis of the lumbar spine or the bone metastasis sitting next to the abscess. The main role of a radiologist is to put all the information together into a comprehensive report.

Now, I'm not saying that sometime in the future, maybe 50 or 100 years from now (nobody knows), deep learning cannot get to the point of multitasking and being as good as a radiologist. But a system able to do such a highly intellectual task will be able to do many, many things beyond our imagination. It will be able to replace 90% of the jobs in the US, and many things that doctors do right now will be done by such a highly intelligent system and a PA.
 

DrfluffyMD

Membership Revoked
Removed
2+ Year Member
Dec 15, 2016
1,454
1,489
Status
Resident [Any Field]
Oh, another AI thread; I get to pontificate again. ;)

[…] Google will make an AI that answers the referring physician's questions. […] I've only been a prelim-surg intern for 3 months, but I've heard the same questions asked over and over […] I don't think that the human radiologist will be replaced overnight, but I think such a tool would reduce the dependence of referring physicians on the radiologist's report. […]
Well, I would ask you to reconsider this whole AI-answering-clinical-questions business once you progress further in training. While the questions may seem simple now, it's rarely that simple in practice.

A computer algorithm gives you probability weightings, which, in the absence of clinical interpretation, are meaningless. Furthermore, the output is only as good as the input, and AI is notoriously bad at working with bad inputs.

For example, I can learn how to recognize a fibroadenoma of the breast, or any other new lesion, if you show me ONE case of it on Radiopaedia. An AI system will not be nearly as good if it only gets to see one sample case.

Also, there are nuances of interactions between patients and physicians, as well as physicians and physicians, that require a human touch.

Let's use a personal example. I remember when you wrote an IR interview experience blog, you put down a very specific description of a certain IR PD, alluding to her physical appearance. Within the same blog, you also put down the dates when you rotated at this program, making you eminently identifiable (not to me, but presumably to her and everywhere else you rotated).

The combination of those two details in your blog may not be detectable, or even seem related, to an AI system at all, yet I (not an AI) was immediately able to write to you about the inappropriateness of the description, hopefully saving you from being blacklisted by all California IR fellowships, since you have since edited that blog. I don't think a computer program right now can calculate the likelihood of you accidentally creeping out that PD, the likelihood of people taking action against your future candidacy over that episode, etc. At least, perhaps not in a form that is immediately actionable, because human beings are inherently unpredictable in small numbers (think back to the Foundation series by Asimov), and both radiology and residency/fellowship applications deal with sufficiently small slices of humanity (zebra cases and a few dozen PDs) that AI cannot be relied upon to make accurate "predictions".
 

harambe4ever

2+ Year Member
Feb 14, 2017
85
58
Status
Medical Student
I still don't understand why Naijaba would match into radiology if he thinks the robots are going to replace his own occupation before they replace taxi drivers, factory workers, burger flippers, engineers, lawyers, and even arrogant AI programmers themselves.
 

DrfluffyMD

Membership Revoked
Removed
2+ Year Member
Dec 15, 2016
1,454
1,489
Status
Resident [Any Field]
I still don't understand why Naijaba would match into radiology if he thinks the robots are going to replace his own occupation before they replace taxi drivers, factory workers, burger flippers, engineers, lawyers, and even arrogant AI programmers themselves.
He was pretty gung-ho about IR but ended up matching at a DR program with strong IR.

There is someone else I know with an extensive computer science background who was hell-bent on IR. It turns out some of those folks worry about an AI takeover of DR and seek out IR as a refuge.

I personally find that distasteful and would rather have surgically/clinically minded colleagues and trainees than AI/comp-sci geeks who decided to enter my field only because they are fearful for the continued existence of DR.
 

johnnydrama

I'm no Superman
10+ Year Member
Jun 14, 2006
12,987
5,346
Status
Well, I would ask you to reconsider this whole AI-answering-clinical-questions business once you progress further in training. While the questions may seem simple now, it's rarely that simple in practice.

A computer algorithm gives you probability weightings, which, in the absence of clinical interpretation, are meaningless. Furthermore, the output is only as good as the input, and AI is notoriously bad at working with bad inputs.

For example, I can learn how to recognize a fibroadenoma of the breast, or any other new lesion, if you show me ONE case of it on Radiopaedia. An AI system will not be nearly as good if it only gets to see one sample case.
Here's the thing - you don't really know how to diagnose a fibroadenoma because of that single Radiopaedia entry.

Rather, you are able to process that Radiopaedia entry in the context of all of your other radiology knowledge to create a usable rule set for future cases.

Likewise, you could train your classifier on a dataset of breast images without fibroadenomas and use what it learned from that dataset to quickly create a fibroadenoma classifier from limited training samples.

Non-medical example:
https://arxiv.org/abs/1606.04080
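
To illustrate the idea, here is a rough PyTorch sketch in the spirit of that paper (the file names and single-example setup are hypothetical, and it assumes a recent torchvision; it shows the flavor of the approach, not the paper's exact method): embed images with a network pretrained on other data, then score a new case by similarity to the one labeled example.

Code:
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# The pretrained backbone stands in for "all of your other radiology knowledge".
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier; keep 512-d features
backbone.eval()

prep = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def embed(path):
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(x), dim=1)

# One labeled example plays the role of the single Radiopaedia entry...
prototype = embed("fibroadenoma_example.png")   # hypothetical file
# ...and a new case is scored by cosine similarity to that prototype.
query = embed("new_case.png")                   # hypothetical file
print(float(prototype @ query.T))  # higher = more fibroadenoma-like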
 
OP
J

jkdoctor

Probationary Status
5+ Year Member
Apr 29, 2013
990
925
Status
Fellow [Any Field]
Scanning The Future, Radiologists See Their Jobs At Risk
September 4, 2017 4:50 PM ET

In health care, you could say radiologists have typically had a pretty sweet deal. They make, on average, around $400,000 a year — nearly double what a family doctor makes — and often have less grueling hours. But if you talk with radiologists in training at the University of California, San Francisco, it quickly becomes clear that the once-certain golden path is no longer so secure.
"The biggest concern is that we could be replaced by machines," says Phelps Kelley, a fourth-year radiology fellow. He's sitting inside a dimly lit reading room, looking at digital images from the CT scan of a patient's chest, trying to figure out why the patient is short of breath.

Dr. Bob Wachter, an internist at UCSF and author of The Digital Doctor, says radiology is particularly amenable to takeover by artificial intelligence like machine learning.
"Radiology, at its core, is now a human being, based on learning and his or her own experience, looking at a collection of digital dots and a digital pattern and saying 'That pattern looks like cancer or looks like tuberculosis or looks like pneumonia,' " he says. "Computers are awfully good at seeing patterns."
Just think about how Facebook software can identify your face in a group photo, or Google's can recognize a stop sign. Big tech companies are betting the same machine learning process — training a computer by feeding it thousands of images — could make it possible for an algorithm to diagnose heart disease or strokes faster and cheaper than a human can.

Scanning The Future, Radiologists See Their Jobs At Risk

August 30, 2017 | PACS and Informatics, DI Executive

The University of Virginia Health System is currently conducting a trial into the efficacy of this software, produced by Carestream.

A study is performed on the CT scanner, the images go to the PACS, and a copy is sent to a separate server where the AI engine sits. It interprets the image and stores results. When the radiologist opens that study for interpretation, the PACS communicates with the third-party server and the AI results are delivered. An icon overlays on the PACS or desktop and lets you know the AI results are available. If you're reading three studies in a row, the first one might not have AI results, but the next case might display the icon. The icons are color-coded so you can understand them at a glance. A green icon means everything is fine -- there are no abnormalities. So, you don't have to spend but a fraction of a second recognizing the green color and going on with your workflow. Red means that the AI algorithm has produced abnormal results. Click on the icon, and you can see the list of findings that it has found. Then, you can make a decision about whether to include those findings in your report or not.

The algorithms collectively need time to interpret images. If you open a study immediately after it reaches PACS, the results won't be back. It takes about 5 or 6 minutes for the results to be recorded.

Q&A: Radiology Department Tests Artificial Intelligence | Diagnostic Imaging
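
For what it's worth, the icon logic described in that Q&A boils down to something like the following sketch (names invented; not Carestream's actual API): the viewer checks a results store and shows green for a clean study, red for findings, or nothing while the engine is still running.

Code:
from dataclasses import dataclass, field

@dataclass
class AIResult:
    findings: list = field(default_factory=list)

# Stand-in for the third-party AI server's results store; in the real
# deployment this would be a network call, populated ~5-6 minutes post-scan.
RESULTS = {}

def ai_icon(study_id):
    result = RESULTS.get(study_id)
    if result is None:
        return None                        # still processing: no icon yet
    return "red" if result.findings else "green"

RESULTS["CT123"] = AIResult(findings=["possible pneumothorax"])
print(ai_icon("CT123"))   # red: click through to review the AI findings
print(ai_icon("CT999"))   # None: study opened too soon after acquisition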
 
Last edited:

DrfluffyMD

Membership Revoked
Removed
2+ Year Member
Dec 15, 2016
1,454
1,489
Status
Resident [Any Field]
Funny how these threads always roll around at ERAS time.
 

laricb

7+ Year Member
Aug 18, 2012
489
85
NYC
Status
Resident [Any Field]
That's the point. If AI can interpret a neck CTA or a carotid or jugular vein ultrasound, then it can also be attached to a robot that can stick a needle into the vein, pass a wire, pass a catheter, and then stitch it up.

And if AI is capable of that, it is probably capable of replacing 90% of the jobs on the planet. Rather than referring the patient to a specialist, a PA can just use AI to get the answer.

Humans may reach that point sometime in the future. But our lives will be something totally different.
So we can all stay home and the government will pay our bills. Bill Gates said it best: we need to start taxing companies that use AI to avoid this problem.
 

Mad Jack

Critically Caring
5+ Year Member
Jul 27, 2013
35,666
65,418
4th Dimension
There already is a machine for automated peripheral IVs and venipunctures.

Central lines are far more complicated than a simple blood draw. They require a degree of precision and dexterity that machines likely won't be able to achieve independently, even in a lab, for another two decades or longer.
 

qxrt

5+ Year Member
Apr 2, 2013
221
183
Status
Attending Physician
I see articles about how AI will take over radiology the same way I see articles about how scientists have discovered the next "cure for cancer." As long as it sells papers, journalists will keep writing about them. I ignore them.
 
  • Like
Reactions: Cyal and PlutoBoy

PlutoBoy

Sic transit gloria mundi
Removed
7+ Year Member
Nov 19, 2009
15,505
10,054
Central lines are far more complicated than a simple blood draw. They require a degree of precision and dexterity that machines likely won't be able to achieve independently, even in a lab, for another two decades or longer.
Yeah. I don't think machines will replace physicians. Venous access may be automated at some point though.
 
Jun 19, 2016
58
27
Oh, another AI thread; I get to pontificate again. ;)

[…] Google will make an AI that answers the referring physician's questions. […] I've only been a prelim-surg intern for 3 months, but I've heard the same questions asked over and over […] I don't think that the human radiologist will be replaced overnight, but I think such a tool would reduce the dependence of referring physicians on the radiologist's report. […]
Let's say you are right and that deep learning will replace or significantly reduce the demand for diagnostic radiologists. It's clear you believe this, and you may in fact be right. How do you plan to make a living in radiology then? Why would you pursue it? The contradiction in your decisions makes anyone doubt the veracity of anything you say.

Also, FYI: even as an intern I had a very simplified view of what radiologists do (similar to what a med student thinks the point of a radiologist is). Once you start doing radiology, you will realize the real value proposition of radiology is very different from what you think - in some ways better, in some ways worse, but it's very different, and that is the value proposition you would have to replace with deep learning.
 
  • Like
Reactions: PlutoBoy

CharlieBillings

2+ Year Member
Jan 11, 2015
500
603
Status
Medical Student
Why are EKGs so much more complicated than radiology scans? I mean, if all of radiology is about to be machines in 5-10 years, why do machines still have issues with EKGs?

From my uneducated standpoint, I feel like EKGs contain less information than, say, an MRI, and should be easier.
 
  • Like
Reactions: Fab5Hill33

808s&heartbreak

2+ Year Member
Mar 24, 2017
27
21
Status
Resident [Any Field]
Oh, another AI thread; I get to pontificate again. ;)

[…] Google will make an AI that answers the referring physician's questions. […] I've only been a prelim-surg intern for 3 months, but I've heard the same questions asked over and over […] I don't think that the human radiologist will be replaced overnight, but I think such a tool would reduce the dependence of referring physicians on the radiologist's report. […]
Disagree. By the way, there are lots of people with tech backgrounds in radiology. Just take a look at the top residencies - lots of residents and attendings have engineering backgrounds up to the PhD level. As long as one is technically minded, it's not hard to learn machine learning and, more specifically, CNNs - Coursera and Udacity have courses by AI experts, including the infamous Geoffrey Hinton. Having bachelor's and master's degrees in CS doesn't really mean much - knowing data structures and discrete math has very little to do with AI. The background in AI can easily be obtained - Udacity has nanodegrees in both machine learning and artificial intelligence, taught by Sebastian Thrun.

In regards to your examples:

Regarding the abscess: you cannot definitively tell whether a fluid-attenuating lesion is an abscess or a cyst. You need to incorporate history. Fever/pain/leukocytosis --> increased pretest probability of abscess. Recent instrumentation and the patient is currently asymptomatic? More likely a lymphocele or something.

Is there a residual pneumothorax? Can we remove the chest tube?
- Deep learning may tell you pneumo or no pneumo. But suppose the patient has had a chest tube pulled multiple times and redeveloped a pneumo each time. Do you pull the tube, wait another day, or perhaps do a pleurodesis? The prior films are the radiologist's best friend. This is just one example, but in medicine there are many exceptions that still require human judgment.


The patient has a post-operative fever, rule out pneumonia.
- Pneumonia and atelectasis can look similar. Radiologists are Bayesian thinkers... it's not 1 or 0. It's more like: well, this patient is intubated and the consolidation is in the right lower lobe, so it's more likely aspiration than pneumonia. If the patient had a right upper lobe wedge resection and he's not breathing much, and the consolidation is next to the resection, would you call pneumonia or atelectasis (in a febrile vs. non-febrile patient)? Hard to call...

The patient has sudden-onset shortness of breath, rule out PE.
- OK. Suppose the AI can definitively say no PE. But wait... is that a widened mediastinum? What's the clinical history? If the patient developed sudden-onset shortness of breath in the setting of trauma, the widened mediastinum could be 2/2 aortic dissection. But if there is a prior film showing chronic widening of the mediastinum, then aortic dissection moves lower on the differential. So, what do you want to recommend next? Well, it depends on clinical history and prior studies, right?

And so on and so forth. Different diseases can present the same, and the same disease can present differently. Unless there is a general medical intelligence incorporating clinical history and inter-modality comparisons (humans can compare a prior ultrasound to a current CT; deep learning cannot), radiology cannot be automated. Not to mention all the procedures that radiologists do.
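
To put the "Bayesian thinkers" point in numbers, here is a toy Python sketch (the pretest probabilities and likelihood ratio are made up, purely for illustration): the same imaging finding yields very different posttest probabilities under different clinical priors.

Code:
def posttest_prob(pretest_prob, likelihood_ratio):
    # Convert probability to odds, apply the likelihood ratio, convert back.
    odds = pretest_prob / (1 - pretest_prob)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Same consolidation (assumed LR of 6), two different patients:
print(posttest_prob(0.40, 6.0))  # intubated post-op patient: ~0.80
print(posttest_prob(0.05, 6.0))  # healthy outpatient: ~0.24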

For the record, though, AI can be helpful to radiologists (for example, measuring lung nodules....it's so imprecise and time-consuming, I would love for that to be automated) and we should welcome it.


My reason for all these explanations is so future applicants do not get dissuaded from radiology by all the misinformation. Radiology remains the best damn deal in medicine.
 

Lawper

Cat-box cycle
Gold Donor
5+ Year Member
SDN Ambassador
Jun 17, 2014
38,981
113,106
space chat
forums.studentdoctor.net
Dr. Hinton speaks about radiology:

[…] Hinton is considered a pioneer in the branch of machine learning referred to as deep learning. […] we are on the cusp of some transformative innovation in the field of AI […]

Deep Learning Pioneer Geoff Hinton Helps Shape Google's Drive To Put AI Everywhere
1. Hinton isn't a radiologist. Just because he's a pioneer in deep learning doesn't mean he can understand and predict healthcare trends.

2. This situation is related to the following

 

Naijaba

10+ Year Member
Apr 2, 2007
1,057
108
Status
Pre-Medical
Why are EKGs so much more complicated than radiology scans? I mean if all of radiology is about to be machines in 5-10 years why do machines still have issues with EKGs?

From my uneducated standpoint I feel like EKGs contain less information than say an MRI, and should be easier.
This is an age-old misconception. "If computers are able to read images, why can't they read EKGs?" They can; there's simply no money in it. Deep learning has been shown to be superior to cardiologists at reading EKGs, but there's no impact on a hospital's bottom line. Radiology, for better or worse, is highly lucrative and completely digital (well, not 100% digital; 808s&heartbreak brings up a good exception to this below). CT scans are typically reimbursed at a fixed rate, regardless of the time it takes to perform the read. Radiologists have aggressively taken advantage of this over the years. PACS? Check. Thin-client viewers? Check. Templates? Check. Dictaphones? Check. Hot-keyed keyboards and mice? Check. Standing desks? Check. Four monitors? Check. Practically everything is optimized around obtaining accurate reads as fast as possible. There are even some companies using machine learning to optimize the order in which reads are presented to the radiologist to minimize fatigue.

By the time we new residents graduate, machine learning will be in the reading room. I doubt it will be anywhere close to performing full reads, but it will be an essential tool for maintaining a high-efficiency practice. The prediction is that eventually practices will realize they need fewer radiologists to maintain the same volume, thanks to the speed improvements afforded by machine learning.

And so on and so forth. Different diseases can present the same, and the same disease can present differently. Unless there is a general medical intelligence incorporating clinical history and inter-modality comparisons (humans can compare a prior ultrasound to a current CT; deep learning cannot), radiology cannot be automated. Not to mention all the procedures that radiologists do.

For the record, though, AI can be helpful to radiologists (for example, measuring lung nodules....it's so imprecise and time-consuming, I would love for that to be automated) and we should welcome it.


My reason for all these explanations is so future applicants do not get dissuaded from radiology by all the misinformation. Radiology remains the best damn deal in medicine.
I agree with everything you've said. I don't know if we'll ever get to "general medical intelligence" but I think it's more likely now than ever. The developments are happening so rapidly, it's hard to keep pace.

For what it's worth, deep learning (particularly convolutional neural networks) is sort of doing what you're describing. Traditional machine learning only made decisions off of pre-defined features. For example, a facial recognition model would require eye color, hair length, space between eyes, jaw shape, etc. to be precisely defined in order to identify a specific person.

Conversely, a deep convolutional neural network can learn what features define you! That is, it learns what about your photos separates you from the millions of other people on Facebook. Many of these learned features are the same ones humans use to tell each other apart. I can't tell it's Bruce Wayne if I just see his chin under the mask:
[attached image]
but give me his eyes, nose, and mouth and I'll know who it is:
[attached image]

It's a comical example, but nonetheless compelling. Deep learning does this by learning filter functions (often convolutional matrices) that take the source image down to just the features of interest. Here is the famous picture from the Nature paper illustrating how this works:

[attached image]

Each of those (creepy?) faces at the bottom represents a unique composite image found by merging features from the layer above. Why do people like me feel that deep learning can perform radiology reads? If such a network can tell 100 million people apart by compositing features, surely such an algorithm exists for compositing radiology images.

It's true that such a system would not be able to integrate any other insight from outside resources (clinical notes, pathology results, prior studies, etc.), but it alone would be enough to increase efficiency in radiology practice. And that is enough to drive its adoption.
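
For the curious, here is a minimal PyTorch sketch of the structure being described (a toy network of my own, not the Nature paper's model): the convolutional filters are learned parameters, so nobody hand-specifies "jaw shape" or "space between eyes".

Code:
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(      # learned filter banks
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, n_classes)

    def forward(self, x):                   # x: (batch, 1, 224, 224)
        return self.head(self.features(x).flatten(1))

model = TinyCNN()
scan = torch.randn(1, 1, 224, 224)          # stand-in for a grayscale image
print(model(scan).shape)                     # torch.Size([1, 2])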
 

808s&heartbreak

2+ Year Member
Mar 24, 2017
27
21
Status
Resident [Any Field]
This is an age-old misconception. "If computers are able to read images, why can't they read EKGs?" They can, there's simply no money in it. Deep learning has shown to be superior to cardiologists in reading EKGs, but there's no impact on a hospital's bottom-line. Radiology, for better or worse, is highly lucrative and completely digital (well not 100% digital, 808s&theheartbreak brings up a good exception to this below). CT scans are typically reimbursed at a fixed rate, regardless of the time it takes to perform the read. Radiologists have aggressively taken advantage of this over the years. PACS? Check. Thin-client viewers? Check. Templates? Check. Dictaphones? Check. Hot-keyed keyboard and mice? Check. Standing-desks? Check. Four-monitors? Check. Practically everything is optimized around obtaining accurate reads as fast as possible. There are even some companies using machine learning to optimize the order in which reads are presented to the radiologist to minimize fatigue.

By the time we new residents graduate, machine learning will be in the reading room. I doubt it will be anywhere close to performing full reads, but it will be an essential tool to maintaining a high-efficiency practice. The prediction is that eventually practices will realize they need less radiologists to maintain the same volume thanks to the speed improvements afforded by machine learning.



I agree with everything you've said. I don't know if we'll ever get to "general medical intelligence" but I think it's more likely now than ever. The developments are happening so rapidly, it's hard to keep pace.

For what it's worth, deep learning (particularly convolutional neural networks) is sort of doing what you're describing. Traditional machine learning only made decisions off of pre-defined features. For example, a facial recognition model would require eye color, hair length, space between the eyes, jaw shape, etc. to be precisely defined in order to identify a specific person.

Conversely, a deep convolutional neural network can learn what features define you! That is, what about your photos separates you from the millions of other people on Facebook. Many of these learned features are the same ones humans use to tell each other apart. I can't tell it's Bruce Wayne if I just see his chin under the mask:
[image: Batman's cowl covering everything but the chin]
but give me his eyes, nose, and mouth and I'll know who it is:
[image: Bruce Wayne's eyes, nose, and mouth]

It's a comical example, but nonetheless compelling. Deep learning does this by learning the filter functions (often convolutional matrices) to take the source image down to just the features of interest. Here is the famous picture from the Nature paper illustrating how this works:

[image: feature-hierarchy figure from the Nature deep learning paper]

Each of those (creepy?) faces at the bottom represents a unique composite image found by merging features from the layer above. Why do people like me feel that deep learning can perform radiology reads? If such a network can tell 100 million people apart by compositing features, surely such an algorithm exists for compositing radiology images.

It's true that such a system would not be able to integrate any other insight from outside resources (clinical notes, pathology results, prior studies, etc.), but it alone would be enough to increase efficiency in radiology practice. And that is enough to drive its adoption.
Glad we are on the same page. I don't think general med AI will be here anytime soon, if ever. And if it were here, then there would be no need for doctors of any specialty.

Thanks for the share. I've been meaning to read the ImageNet/GoogLeNet/AlexNet papers. It is impressive, for sure. The best analogy I can give you for the difference in radiology is that there are people with Bruce Wayne's face who are not Bruce Wayne, and people who are actually Bruce Wayne without Bruce Wayne's face. Think of Bruce Wayne status post facial reconstruction/rhinoplasty/whatever: he is still Bruce Wayne even if he doesn't look like him. That's often what we have in radiology.

I think AI in radiology is a great thing and I fully welcome it. Whether or not it will make a substantial impact on the radiologist's workflow, I don't know. But I know it will for sure not replace radiologists. It may decrease the need for radiologists if it helps radiologists get more efficient, but even that I'm uncertain about. It will be kind of like EMR: it solves some problems and creates others. There are way too many practical barriers, which I will not go into.

By the way, during residency I've learned that radiology is really more about differential diagnosis and medical reasoning than it is about image recognition. You will see what I mean when you start. Image recognition is a prerequisite, but the value is in the other two. For different patients you would also put in different pertinent positives or negatives. The reports are actually highly tailored to each individual case. Here's a very simple example, and there are many more complicated ones: I don't call aortic calcification in an 80-year-old, but I do call it in an 18-year-old. In a 20-year-old with a small lesion that looks like a simple cyst, I say most likely simple cyst. In a 60-year-old with a small lesion that looks like a simple cyst, but who has a history of clear cell renal cell carcinoma on the same side near the current cyst, I would recommend an MRI for further evaluation. In abdominal imaging, clinicians come down to our reading room all the time for further consultation and recommendations on an already-known diagnosis: "How many degrees is the tumor surrounding the SMA?" "Can I take this Foley out?" etc.
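
To put that in terms the engineers here might appreciate, here is a deliberately crude sketch of why identical pixels can demand different reports; these rules are a made-up simplification of the cases above, not actual practice guidelines:

# Crude illustration: the same imaging finding yields different
# impressions depending on clinical context. Made-up rules simplifying
# the examples above; not practice guidelines.
def renal_lesion_impression(age: int, looks_like_simple_cyst: bool,
                            ipsilateral_rcc_history: bool) -> str:
    if not looks_like_simple_cyst:
        return "Indeterminate lesion; recommend further characterization."
    if ipsilateral_rcc_history:
        return "Cystic lesion; given RCC history on the same side, recommend MRI."
    if age <= 30:
        return "Most likely a simple cyst."
    return "Likely simple cyst; correlate clinically."

# Same pixels, different reports:
print(renal_lesion_impression(20, True, False))
print(renal_lesion_impression(60, True, True))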

TL;DR: Radiologists are safe, and AI in radiology may help the radiologist.
 

droliver

Moderator Emeritus
15+ Year Member
May 1, 2001
1,573
113
Status
Attending Physician
People are dismissing the pace of machine learning in diagnostic reads at their own peril. It's only a matter of computing power and a large image database before we reach a reliable diagnostic system. The computer science guys have a better perspective on the engineering here and the pace it's advancing than doctors, quite frankly. There are billions of dollars being plowed into this area.
 

Tiger100

2+ Year Member
Mar 11, 2017
192
164
Status
Attending Physician
People are dismissing the pace of machine learning in diagnostic reads at their own peril. It's only a matter of computing power and a large image database before we reach a reliable diagnostic system. The computer science guys have a better perspective on the engineering here and the pace it's advancing than doctors, quite frankly. There are billions of dollars being plowed into this area.

What does a plastic surgeon know about radiology or medicine in general?

The computer guys may have a better perspective on engineering, but they have a very WRONG perspective on the practice of medicine and what radiologists do.

Your statement is naive at best. It is like saying that it is just a matter of computer science and technology before robots put in breast implants.

Now look at the following:

IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close


Do a little search in the area. Just a year ago they claimed that it treated patients with 99% accuracy.

It is good to participate in the discussion, but it is not good to just say things. I checked your posts. Once in a while you make a troll post on the radiology board. What are your intentions?
 

qxrt

5+ Year Member
Apr 2, 2013
221
183
Status
Attending Physician
People are dismissing the pace of machine new learning at their own peril with diagnostic reads. It's only a matter of computer power and a large image database to reach a reliable diagnostic system. The computer science guys have a better perspective of the engineering here and the pace it's advancing then doctors quite frankly. There's billion s of dollars being plowed into this area.
And yet the funny thing is, no one ever mentions the fact that there are many radiologists who understand A.I. better than A.I. specialists/engineers understand radiology. I have yet to hear anyone who is actually a radiologist talk about how A.I. will take over radiology... because the only people who talk about this are non-radiologists. I can't imagine that every single radiologist is just biased and self-interested in making sure that radiology stays relevant as a medical specialty.
 
  • Like
Reactions: wegh

laricb

7+ Year Member
Aug 18, 2012
489
85
NYC
Status
Resident [Any Field]
And yet the funny thing is, no one ever mentions the fact that there are many radiologists who understand A.I. better than A.I. specialists/engineers understand radiology. I have yet to hear anyone who is actually a radiologist talk about how A.I. will take over radiology... because the only people who talk about this are non-radiologists. I can't imagine that every single radiologist is just biased and self-interested in making sure that radiology stays relevant as a medical specialty.
Let the AI kill a few patients and misdiagnose. Then, when the billion-dollar class-action lawsuits come in, the discussion will end.
 
  • Like
Reactions: wegh

DrfluffyMD

Membership Revoked
Removed
2+ Year Member
Dec 15, 2016
1,454
1,489
Status
Resident [Any Field]
Let the AI kill a few patients and misdiagnose. Then, when the billion-dollar class-action lawsuits come in, the discussion will end.
Yep. It's already being talked about. Tesla's Autopilot is already killing people.

Machine learning techniques can fail just as drastically, as in slamming into a stationary car.
 
OP

jkdoctor

Probationary Status
5+ Year Member
Apr 29, 2013
990
925
Status
Fellow [Any Field]
The horse is here to stay, but the automobile is only a novelty—a fad.
- Advice from a president of the Michigan Savings Bank to Henry Ford's lawyer Horace Rackham. Rackham ignored the advice and invested $5000 in Ford stock, selling it later for $12.5 million.

That the automobile has practically reached the limit of its development is suggested by the fact that during the past year no improvements of a radical nature have been introduced.
- Scientific American, Jan. 2, 1909.

1946: "Television won't be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night." — Darryl Zanuck, 20th Century Fox.

1981: “Cellular phones will absolutely not replace local wire systems.” — Marty Cooper, inventor.

1995: "I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse." — Robert Metcalfe, founder of 3Com.

2006: "Everyone's always asking me when Apple will come out with a cell phone. My answer is, 'Probably never.'" — David Pogue, The New York Times.

2007: “There’s no chance that the iPhone is going to get any significant market share.” — Steve Ballmer, Microsoft CEO.

In February, Ford announced it is investing $1 billion in Argo AI. The robotics company was created by former Google and Uber leaders. Ford plans to combine the expertise of Argo AI with its existing self-driving car efforts to have a “fully autonomous vehicle” coming in 2021. Ford will first use these vehicles for ride-hailing applications.
Ford Motor CEO Mark Fields told CNBC that Ford plans to have a “Level 4 vehicle in 2021, no gas pedal, no steering wheel, and the passenger will never need to take control of the vehicle in a predefined area.”

At the end of last year Honda announced it was in discussions with Waymo, an independent company of Alphabet Inc., to include Waymo self-driving technology in its vehicles.
Honda's long-stated goal is to have cars that can at least drive themselves on highways by 2020. That is when Tokyo will host the Summer Olympics, and Japan hopes to make it a showcase of its technological prowess.

Toyota has been one of the car companies most skeptical about autonomous vehicles, but in 2015 it made a big investment to catch up. Toyota is investing $1 billion over five years in the Toyota Research Institute to develop robotics and AI technology. Toyota hopes to launch products based on its Highway Teammate program in 2020. This would also be just in time for the Tokyo Olympics.

Unlike other big carmakers, GM has not laid out a specific timeline for its self-driving cars, but it has made clear it is moving aggressively. In December GM CEO Mary Barra wrote, “We expect to be the first high-volume auto manufacturer to build fully autonomous vehicles in a mass-production assembly plant.” The focus will be on ride-sharing first, over the individual buyer.
According to Reuters, GM is rumored to have plans to deploy thousands of self-driving electric cars next year with its ride-sharing affiliate Lyft Inc. GM spent $500 million to buy a 9 percent stake in Lyft as part of its strategy to create an integrated network of on-demand autonomous vehicles.

Renault-Nissan is counting on their new partnership with Microsoft to help advance their autonomous car efforts. Renault-Nissan plans to release 10 different self-driving cars by 2020.

Volvo CEO Hakan Samuelsson said in an interview, “It’s our ambition to have a car that can drive fully autonomously on the highway by 2021.” He envisions that full autopilot would be a highly enticing option on premium vehicles and at first would cost $10,000.

Hyundai senior research engineer Byungyong You told Drive, “We are targeting for the highway in 2020 and urban driving in 2030.” To achieve this goal Hyundai is investing 2 trillion won ($1.7 billion) and hiring over 3,000 employees for its self-driving car program.

Ola Källenius, Daimler’s new head of development, expects large-scale commercial production to take off between 2020 and 2025.

Fiat-Chrysler also teamed up with Waymo last year to test some self-driving Chrysler Pacifica Hybrid minivans.
The experience has convinced Fiat-Chrysler CEO Sergio Marchionne that self-driving cars are farther along than he once thought. He suspects that they could be on the road in five years.

Last year BMW announced a high profile collaboration with Intel and Mobileye to develop autonomous cars. Officially the goal is to get “highly and fully automated driving into series production by 2021.”

Musk has predicted that by the end of this year a Tesla will be able to drive from Los Angeles to New York City without a human touching the wheel.

https://www.techemergence.com/self-driving-car-timeline-themselves-top-11-automakers/
 

wegh

2+ Year Member
Mar 10, 2017
117
77
You can automate a lot more medical specialties before you can automate radiology.

Also, isn't this the same guy who is developing the tech? Why wouldn't he hype up his own stuff?
 

wegh

2+ Year Member
Mar 10, 2017
117
77
People are dismissing the pace of machine learning in diagnostic reads at their own peril. It's only a matter of computing power and a large image database before we reach a reliable diagnostic system. The computer science guys have a better perspective on the engineering here and the pace it's advancing than doctors, quite frankly. There are billions of dollars being plowed into this area.
Just automate every specialty then. Sick? Get this bloodwork and proceed to XYZ based on the results. Surgery? Use this robot that is way more accurate than a human.
 

laricb

7+ Year Member
Aug 18, 2012
489
85
NYC
Status
Resident [Any Field]
Just automate every specialty then. Sick? Get this bloodwork and proceed to XYZ based on the results. Surgery? Use this robot that is way more accurate than a human.
Yes, and before we know it there will be a robot in the White House....
 

hantah

7+ Year Member
May 10, 2012
11
16
Status
Resident [Any Field]
The technology & datasets are simply not there yet... I have worked with members of Google and some of the most promising startups in medical deep learning, I write deep learning code myself, and I currently conduct research in a big-data laboratory at a major academic institution that is pushing forward very aggressively on deep learning. Theoretically it may appear possible, but practically the technology is simply not there yet. I did develop a pulmonary nodule detector that can detect nodules better than I can on chest CT, and I worked with a team that developed a CXR nodule detector that performs on par with or better than me. I still think it's impossible to take away most of our jobs. These systems solve very narrow problems, and as soon as there is a slight perturbation in the dataset, the algorithm makes mistakes (and sometimes deadly mistakes). The CXR algorithm alone required the work of around 10 very, very strong deep learning engineers for many months, plus a very favorable, manually annotated dataset of large size. We will be presenting some variation of these at RSNA this year... and still, I do not think we're even close.
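
A tiny synthetic illustration of that brittleness (a toy linear model I made up for this post, not our CXR system): nudge a borderline input slightly and the "detector" flips its call.

# Synthetic demo of brittleness under small input perturbations: a toy
# linear "nodule detector" flips its call when a borderline input is
# nudged slightly. Stand-in example, not the CXR system described above.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # a "trained" weight vector
unit = w / np.linalg.norm(w)
x = 0.01 * unit                   # an input just barely on the positive side

def predict(v):
    return "nodule" if w @ v > 0 else "no nodule"

x_perturbed = x - 0.05 * unit     # a small nudge across the decision boundary
print(predict(x), "->", predict(x_perturbed))  # nodule -> no nodule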
 

808s&heartbreak

2+ Year Member
Mar 24, 2017
27
21
Status
Resident [Any Field]
The technology & datasets are simply not there yet... I have worked with members of Google and some of the most promising startups in medical deep learning, I write deep learning code myself, and I currently conduct research in a big-data laboratory at a major academic institution that is pushing forward very aggressively on deep learning. Theoretically it may appear possible, but practically the technology is simply not there yet. I did develop a pulmonary nodule detector that can detect nodules better than I can on chest CT, and I worked with a team that developed a CXR nodule detector that performs on par with or better than me. I still think it's impossible to take away most of our jobs. These systems solve very narrow problems, and as soon as there is a slight perturbation in the dataset, the algorithm makes mistakes (and sometimes deadly mistakes). The CXR algorithm alone required the work of around 10 very, very strong deep learning engineers for many months, plus a very favorable, manually annotated dataset of large size. We will be presenting some variation of these at RSNA this year... and still, I do not think we're even close.
THANK YOU. Nowadays, when people without dual engineering/radiology backgrounds comment on these topics, I simply stop engaging with them. These convos go nowhere. Engineers without radiology backgrounds don't get the complexity of imaging and medical diagnosis. Radiologists without technical backgrounds do not understand the technical challenges. You are one of the few who understand both.
 
  • Like
Reactions: Bobbbyyyy