Computer Vision/Machine Learning - End of Radiologist?

Almost all of these articles are written by IT people who have zero knowledge of medicine and radiology. One thing these people don't understand is that we don't just look at the images. We compare different modalities against each other and take the patient's clinical presentation into account, then reach a final conclusion based on a plethora of information.

A decade ago, CAD (computer-aided detection/diagnosis) came onto the market, especially for mammo interpretation. Even now, after multiple revisions, the technology is horrible and nobody even looks at its results, though most places have the software in their PACS.

This won't happen for at least the next 40-50 years. But it is always a topic of interest, especially for people who have zero knowledge of medicine.
 
Let me give you an example.

So far, no one has been able to design software that can confidently recognize people's faces. Just think about it: this seems like a very simple task for a human. A 3-month-old infant can easily recognize his or her mother and never makes a mistake. Yet all these mega-computers connected to each other are not capable of a task that is considered very basic for a human.
 
Never mind that if you were reaching for low-hanging fruit, it would be easier to automate primary care. You come in, a technologist draws some blood and takes your vitals, the results get fed into a computer, and your blood pressure medications get adjusted. If you need a specialty referral, the computer gives it to you.

Follow the algorithm. Easier to automate than computer vision.
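To make "follow the algorithm" concrete, here is a toy sketch of the kind of rule a computer could apply. Every threshold, dose, and cutoff below is invented for illustration only, not clinical guidance:

```python
# Toy rule-following sketch. Hypothetical thresholds and doses, invented
# purely for illustration (NOT clinical guidance).

def adjust_bp_med(systolic: int, dose_mg: int, max_dose_mg: int = 40):
    """Suggest (new_dose_mg, refer_to_specialist) from one clinic visit."""
    if systolic >= 180:
        return dose_mg, True                            # beyond the algorithm: refer
    if systolic >= 140:
        return min(dose_mg + 10, max_dose_mg), False    # titrate up
    if systolic < 100:
        return max(dose_mg - 10, 0), False              # titrate down
    return dose_mg, False                               # at goal: no change

print(adjust_bp_med(systolic=152, dose_mg=20))          # -> (30, False)
```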
 
A computer can't take my order at McDonald's. You think it's going to replace doctors? lol. Everything in medicine is far more complex than people who are not physicians can possibly comprehend.
 
I don't think computer vision is the problem. In fact, Facebook's DeepFace project can recognize two photos as being of the same person even with fairly radical differences in angle and lighting, and even years of aging between photos, with accuracy as good as a human's.
I think the problem is the expert-system part of the technology. There are a few prominent researchers who are both computer scientists and radiologists, but the vast majority are one or the other, and the latter have no interest in collaborating with the former. It's almost comical, because radiology is the most computing-applicable field of medicine.
 
As someone who has degrees in computing and experience working as an IT professional writing hundreds of thousands of lines of code, the short answer is: I ain't losing sleep over this one bit.
 
I don't think computer vision is the problem. In fact, Facebook's DeepFace project can recognize two photos as being of the same person ...

You realize it uses people's friends and a host of other data (location data, when someone posted, who they were with, whether two people were in a similar place, whether you messaged each other recently; I wouldn't be surprised at all if it scanned your messages for things like "that picture of us in front of the stadium is so cool") to make those assumptions, right? I wouldn't be surprised if that data had more effect on the answer it chose than the actual image. If you have enough data, the chances of successfully identifying the person without even looking at the picture itself are incredibly high.
 
You realize it uses people's friends and a host of other data to make those assumptions, right? ...

That's not true at all. The only data set is images. DeepFace was a huge step forward for machine vision. What you are describing is trivial.
See the study: DeepFace: Closing the Gap to Human-Level Performance in Face Verification
 
That's not true at all. The only data set is images. ...

Re-read your article, chief.
 

I read and re-read it when it first came out. I've also seen the image data sets (over 4 million images of about 4 thousand different people), often with non-trivial differences in lighting, angle, and age progression. It was one of the most talked-about computer science research papers to come out so far this year. Here's a link:
http://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf

Seriously, what you described would barely pass as an undergraduate project...
 
There are a few prominent researchers who are both computer scientists and radiologists, but the vast majority are one or the other, and the latter have no interest in collaborating with the former. It's almost comical, because radiology is the most computing-applicable field of medicine.
If computer-aided detection does in fact ever gain traction... this is not something that would hold it back. If it's profitable, it will be done.
 
So, no. Computers are no longer inferior to humans at recognizing patterns in images, even very complex images. Meanwhile, they are still vastly superior to humans at remembering and computing large sets of information. That's the other side of machine learning: the computer can recognize the patterns, great. But how can it deduce the meaning of the image?

Old-school CAD used the old ideas of artificial intelligence: expert systems. Essentially, coding rules for the software to follow that are supplied by the domain expert (in this case, experienced radiologists). This is not horrible in simple cases. In fact, back in 1997 Lars Edenbrandt showed his software outperforming expert physicians at diagnosing heart attacks from EKG results. But the idea of expert systems as the sole source of intelligence has been abandoned, because the rules are too complex (e.g., a medical education, intuition, experience) to code comprehensively. The solution was "neural network" learning algorithms. A simplification: take a set of images x with a corresponding set of conclusions y, so that a new image x(n+1) produces a computer-generated conclusion y(n+1) based on the rules learned from the existing (x, y) pairs. When the output is corrected, the correction is "learned," so that each successive prediction lands closer to the corrected conclusion than the previous one did. The advantage is that each correction generates rules (expertise) that do not have to be hand-coded into the program by an expert, sometimes tiny rules that maybe even the experts didn't notice apply! Imagine being able to harness all the decision rules learned over tens of millions of radiology images (what human could ever have that kind of work experience?) while guided by a traditional, well-produced expert system.
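To make the "learned correction" idea concrete, here is a minimal sketch in the spirit of the paragraph above: an online logistic-regression learner (a one-neuron "neural network") whose weights are nudged by every correction. The data here is random noise, standing in for (image, expert conclusion) pairs, purely for illustration:

```python
# Minimal sketch of learning from corrections: online logistic regression,
# i.e., a one-neuron "neural network." Random noise stands in for
# (image, expert conclusion) pairs.
import numpy as np

rng = np.random.default_rng(0)
n_features = 64 * 64            # pretend each x is a tiny flattened image
w = np.zeros(n_features)        # the learned "rules"
b = 0.0
lr = 0.1                        # learning rate

def predict(x):
    """Model's probability that image x is a positive case."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

for _ in range(1000):                   # each loop = one new case + correction
    x = rng.normal(size=n_features)     # stand-in for image x(n+1)
    y = float(rng.random() < 0.5)       # stand-in for the expert's y(n+1)
    error = predict(x) - y              # how far off the computer was
    w -= lr * error * x                 # the correction adjusts every rule
    b -= lr * error                     # no expert hand-coded any of it
```

The point is the last three lines: no rule is ever written by hand; each expert correction moves the whole set of weights a little closer to the expert's answer.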

So that's the theory, and it's been around for a while. The problems include both the complexity of a well-working learning algorithm and the huge processing power needed for so many data points. But both are slowly being solved. I'll reference another illustrative research project, this time from Microsoft: the Adam project. It has done a lot to solve both issues, to the point where Adam can not only identify that your user-submitted photo is of your dog, but also his or her correct breed, while using processing power available in mid-level consumer hardware.

I'm not writing this to disparage radiology. I confess I know very little about radiology (obviously). But some of you are buried under a rock in terms of what the cutting edge of technology looks like today. Interacting with patients, messy textual data, emotional expressions, fluid physical motion (surgery): these are all harder barriers for technology in medicine, but image recognition and classification is not.

Definitely open to critical replies from people who read the whole thing. I'm not a radiologist, and I wouldn't really call myself a computer scientist either.
 
As I said in my original post, most people who design these systems or talk about them have ZERO knowledge of medicine and radiology.

The mindset behind all these discussions is: if we see A on a CT, then it means cancer. If we see B, then it means pancreatitis. So if we gather 1,000 cases of A and feed them to the system, then once the system recognizes A, it will diagnose cancer. This is not how it works. Period.

Now, I may be buried under a rock in terms of cutting-edge technology, as you said. But I guarantee you that my buried-under-a-rock knowledge of technology is far more accurate than your knowledge of medical imaging. Just to give you an idea that may seem very new to you: a great part of medical imaging is subjective. The final conclusion is usually made by putting many, many things together, including the patient history, lab data, and much else. Even then, different people have different thresholds for calling different things. Ultimately, it is I who take responsibility for calling or not calling something. So as you see, it is not like the algorithms you are talking about. It is not like recognizing the breed of your dog.

A set of x images resulting in a set of y conclusions is not how medical imaging works. A lot of the time a set of x images means nothing. A lot of the time a y conclusion comes out of nowhere. It is not really what you think.
 
As a computer scientist who actively programs, and a current radiology resident, the idea of a machine replacing a radiologist in the near future is absurd. The machine can't even recognize 30% of what I shout into PowerScribe. And for the record, this is coming from someone who would be excited, not terrified, by a level of AI that supplants human image recognition (combined with the million other things we do as radiologists to get those final few words that follow "IMPRESSION:").

But I've accepted that in my lifetime this will remain a bunch of scattered research projects. Maybe some will achieve modest success here or there with simple modalities, but nothing on the scale, or with the impact, required to replace even a single radiologist.

What I do see happening in the next few decades, and what remains a personal area of interest, is improvement in computer-aided diagnosis. This will turn into a growing arsenal of tools built into PACS that let us generate reports faster by automating some of the more mundane aspects of what we do. Only time will tell.
 
As I said in my original post, most people who design these systems or talk about them have ZERO knowledge of medicine and radiology. ... A set of x images resulting in a set of y conclusions is not how medical imaging works. ...

Interesting. I understand that other data points, such as patient history and lab data, can be used, but those would be very easy to include as structured textual data alongside the image data. The subjectivity is definitely real. I'm sure Adam might be unsure between a Siberian Husky and an Alaskan Malamute, much like a human would. The difference is that converting rule markers into percentage points of certainty is much simpler for a computer, so even the subjectivity is less subjective.
And if image x means nothing, then "means nothing" is itself a conclusion. As for a y conclusion coming out of nowhere: that I'm a little confused by.
That said, I'm starting to get a better idea of the limitations.

The mindset behind all these discussions is: if we see A on a CT, then it means cancer. ... This is not how it works. Period.
How what works? Radiology diagnosis? I bet it doesn't. I also know that's not how a neural-network learning algorithm works. Even the most basic image-detection system would not behave that way, because of classic statistical over-fitting. The basic outline of a learning algorithm, besides its many nodes (e.g., what might be cancer, what might be pancreatitis, what might be a, b, c, etc.), each with numerous rules, is out-of-sample predictions that are corrected (given the real, documented conclusion) with each iteration (each case added). The difference between the algorithm's prediction and the real conclusion at each iteration is the basis for creating new nodes/rules or changing existing ones.
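Here is a minimal sketch of that out-of-sample check, using scikit-learn on synthetic data (a stand-in for labeled imaging cases). Over-fitting shows up as a large gap between accuracy on the training cases and accuracy on cases the model never saw:

```python
# Sketch of out-of-sample evaluation: hold back cases the model never
# trains on, and compare its accuracy there against training accuracy.
# Synthetic data stands in for labeled imaging cases.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("training accuracy:     ", model.score(X_train, y_train))
print("out-of-sample accuracy:", model.score(X_test, y_test))
```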
 
As a computer scientist who actively programs, and a current radiology resident, the idea of a machine replacing a radiologist in the near future is absurd. The machine can't even recognize 30% of what I shout into PowerScribe. ...

I can imagine your criticism of PowerScribe (I had to look up what it is). I understand the same company is also involved with Apple's Siri speech recognition, which one would expect to be advanced. But when I said that interacting with patients (along with the other things I listed) is a bigger barrier to tech in medicine than image recognition (IR), I was talking partly about speech recognition (SR). Indeed, SR is a much harder engineering problem than IR. I know that's counter-intuitive, and I could explain why later, but that's not the point of this thread.

In any case, couldn't CAD be a sliding scale from essentially useless to doing considerable work? I agree that radiologists being fully replaced in our lifetimes is unlikely. That's why it's silly for radiology departments not to collaborate far more with computer scientists than they have. Unless I'm missing something about radiology images vs. other images.
 
CAD is widely available in mammo. It's been out there for almost 10 years. Guess what? The more experienced mammographers don't even look at its results. You may think this is weird, but it is what it is. It is too sensitive for calcifications and does a horrible job with masses and architectural distortions. Bottom line: it causes more hassle than benefit.

FYI, mammo is one of the most algorithmic, standardized, mundane modalities in radiology (I have not done any mammo in the last 3 years). Even the projections in mammo are much more standardized than a CXR or a head CT. Sorry if someone here is a mammographer, but I hate mammo.
 
A radiologist won't be replaced for a very, very, very long time. By the time that happens, the technology will be so good that we could replace many other aspects of medicine that are far more algorithmic, such as simple, uncomplicated HTN management (it's fairly algorithmic most of the time, though there are always intricacies a machine cannot handle, such as a physical exam). And I don't see the latter on anyone's radar.

There are many hurdles computers will have to overcome. 1. They will have to be proven, at a minimum, non-inferior in findings and diagnosis. 2. It assumes radiologists will not adjust their reports to help their colleagues more, such as by expanding on the impression. 3. You will have to convince every single health care provider relying on these radiology reports that they are safe to use. 4. If computers do it all, how can other doctors discuss imaging findings and alternative interpretations of a finding with a person? I do this probably at least once a week; surgeons and ED docs probably do it far more often. 5. Who will do the procedures that radiologists do now?

I just don't see this happening. People not in medicine, as shark pointed out, simply cannot understand the huge complexity that goes into making decisions about patients.
 
CAD is widely available in mammo. It's been out there for almost 10 years. Guess what? The more experienced mammographers don't even look at its results. ...

But should you be blaming CAD, or the reliability of breast X-ray images themselves (vs. breast ultrasound images)? And I'm guessing humans can't do better with mammograms without CAD (whatever that is in this case).

I'm more interested in what humans CAN do, and how that can be automated to some degree with computer vision, not in solving the limitations of the images themselves.
 
A radiologist won't be replaced for a very, very, very long time. ... There are many hurdles computers will have to overcome. ...

Yes, that's why HTN management wouldn't be replaced. Pretty much anything physical is difficult, since you have to deal with a crapload more than just vision: image sensors, range data, tactile/force sensors, guidance, kinematics, physical manipulation, and who knows what else. Not to mention human-computer interaction and probably sound. (This is what makes the self-driving car so difficult: it's computer vision plus a host of other problems.)

Those are very good points, but none of them seem insurmountable. Definitely things to think about, though. 1. No doubt it would have to achieve what DeepFace did and match or exceed human performance. 2. That's a very good point about radiologists expanding on their impressions; I don't know enough to comment. From what I've read, it seems radiologists also give their medical opinion and sometimes treatment advice on what they detect? That would fall under a different field, natural language processing, and would be challenging in the context of medicine, I agree. 3. I think that will only come with time, no matter what statistics or proof can be shown. 4. I don't really understand the problem here; anyone can discuss the "computer's" findings. 5. I don't know what procedures there are. I'm really only talking about computer vision and that aspect of a radiologist's job!
 
But should you be blaming CAD, or the reliability of breast X-ray images themselves (vs. breast ultrasound images)? ...


It seems you didn't understand the point. I wasn't talking about the limitations of the images, or mammo versus US. I was talking about mammo itself and how humans versus computers interpret the images. Even an inexperienced radiologist does a much better job than CAD at reading mammo, to the point that most radiologists ignore the CAD input. So yes, I blame CAD, because it is much inferior to a human. Human + CAD equals Human, because most radiologists ignore the CAD results anyway.

FYI, mammo (or, as you call it, breast X-ray) is a completely different modality from US and gives totally different information that cannot be replaced by US or even MRI. Just to give you an idea: to a non-radiologist, MRI is always better than CT, CT is better than US, and US is better than X-ray. That is not the case. X-ray often gives information that cannot be obtained from MRI.

Knowing what information comes from mammo and what comes from breast US is very basic radiology. Without knowing it, you are completely off track. Without the medical knowledge, and without knowing the limitations of the images and of each modality, you cannot give a reasonable opinion about image interpretation (or about computer programs interpreting images).

I go back to my original post again. The people who talk about these computer programs have a very wrong idea of how medicine and radiology work. I don't think this discussion is going anywhere.
 
It seems you didn't understand the point. I wasn't talking about the limitations of the images, or mammo versus US. ... X-ray often gives information that cannot be obtained from MRI. ...

It's interesting you said that about MRI vs. X-ray, because in my first year, when we were starting out and comparing images, I said something similar about how I felt X-ray would be preferred in some situations. Not claiming this was due to knowledge, just a "feeling" I had looking at the image. Funny now, because people were like, "NO, MRI IS ALWAYS BETTER."
 
You have to look at them as different modalities. For example, calcium is sometimes very difficult to find on MRI, as in calcific tendinopathy of the rotator cuff. On X-ray, on the other hand, it is a first-year call.
I had the opportunity to work with some of the best MSK attendings, and almost all of them refused to read an MRI without having the X-rays available.

It's similar with head CT and brain MRI. On brain MRI, for example, calcium and hemorrhage may look similar (especially on GRE). On CT, it is very easy to differentiate them.

For patients at high risk of breast cancer, a breast MRI every year is currently recommended. But it does not replace mammo; you have to do both mammo and MRI every year.
 
It seems you didn't understand the point. I wasn't talking about the limitations of the images, or mammo versus US. ... I don't think this discussion is going anywhere.

Thanks. After more reading on the subject of computer pre-processing of mammogram images in particular, I learned more about their usefulness and uselessness. But you have to understand that CAD is not a static technology, and machine vision is a growing field. As it turns out, the computational problems with mammograms are a well-known challenge for machine vision in certain types of images. Without getting mathy, it has to do with regions of an image in the training sets being assumed to be independent and identically distributed, when really they are too correlated with other regions of the image and with other images of the same object. This tends to lead to false positives, or even more likely, a situation where you get multiple positives when there should be one. Several paradigms under an idea called multiple instance learning have been put forth, with varying degrees of success, but there's no theoretical (mathematical) reason why it should remain unsolvable. Which brings me to the one question I keep asking (and this is where I want the discussion to go): what is unique (if anything) about radiology images that makes them impregnable to future machine vision? TBH, I haven't gotten a satisfying answer at all, and certainly not the answer that "computer people don't know about medicine." Of course the computer scientists at, for example, Siemens' medical division need knowledge of the domain they are working in (in this case radiology), which is also why I kept confessing my own naivete about radiology. I was hoping someone could explain it to me in lay language, as I have been doing with machine learning and vision. I'm starting to think there is no specific problem on the medicine side other than reluctance, illustrated by thoughts like "look at the current problems with my (not cutting-edge) CAD or PowerScribe; computers are useless and could never be very useful here."
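A minimal sketch of the multiple-instance idea, since it's easier to see than to say: treat the study as a "bag" of region scores and label the bag, not each region, so several correlated hot spots still produce one call. The scores below are hypothetical model outputs, for illustration only:

```python
# Sketch of multiple instance learning at prediction time: the study is a
# "bag" of region scores, and only the bag gets a label, so correlated
# regions of the same lesion don't each become a separate finding.
# Scores are hypothetical model outputs, invented for illustration.
import numpy as np

def bag_call(region_scores, threshold=0.5):
    """Positive if ANY region is suspicious (the classic max-pooling rule)."""
    return float(np.max(region_scores)) >= threshold

# Three overlapping regions of one lesion all score high, yet the study
# is reported as a single positive rather than three findings.
scores = np.array([0.91, 0.88, 0.86, 0.12, 0.05])
print(bag_call(scores))  # True -> one positive study
```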
 
Thanks. After more reading on the subject of computer pre-processing of mammogram images in particular, I learned more about their usefulness and uselessness. ...

Here's one: when things go wrong, people aren't willing to stake their lives on a machine successfully interpreting an image when a trained physician who has spent 30 years training and practicing could. You're going to say, "but we can make a system that is more effective than doctors!"

So my suggestion to you is to go do it. You'll make a ton of money and your obsession with computers will be fulfilled. Countless people are trying and none of them can do it, so keep living in dreamland. Go computers. Except my toaster somehow still manages not to cook whatever is in it consistently, but hey, I'm gonna hand it my life. Yay.
 
Thanks. After more reading on the subject of computer pre-processing of mammogram images in particular, I learned more about their usefulness and uselessness. ...
It can't simply be articulated, because it takes us five years of dedicated training. I can't explain all of the subtleties to you on an Internet forum, and you can't explain all the computer science buzzwords you keep throwing out there and hope we understand your perspective.

Detecting an abnormality on an image is only a fraction of the process, which seems completely lost on you.
 
I don't understand this mindset among computer science people where everything can be run by computers. If it's so easy, just go do it. Most CS people sit around in a circle talking about "what could be the future" instead of making it the future.
 
It can't simply be articulated, because it takes us five years of dedicated training. ... Detecting an abnormality on an image is only a fraction of the process. ...

To be fair, that's ALL I've been talking about, along with outputting likely interpretations that a specialist can ultimately decide on, not the feasibility of optimizing or replacing any other task.

You have a good point re: perspectives explained on a forum, but some of the combative responses make me inclined to think the obstacles are not necessarily technical in nature.
 
To be fair, that's ALL I've been talking about, along with outputting likely interpretations that a specialist can ultimately decide on. ...

...Yes, medical students and physicians should celebrate CS people who know nothing about medicine proclaiming how easy it would be to do what you say. You act like physician buy-in would somehow determine whether it catches on. It wouldn't; if your outcome is so superior, it will catch on, because physicians don't set medical policy. All your statements just show you lack so much understanding of the whole process that I have no idea how you can make statements about changing it.
 
With an infinite amount of time, processing power, and resources, sure, radiology can be automated. Pretty much anything can. However, I (and most of the other people posting here) believe we are much further from that than freemontie does. Can radiology be largely automated in 150 years? Sure. In 10 years? No. So in between, there is likely to be a gradual increase in automated tools that replace some of the mundane parts of the job.

Right now, there isn't even a reliable, commercially available tool that can:
- tell me whether a tumor has grown between a prior study and the current study
- label and count the MS lesions (or metastases) present in the brain

These are examples of the simplest tasks a computer could handle for me. There is plenty of low-hanging fruit out there that can gradually be automated. We are just nowhere near even partially automated.
 
Oh, and that whole argument above about a computer not being able to handle HTN or diabetes is silly. I don't need a robot that can touch a patient. I just need a tech I pay $10/hour to enter values into the computer, which spits out the prescriptions.
 
... I hate mammo.

You sure you hate Mammo-Grahams?

 
No hospital would be willing to take on the liability of using a computer program to make diagnoses. You have to realize that hospital and group leadership see a physician as a liability sponge, not just a revenue machine. Even if you outlive your usefulness as a diagnostician, they'll still want you double-checking things so there's someone to absorb the lawsuits.
 
Thanks. After more reading on the subject of computer pre-processing of mammogram images in particular, I learned more about their usefulness and uselessness. ... What is unique (if anything) about radiology images that makes them impregnable to future machine vision? ...

Had to chime in. I did my undergraduate degree in computer science and will be a radiology resident next year. I took courses in machine learning, and I agree that at some point in the distant future computers MAY be able to determine the diagnosis of various studies with performance comparable to a human being, but right now the computational complexity of this is staggering. I think there are MANY OTHER fields in medicine that could more easily be reduced to machine learning technologies, simply because the variables involved are more limited. A CT or MRI study, for example, contains thousands of variables, and given the variance of normal anatomy it is not simple to train a function that maps this to a precise differential. As someone above mentioned, we only recently got good at RECOGNIZING/DETECTING human faces (a much easier problem). In contrast, most ER or family practice visits could probably be represented by 50-100 well-formed complaints (cough for 2 weeks with a fever of 101, etc.), and the management of all of these complaints is extremely well described. That is MUCH easier to work with from a computational perspective than X-rays, let alone CT scans or MRIs.
The power of machine learning is very real, but radiology will not be anywhere close to the next field of medicine that sees its application. I think shark's example of the mammo CAD limitations is a great illustration of how difficult it has been. So I'd say rest easy, my radiology peeps!
 
While I don't doubt there will be many uses for machine learning algorithms and natural language processing in the future, from a physical-science point of view I don't see computer algorithms replacing radiologists (or any doctor, for that matter) within our lifetimes.

From my understanding, these algorithms rely on statistical methods, since precisely solving Bayesian integrals over non-trivial numbers of realistic real-world parameters would take a very long (perhaps infinite) amount of time. Mathematically, therefore, there will always be an error function (these algorithms are based on statistical inference), as represented by the bias-variance tradeoff: http://en.wikipedia.org/wiki/Bias–variance_tradeoff

This error can't be overcome simply by expanding the data set; i.e., the difference between 100,000,000 data points and 10^10000 data points would yield an infinitesimal increase in accuracy, as described by the Bayes error rate in statistical inference: http://en.wikipedia.org/wiki/Bayes_error_rate

(Which, by the way, isn't fully characterized even in mathematical theory as of 2014.)
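For reference, the decomposition being invoked, written for squared error; the last term is the irreducible noise floor that no amount of data removes (the Bayes error rate is its classification analogue):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```

More data shrinks the variance term; the sigma-squared floor stays.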

So the question becomes: with assumed infinitely large data sets, can the "best" algorithm outperform what is in service today, i.e., doctors? Can its minimum achievable error rate be lower than a doctor's? And even if it could, would we as humans be willing to trust a computer with a known error rate? This is an interesting question, and one to which I believe the answer is no. The number of parameters that must be evaluated in making a clinical decision is very high, and the resulting curse of dimensionality would probably take a computer an impractically long time to overcome, even with lower-dimensional manifolds and other model-simplification techniques.

As an aside, you always have the problem of emerging health epidemics: new influxes of disease into the human population where existing data is scarce and the computer algorithms would be useless.

Although I am by no means an expert PhD in computer science, so I might be missing something (and would welcome someone chiming in), in my view the answer is no, or at least not within our lifetimes. As an example, computerized EKG interpretation algorithms are still no better than non-expert clinicians (probably because they are trying to recognize highly non-linear EKG patterns, which is computationally very hard):
http://www.sciencedirect.com/science/article/pii/S0022073611001622

And that's probably one of the areas most amenable to automation. I would be more worried if they figure out EKG interpretation. That would be step 1.
 
While I don't doubt there will be many uses for machine learning algorithms and natural language processing in the future, from a physical-science point of view I don't see computer algorithms replacing radiologists (or any doctor, for that matter) within our lifetimes. ... I would be more worried if they figure out EKG interpretation. That would be step 1.

I don't understand most of what you just said, but I like the cut of your jib.
 
Having a computer do this is not the major hurdle. It's building a computer compact and affordable enough for hospitals and offices. Right now, the computer most likely to get close to a radiologist, IMO, is IBM's Watson. Too bad it's super expensive and physically gigantic. Its storage is RAM rather than hard drives, because retrieval would otherwise take too long. It could be "taught" with millions of prior CTs, radiographs, etc., and apply that to a new study. I see something like this happening in my lifetime (the next 50 years), but it won't replace radiologists. Someone will have to "verify" its results and assume liability, and that's where a physician comes in, to override a diagnosis or agree with it. So volume will continue to go up. :)
 
Having a computer do this is not the major hurdle. It's building a computer compact and affordable enough for hospitals and offices. ...

Watson can't come close to matching the functionality of a radiologist. I understand what you're getting at, but you're orders of magnitude off. Watson isn't a millionth of the way there in terms of computational power and ability. Not even close.
 
Having a computer do this is not the major hurdle. It's building a computer compact and affordable enough for hospitals and offices. ...
Say wut, bruh? Ever heard of Moore's law? Watson used to be the size of a big room, but now it's the size of three pizza boxes. They're trying to make it into the size of a single pizza box. Hardware is not the limitation here...
 
This is very interesting. I suspect this will happen in our lifetimes, first for primary care, EM, and IM, which, as others have mentioned, are very algorithmic. Computers have already come to retail: Lowe's is rolling out a fleet of robots to assist customers in its stores. Soon after retail is taken over by machines, you'll see this creep into other fields like medicine. Of course, I don't think physicians will necessarily lose their jobs, but their pay will go down, whereas some lower-level staff may lose theirs.
 
Demonstration of IBM Watson in collaboration with Memorial Sloan Kettering Cancer Center, and how Watson is used in a case study. This is the beginning of things to come.
 
Watson handles natural language data, i.e., text. If you gave Watson a picture, it would crash.

Machine learning for text is currently much better than machine learning for images. Watson may have won Jeopardy!, but Google's image processor achieved only 15.8% overall accuracy at recognizing 20,000 different object categories. Image processing is a lot harder statistically and computationally.

Think about how much easier it is to recognize a cat visually than it is to identify some kind of abnormality on a scan where there's a lot of natural anatomical variation between different patients. One day they'll probably get there, but radiologists will be needed for a long time to come.
 
Thanks for the link; things have gotten substantially better than the 2012 data the OP posted. I want to read the paper to understand what these error metrics actually mean.

From a quick look, it's still far from perfect. The 6.6% top-5 number means the computer included the right category among its top 5 guesses on the "image classification" task, described here and here:
http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf
http://arxiv.org/abs/1409.0575

Top-1 error would be higher, and I'm not sure how similar that image-classification task is to anything a radiologist does.
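For anyone curious what the metric means mechanically, here is a small sketch of how a top-5 (here generalized to top-k) error is computed; the scores and labels are made up:

```python
# Sketch of the "top-5 error" metric: a prediction counts as correct if the
# true label appears anywhere among the model's k highest-scoring classes.
import numpy as np

def top_k_error(scores, true_labels, k=5):
    """scores: (n_images, n_classes); true_labels: (n_images,)."""
    top_k = np.argsort(scores, axis=1)[:, -k:]          # k best guesses per image
    hits = (top_k == true_labels[:, None]).any(axis=1)  # true label among them?
    return 1.0 - hits.mean()

# Two images, four classes, k=2: the first image's true class is among its
# two best guesses, the second's is not, so the top-2 error is 0.5.
scores = np.array([[0.1, 0.6, 0.2, 0.1],
                   [0.7, 0.1, 0.1, 0.1]])
print(top_k_error(scores, np.array([2, 2]), k=2))  # -> 0.5
```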
 