For anyone who is worried AI is going to steal your radiology jobs:

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

KeikoTanaka
Full Member | 5+ Year Member
Joined: Aug 11, 2017 | Messages: 823 | Reaction score: 588

Highlights from the article:

"The software clearly is not ready for use in a law enforcement capacity," Ting said. "These mistakes, we can kind of chuckle at it, but if you get arrested and it's on your record, it can be hard to get housing, get a job. It has real impacts."

"The body camera technology is just very far from being accurate," Friedman said. "Until the issues regarding accuracy and racial bias are resolved, we shouldn't be using it."

"Critics contend that the software is particularly problematic when it comes to identifying women, people of color and young people. Ting said those demographics were especially troubling to him, since communities of color have historically often been excessively targeted by police, and immigrant communities are feeling threatened by federal crackdowns on illegal immigration."


Basically, why I'm posting this: facial recognition technology is used very often right now, especially on things like Facebook. And here it is being used for very important purposes, and it is messing up left and right. Human faces aren't even as complex as the entire human body. Don't worry, AI isn't going to steal your job in the next decade, or maybe two. Because even if it's good, it won't be trusted for a long, long time.

 
Only premeds worry about this
 
Can I get a money-back guarantee?
 
In before posts about midlevels and AI ruining/destroying medicine forever.
 
I didn't read the whole article you linked, but I believe this is the same "experiment" where they took premade software and cranked its confidence threshold way down, from the 99% recommended by the manufacturer to 80% or something. So they purposefully increased the risk of false positives just to get the results they wanted.
Was not mentioned in the article, sorry, can't confirm/deny
 
Was not mentioned in the article, sorry, can't confirm/deny

Lol it’s the ACLU purposely handicapping the software so it would make mistakes so they could get the headline they wanted: “facial recognition AI made more mistakes with women and minorities.”

From the link above:

“To illustrate the impact of confidence threshold on false positives, we ran a test where we created a face collection using a dataset of over 850,000 faces commonly used in academia. We then used public photos of all members of US Congress (the Senate and House) to search against this collection in a similar way to the ACLU blog. When we set the confidence threshold at 99% (as we recommend in our documentation), our misidentification rate dropped to zero despite the fact that we are comparing against a larger corpus of faces (30x larger than the ACLU test).”

I don’t think AI is going to take over any field, but this wasn’t really the right article to use as proof of that lol.
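The threshold effect Amazon describes is easy to demonstrate in isolation. Here's a toy sketch in plain Python (this is not Rekognition's actual API, and the score distribution is made up purely for illustration): lowering the match-confidence threshold from 99% toward 80% turns many more non-matching face pairs into reported "matches".

```python
import random

random.seed(0)

# Toy model: similarity scores for pairs of DIFFERENT people.
# Real face-matching systems output a confidence score per candidate;
# the Beta(2, 5) distribution here is an arbitrary stand-in.
non_match_scores = [random.betavariate(2, 5) for _ in range(100_000)]

def false_positives(scores, threshold):
    """Count non-matches that would be (wrongly) reported as matches."""
    return sum(s >= threshold for s in scores)

for threshold in (0.80, 0.95, 0.99):
    fp = false_positives(non_match_scores, threshold)
    print(f"threshold={threshold:.2f}: "
          f"{fp} false positives out of {len(non_match_scores)}")
```

Running this shows hundreds of spurious matches at the 0.80 threshold and essentially none at 0.99, which is exactly the gap between the ACLU's setup and the manufacturer's recommendation.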
 
I hate the internet... ****ing clickbait
 
just because something isn't perfect right now doesn't mean it won't improve to become an acceptable alternative in the years to come. technology is not stagnant.
 
just because something isn't perfect right now doesn't mean it won't improve to become an acceptable alternative in the years to come. technology is not stagnant.

This. People who ignore the progress of society will just go the way of the manufacturing industry.

Computers can and will replace virtually every job in this country. The only question is how long it takes.
 
just because something isn't perfect right now doesn't mean it won't improve to become an acceptable alternative in the years to come. technology is not stagnant.

As a 26-year-old who grew up in the era of calling my friend's home landline to speak with their parents and ask permission for my friends to come over and play, then 5 years later texting my friends on my flip phone after school, then 5 years after that being able to google porn on the same device I text my friends with, I understand this PERFECTLY. With that being said, even though this article isn't a great example because the AI's confidence threshold was artificially lowered to make the results worse (which I now know; I'm sorry for posting it without that knowledge), I think it's still an interesting look into how legislation is one of the biggest barriers to technological entry.

All it will take is one missed cancer diagnosis and a lawsuit before there's a crackdown on the use of AI in hospitals nationwide, and a push for more radiologists to look over the millions of images taken daily and verify the AI reports.
 
to google porn on the same device I text my friends with
And in 5 more years we will be able to have porn videos of our friends (and anyone, for that matter) with deepfake tech integrated into those same devices. What a time to be alive.
 
As a 26-year-old who grew up in the era of calling my friend's home landline to speak with their parents and ask permission for my friends to come over and play, then 5 years later texting my friends on my flip phone after school, then 5 years after that being able to google porn on the same device I text my friends with, I understand this PERFECTLY. With that being said, even though this article isn't a great example because the AI's confidence threshold was artificially lowered to make the results worse (which I now know; I'm sorry for posting it without that knowledge), I think it's still an interesting look into how legislation is one of the biggest barriers to technological entry.

All it will take is one missed cancer diagnosis and a lawsuit before there's a crackdown on the use of AI in hospitals nationwide, and a push for more radiologists to look over the millions of images taken daily and verify the AI reports.
Every time a missed cancer diagnosis occurs because of human error, it doesn't magically trigger millions of dollars of changes. If the liability and margin of error stay small enough, a small number of errors is not going to bring the AI software down and take us back to radiologists reading millions of images.
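To make the liability point concrete, here's a back-of-envelope sketch with entirely hypothetical numbers (the imaging volume, cancer prevalence, and sensitivity values are assumptions for illustration, not real figures): even a small per-study miss rate produces a steady stream of missed cancers at scale, which is why the margin of error matters more than any single mistake.

```python
# Hypothetical annual figures -- NOT real clinical data.
STUDIES = 1_000_000    # assumed imaging studies read per year
PREVALENCE = 0.005     # assumed fraction of studies containing a cancer

def expected_misses(studies_per_year, cancer_prevalence, sensitivity):
    """Expected missed cancers = cancers present * (1 - sensitivity)."""
    return studies_per_year * cancer_prevalence * (1 - sensitivity)

# Sensitivity values below are made up to show the shape of the tradeoff.
for label, sens in [("human reader", 0.95),
                    ("AI alone", 0.97),
                    ("AI + human double-check", 0.99)]:
    misses = expected_misses(STUDIES, PREVALENCE, sens)
    print(f"{label}: ~{misses:.0f} expected missed cancers/year")
```

Under these made-up numbers, no reading model gets to zero misses; the question is whether the residual error rate (and its liability cost) is low enough to be tolerated, which is the point above.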
 
Dogs are an emerging threat to parenthood. I feel bad for future non-babies.
 
just because something isn't perfect right now doesn't mean it won't improve to become an acceptable alternative in the years to come. technology is not stagnant.
I feel like I've been screaming this from the rooftops for years; people are constantly using the argument "coMputeRs SuCk aT ThaT RigHT nOw". Well, no sh**, this whole discussion is about whether or not it'll happen in the future, and since I don't see time stopping anytime soon, I think the possibility for anything to happen is still there. Unless we all kill each other before it happens, which is a different discussion.
 
No one’s made the obvious joke that maybe those lawmakers are criminals on the low ;)


The reality is that even if AI gets to the point where it can read films with high accuracy, there’ll still be a radiologist double-checking the AI. Same way that there’s still an attendant that manages the self-checkouts.
 
No one’s made the obvious joke that maybe those lawmakers are criminals on the low ;)


The reality is that even if AI gets to the point where it can read films with high accuracy, there’ll still be a radiologist double-checking the AI. Same way that there’s still an attendant that manages the self-checkouts.

Right but there is one checkout attendant managing 4 self checkouts. Ask the anesthesiologists on their subforum if they like that model.

I actually don’t think that’s what will happen, but it’s not “no big deal” if it does.
 
Right but there is one checkout attendant managing 4 self checkouts. Ask the anesthesiologists on their subforum if they like that model.

I actually don’t think that’s what will happen, but it’s not “no big deal” if it does.


Anesthesiologists invented the model, so we like it as long as we get to keep all the proceeds.
 
Right but there is one checkout attendant managing 4 self checkouts. Ask the anesthesiologists on their subforum if they like that model.

I actually don’t think that’s what will happen, but it’s not “no big deal” if it does.

Even at places without self-checkouts, they usually understaff the checkout lanes anyway to save money. Why do you think it's not "no big deal"?
 
Even at places without self checkouts, they usually understaff the checkout lanes anyway so they save money. Why do you think it’s not “no big deal”?

Supervising 4 CRNAs is way different than sitting your own cases. Double-checking AI reads wouldn't be quite as different, but it would change the field. But I don't think it's going to turn into a field where you just have a doc there for liability. Just a guess though.
 