I get a kick out of "it's the end of the world! the sky is falling!" type comments. I think it would be incredibly premature to raise any alarm bells considering we've hardly got any artificial intelligence taking over physician duties (unless you count midlevels *ba-dum-tss* 😅). At this point it seems more limited to the research domain -- strengthening and validating clinical indices and whatnot. Anyway.
It seems like you are already convinced AI will take over despite zero experience with clinical EEG interpretation. Boeing thought its software was smarter than pilots too, and the company might go bankrupt over that mistake. Persyst is an example of this 'AI' in EEG, and it is completely worthless. It cannot tell a sharp wave from a vertex wave, or a seizure from someone eating lunch. EKG is much simpler than EEG, with fewer potential diagnoses and fewer datapoints, yet it still lacks any halfway reliable computer interpretation.
Hi xenotype. I know you're quite a frequent poster here and I hope not to offend or be argumentative. I'm just starting my career in neurology and intend to visit this forum often! So please allow me to respectfully say that I don't mean to put anyone on the defensive, and your reply isn't addressing exactly what I meant. I just meant to open the discussion, since it seems not to have been discussed here. AI is growing in ubiquity in medicine. That's not to say I am convinced AI will take over. I'm just speaking from the perspective of an experienced computer scientist and engineer: these problems are becoming increasingly tractable for computational analysis.
Persyst probably is worthless. After reading through their publications I see no mention of AI or machine learning. It looks like they're rooted in 2014 tech, so they're probably using some Douglas–Peucker algorithm-type stuff, which I loved using back then when writing my own algorithms for analyzing polysomnography.
I am saying that we now have hybrid deep neural networks which can use all of the things I mentioned to forecast and/or predict seizures. Implanted and wearable tech can add features like trends in blood gas, K+, Na+, pH, glucose, osmolality, lactate, cortisol, actigraphy, polysomnography, circadian and diurnal data, barometric pressure, moon phase, and so on -- plus clinical and socioeconomic data. All of the above have been correlated with seizures. We're talking data cubes of enormous dimensionality. I believe computers are probably already outperforming humans here, and we'll see more practical implementations very shortly.
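To make "data cube" concrete, here's a toy numpy sketch of stacking hourly EEG summaries with non-EEG trends for one patient-day. Every feature name, shape, and number here is invented for illustration -- this isn't from any real dataset or pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hourly summaries for one patient-day:
# hours x EEG channels (10-20 montage) x frequency bands
eeg = rng.random((24, 19, 5))
# hours x wearable/lab trends (e.g. glucose, Na+, K+, lactate, cortisol, actigraphy)
extra = rng.random((24, 6))

# Flatten the EEG sub-cube and join everything into one feature matrix
features = np.concatenate([eeg.reshape(24, -1), extra], axis=1)
print(features.shape)  # (24, 101)
```

And that's only two modalities at hourly resolution for a single day -- add more sensors and finer time steps and the dimensionality explodes quickly.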
Bill Gates likes to say: "We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction."
xenotype said:
This is potentially realistic but I disagree. Automated EKG analysis has not saved anyone any time.
I disagree. AEDs (automated external defibrillators) decide whether to shock based on automated EKG rhythm analysis, and they have presumably saved many lives.
My research focus is in deep learning on neurologic time series data, specifically ECoG. I have an MD and am a Neurologist, but I do this research in the role of an engineer/computer scientist. I work closely with Epileptologists but am not one myself; I do read EEGs clinically, though. Epileptologists understand very little about math, computer science, and machine learning. They do not know the difference between a GAN, a CNN, and an MLP, or SIREN vs ReLU vs PReLU. Whether or not a GUI is handed to them, they will be overwhelmed by the pace of development in deep learning research. Papers that were hot at NeurIPS even 5 years ago would get yawns from attendees today. The other issue is that the SOTA changes month to month, whereas clinical practice changes slowly. With end-to-end ML reads and no Epileptologist involvement, all it takes is one malpractice lawyer with a CS degree to say, "Why didn't you use Transfer Learning with Fourier Transforms in a Graph CNN per Blah et al.'s SOTA paper published last week at EMBC? Your algorithm's FN rate is 0.001, but their work showed an FN rate of 0.0001."
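For anyone reading along who hasn't met those activation functions, the differences fit in a line of numpy each. A purely illustrative sketch (`w0=30` is the default frequency used in the original SIREN paper; `a=0.25` is just a typical PReLU initial slope):

```python
import numpy as np

def relu(x):
    # ReLU: zero out negative inputs
    return np.maximum(0.0, x)

def prelu(x, a=0.25):
    # PReLU: like ReLU, but negative inputs keep a learnable slope `a`
    return np.where(x >= 0, x, a * x)

def siren(x, w0=30.0):
    # SIREN layers use sin(w0 * x), which suits periodic/oscillatory signals
    return np.sin(w0 * x)

x = np.array([-1.0, -0.1, 0.0, 0.5])
print(relu(x))   # -> 0, 0, 0, 0.5
print(prelu(x))  # -> -0.25, -0.025, 0, 0.5
print(siren(x))
```

The point is that only SIREN's activation is periodic, which is part of why it behaves so differently from the ReLU family on waveform-like data.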
What will likely be the first phase of development is an end-to-end ML interface that pre-reads EEGs and flags segments of interest for the Clinical Epileptologist to focus on during a formal read. I have confirmed with numerous Epileptologists that they are open to this and feel it is the first acceptable paradigm shift. This protects the engineers, since they are not making the final call; if no MD were involved, false negatives could get the tech company sued. It also protects the MD in the same way from missing things: he/she can rely on the algorithm by choice or read every segment if desired.
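The pre-read idea is simple to state in code. A hypothetical sketch (the threshold and probabilities are made-up numbers, not from any real product): the model only nominates segments, and the reader makes every final call:

```python
def flag_segments(probs, threshold=0.5):
    """Return indices of EEG segments whose model-estimated event
    probability warrants human review. The reader can still open
    every segment; this only prioritizes the worklist."""
    return [i for i, p in enumerate(probs) if p >= threshold]

# Per-segment probabilities from some upstream model (invented numbers):
print(flag_segments([0.02, 0.71, 0.30, 0.93]))  # -> [1, 3]
```

In practice the interesting engineering is choosing `threshold` so that the false-negative rate stays acceptable, since anything below it may never get a close look.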
I see something similar on the horizon for Radiology, and when pre-reads become standard, it could reduce the number of working Radiologists by as much as 80%: a single Radiologist with ML pre-reads will be able to do the work of 3-5 Radiologists. Same thing will probably happen for EEG.
I have also submitted papers on ML in stroke (Computer Vision for detecting focal deficits via tele-encounters), but this is well behind EEG and Rads reads.
Check out Yannick Roy's recent JNE review on DL for EEG analysis:
Deep learning-based electroencephalography analysis: a systematic review - IOPscience
Lastly, if you are interested in making a career of ML/DL research, you will have to leave clinical work to focus entirely on research, because there are too many smart, hard-working people in this field now with backgrounds in Physics, Applied Math, CS, and EE from MIT, Stanford, Berkeley, etc. They dedicate all of their time to this, and a Clinician simply cannot keep up unless acting as a collaborator. It seems like if you do not post to arXiv on Monday, your idea will be posted by someone else on Wednesday of the same week.
Hey me too! As I said above, I'm a computer scientist and engineer, and much of my research has involved machine learning in various domains from waveform analysis to computer vision.
I see you're focused on time series data. I'm wondering what the future holds for accessible hybrid NNs. (I think Keras has something?) Anyway, I guess you're probably comfortable with LSTMs or other RNNs, but I'd like to see some studies combining RNNs and CNNs. What I mean is models over high-dimensional data cubes, with different architectures combined to predict the dependent variable(s), if that makes sense. I haven't looked into this much, though. But Google Search's spelling model was reportedly trained on 680 million features. The tech is getting there.
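A bare-bones numpy sketch of the CNN-then-RNN composition I mean -- random weights, no training, all shapes invented; in practice you'd build this in Keras or PyTorch, but the wiring is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    # x: (T,) signal; kernels: (K, width) -> feature maps (K, T - width + 1).
    # np.convolve flips the kernel, so reverse it to get CNN-style correlation.
    return np.stack([np.convolve(x, k[::-1], mode="valid") for k in kernels])

def rnn(features, Wx, Wh):
    # features: (K, T') -> final hidden state of a simple tanh RNN
    h = np.zeros(Wh.shape[0])
    for t in range(features.shape[1]):
        h = np.tanh(Wx @ features[:, t] + Wh @ h)
    return h

x = rng.standard_normal(256)           # one EEG channel, 256 samples (toy)
kernels = rng.standard_normal((4, 9))  # 4 "learned" CNN filters of width 9
Wx = rng.standard_normal((8, 4))
Wh = rng.standard_normal((8, 8))

feats = conv1d(x, kernels)             # CNN stage: local waveform features
h = rnn(feats, Wx, Wh)                 # RNN stage: temporal integration
p = 1 / (1 + np.exp(-h.sum()))         # toy probability readout
print(feats.shape, h.shape)            # (4, 248) (8,)
```

The CNN stage extracts local morphology per window and the RNN stage integrates those features over time -- which is exactly the division of labor that makes CNN+RNN hybrids attractive for EEG.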
Yeah! To clarify, I've said before elsewhere -- and agree with your assessment -- that the role of AI in medicine (especially in rads, where it's the talk of the town) will be more of a utility, especially at first. That is, radiologists could read more images because the models would highlight the important parts, while keeping the radiologist responsible for accepting or declining each prediction (which would further train the model!). I think sleep and EEG are much simpler and less noisy problems.
Agreed that the PhDs working on this stuff have a good grip on things. And I would also agree that clinician-scientists are a critical part of the process.
Also, a review article from 2019 is already quite dated in this field, I would say, right? PubMed shows twice as many papers with the keyword "machine learning" in 2020 as in 2018. The growth in knowledge looks exponential.