Yeah man according to your exponential model, in 50 years we’ll all be sentient gasclouds in space.
Still gonna be reading plainfilms tho 🙄.
The point was always that efficiency growth is not exponential. While there's still some efficiency to be squeezed out, there's a growing sentiment among computational scientists (which I'm sure matches all our anecdotal experience) that AI really hasn't gotten much better over the past several years.
I don't. It's akin to saying we'll have faster-than-light travel soon because look at how fast we went from the first flight to landing on the moon.
Maybe I should reframe: regardless of whether AI can feasibly surpass human abilities in all things long-term without monstrous infrastructure/cost investment, implementation will be so slow that it won't make a functional difference to career physicians anyway. If only because the tedious process of verifying the safety of these things (assuming they work, which they currently don't, and there is good reason to believe they won't soon) takes decades.
Put another way, supercomputers have been able to solve geometry problems for a while. The problem isn't whether they can solve them, the problem is whether it's cheaper. I could build a multimillion-dollar supercomputer, or I could just, you know, hire a mathematician salaried at $90k/yr. I don't question whether particular problems can be solved with enough resource investment, I question the financial feasibility of the approach. And despite planned or R&D'd supercomputer builds, the best datapoint for estimating future expenses is recent expenses, barring an upheaval in how components are designed. We don't have such an upheaval on the horizon, save quantum computing, which I can personally guarantee you will not help with the CNN problem.
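The cost argument above can be sketched as a back-of-the-envelope break-even comparison. Everything here except the $90k salary is a hypothetical placeholder I'm making up for illustration, not a real quote:

```python
# Break-even sketch: supercomputer vs. salaried mathematician.
# All machine figures below are assumed placeholders, not real costs.

SUPERCOMPUTER_CAPEX = 5_000_000   # assumed one-time build cost ($)
SUPERCOMPUTER_OPEX = 400_000      # assumed yearly power/maintenance ($)
MATHEMATICIAN_SALARY = 90_000     # salary figure from the comment ($/yr)

def machine_cost(years: int) -> int:
    """Cumulative spend on the supercomputer after `years` years."""
    return SUPERCOMPUTER_CAPEX + SUPERCOMPUTER_OPEX * years

def human_cost(years: int) -> int:
    """Cumulative spend on the mathematician after `years` years."""
    return MATHEMATICIAN_SALARY * years

# Under these assumptions the machine never breaks even: its yearly
# opex alone exceeds the salary, so the gap only widens over time.
for years in (1, 5, 10, 20):
    print(f"{years:>2} yr: machine ${machine_cost(years):>10,}"
          f"  vs  human ${human_cost(years):>9,}")
```

The exact numbers don't matter; the point is that unless both capex and opex drop below the salary line, "can it be solved" is the wrong question and "is it cheaper" is the right one.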