Most of those things are either experimental, or only useful in a highly structured environment.
AI is coming, but the current publicly available crop (outside specialty tasks) makes lots of mistakes, so it's only useful in places where those mistakes can be tolerated. Maybe that changes 6 months from now. I'd rather trust Derek Lowe's analysis of where biochemical AI is currently...and his analysis is "it needs better data!".
One shouldn't blindly trust news stories. They are always slanted. Sometimes you can figure out the slant, but even so that markedly increases the size of the error bars.
OTOH, AI *is* changing rapidly. I don't think a linear model is valid, except as a "lower bound". Some folks have pointed to work that China has claimed as "building the technology leading to a fast takeoff". Naturally details aren't available, only general statements. "Distributed training over a large dataset" and "running on an assembly of heterogeneous computers" can mean all sorts of things, but it MIGHT be something impressive (i.e. super-exponential). Or it might not. Most US companies are being relatively close-mouthed about their technologies, and usually only talking (at least publicly) about their capitalization.