We've been going through this since the 1980s, when we started building rule-based expert systems and putting them into production. We called that AI too. Now we're doing the same with statistical machine 'intelligence' (often just optimisation), various configurations of trainable neural networks, and some hybrids.
These are trainable appliances, not intelligences. They lack a human's adaptability and capacity to recover from mistakes, and (in the case of statistical, sub-symbolic approaches) they have no explanatory power. To some extent, that's why I liked the ancient expert systems with a "why?" function, though they were also very brittle. So I think the current hype curve has inflected, and that's a good thing, since, quite apart from the technical limitations, there are some weighty ethical problems as well.
This is not the view of a neo-Luddite, but there's genuinely stuff to think about here.