All kinds of AI have teething pains, during which the problems are obvious and comical (the Apple Newton's handwriting recognition being a case in point). At the same time, the achievements of modern AI are amazing--but also troubling.
When I think of AI as envisioned in the 1950s--Isaac Asimov's Multivac, or his robots, perhaps--the assumption was that AI would closely resemble human intelligence. For example, it was implicit that robots would answer questions by actually understanding them. What we are seeing today evokes an analogy with technologies like the sewing machine. Early efforts attempted to sew the same way humans did, and failed. Singer's brilliant idea was a method of using thread to fasten two pieces of cloth that did not resemble human sewing or even use the same stitch.
A Google search is within striking distance of Multivac. You type in a question and you get a useful answer. The interesting thing is that most modern AI is shoddy. It goes halfway. It gives you something that's inaccurate, yet useful. But the key thing is that you are expected to use your human intelligence to get the rest of the way and correct mistakes. In the case of Google, you do this by looking at ten or a hundred search results, for example--and reformulating the question if you don't get the right answer.
Perhaps one of the things that early AI pioneers missed is that modern AI relies more on having huge databases of information than would have even been imaginable in the 1950s and 1960s, and less on AI actually mimicking human intelligence.
This is not a problem when it is all out in the open: the AI is offering you something to look at, not making decisions for you, and everything comes in the nature of help or suggestions rather than direct action.
It becomes far more serious when it is happening behind the scenes--when AI is deciding whether you get a loan, or pass an essay test on an exam, or get onto a terrorist watchlist.