Yes, they are "just" programs, but programs that encode learning processes and produce structures that know how to do something without a human ever encoding the knowledge itself. The result of learning, however, is quite unpredictable. Even if the process is deterministic (it isn't always; noise is sometimes added to encourage exploration of a wider solution space), its complexity is so high that it's impossible to predict the end result short of actually executing it.
It's AI in the sense that if as a human you try to analyze the end result, the trained learning structure, you will have no way (yet!) of actually grasping how the hell the knowledge is encoded in it. It's just an emergent property of a myriad of weights between nodes in a large data matrix. It'd be as futile as trying to make sense of a high-resolution picture by looking at individual pixels, or to understand a person's thoughts by looking at the excitatory/inhibitory behavior at the junctions of neurons in their brain.
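To make that concrete, here's a toy sketch (my own illustration, not taken from any real system): a tiny NumPy network trained on XOR. Nothing in the program spells out the XOR rule; after training, whatever "knowledge" exists lives entirely in the weight matrices, and staring at the printed numbers tells you essentially nothing about how it works.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: note that no line of this program encodes "output 1 iff the inputs differ".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two small weight matrices (plus biases) -- this is where the learned "knowledge" ends up.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(30000):
    # Forward pass through the two layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: hand-derived gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should end up close to [0, 1, 1, 0]: it has learned XOR
print(np.round(W1, 2))           # ...but the weights themselves are just opaque numbers
print(np.round(W2.ravel(), 2))
```

Scale those two matrices up to billions of entries and you get the "high-resolution picture" problem: each individual weight is perfectly inspectable, and collectively they tell you nothing.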
So, it's not really the programs that are the essential component of the AI here. They're well understood (analogous to how we have an understanding of the biology of the human brain). The essential component is the emergent knowledge encoding that these programs create as they run and learn.
I can imagine that over time it'll also become a rich research area to come up with analysis methods for figuring out what's going on inside these knowledge representations (like psychoanalysis, but for AI). I sure hope we figure that out before we let such systems near cars, airplanes, power plants, or stock markets.