Yes, there is a lot of programming involved today, but Watson is not just a search engine. So-called "deep learning" algorithms are really neural network simulations. They are programmed because most people don't have neural chips, so to create neural networks, people have to code them as simulations. That's why IBM has now developed its "TrueNorth" chip, with a million silicon neurons per chip. These are not programmed - they learn - and they run 1000 times faster (and with far less power) than equivalent simulations. Also, if you look at the code for a deep learning neural network simulation, you will see that it is an implementation of neural network topologies (e.g., a cascade of restricted Boltzmann machines) and training algorithms (e.g., contrastive divergence). You can't work on that code unless you understand those algorithms; the real work is in developing and fine-tuning the algorithms, not in the coding. Most of the people who work on that code are PhDs in AI, not programmers per se.
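To give a feel for what that kind of code looks like, here is a minimal sketch of one step of contrastive divergence (CD-1) training for a binary restricted Boltzmann machine, written in Python with numpy. All the names, sizes, and the learning rate are illustrative choices of mine, not anything from Watson or IBM's actual code, and biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One CD-1 update for a binary RBM (biases omitted for brevity).

    W:  (n_visible, n_hidden) weight matrix
    v0: (batch, n_visible) batch of binary visible vectors
    """
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct the visibles, then recompute hidden probs.
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # Approximate gradient: <v h>_data minus <v h>_reconstruction.
    grad = (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
    return W + lr * grad

# Toy training loop on a tiny random dataset.
data = rng.integers(0, 2, size=(8, 6)).astype(float)
W = rng.normal(0.0, 0.01, size=(6, 4))
for _ in range(100):
    W = cd1_step(W, data)
```

The point is that the numpy calls are trivial; understanding why the positive and negative phases approximate the log-likelihood gradient is the hard part, and that is algorithm knowledge, not programming knowledge.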
It is also true that there is a lot of "glue code" needed to make these systems scale, but that is the type of code I think will eventually be replaced by machine learning systems. But I don't have a crystal ball - we shall see!