*sigh* Someone doesn't understand the Singularity theory. Based on the title I'm guessing it's the professor, but since I can't actually read the article at work, it's possible it's the Slashdot editor who conflated lack of AI with lack of Singularity.
The basic premise of the Singularity is that over historic time periods the rate of knowledge acquisition of the human race has increased at a geometric rate.
The reason this has happened is that acquiring knowledge allows us to develop tools that let us build upon pre-existing knowledge to make new discoveries, which allow better tools, and so on. (Although it's far from a perfect simulation, anyone who's ever played Civilization or any similar strategy game should know that process by heart.)
There are two possible outcomes to this progression: either we hit some rate-limiting factor sometime in the "near" future, or the rate of knowledge acquisition over time will approach a mathematical singularity, at which point we will be discovering things so fast that our current minds cannot comprehend what will happen. Obviously proponents of the Singularity believe the second possibility is what will happen.
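To make the "mathematical singularity" part concrete, here's a toy numerical sketch (my own illustration; the growth laws are assumptions, not anything from the article). Plain exponential growth never diverges in finite time, but growth whose rate feeds back super-linearly on the knowledge already accumulated does: dk/dt = k^2 starting from k = 1 blows up at t = 1, no matter how high you set the finish line.

```python
# Toy comparison (illustrative assumption, not the article's model):
#   exponential growth   dk/dt = k     -- never a finite-time singularity
#   hyperbolic growth    dk/dt = k**2  -- true singularity at t = 1/k0

def time_to_exceed(rate, k0=1.0, threshold=1e9, dt=1e-4):
    """Euler-integrate dk/dt = rate(k) from k0 until k passes threshold."""
    k, t = k0, 0.0
    while k < threshold:
        k += rate(k) * dt
        t += dt
    return t

exp_t = time_to_exceed(lambda k: k)      # time grows with log(threshold)
hyp_t = time_to_exceed(lambda k: k * k)  # time approaches 1/k0 regardless of threshold

print(exp_t)  # roughly 20.7 -- raise the threshold and this keeps climbing
print(hyp_t)  # roughly 1.0 -- raise the threshold and this barely moves
```

The point of the sketch: arguing over whether growth is "fast" misses the structural question, which is whether each unit of knowledge accelerates the acquisition of the next strongly enough for the curve to go vertical in finite time.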
However, the theory of the Singularity makes no prescriptions about _how_ we will obtain that rate of knowledge. Certainly Artificial Intelligence is one such way, but direct augmentation of our brains is another possibility. Whether that will come via cybernetic implants, biomedical alterations, genetic tinkering, or something we haven't thought of (and possibly can't) is impossible to say at this point.
Up until now, of course, tools have allowed us to indirectly augment our brains. Writing lets us record information. The internet lets us retrieve information. Now Watson helps us interpret that information. Yes, Watson isn't doing anything on its own; Watson is just a tool we use. But tools that help us accomplish things we couldn't before are exactly what moves us along the path towards the Singularity.
As has been pointed out, there was just recently news about replicating a worm's mind in a mechanical body. Yes, it's very interesting, but no, it isn't a perfect recreation of an actual brain. But maybe when that paper gets scanned into Watson 2.0, it will make some connection to some other paper on artificial neurons or some such, and Watson will let the authors know that they really ought to talk to each other. And boom, we're suddenly creating real artificial minds. Or maybe something else happens. The whole point is that we don't know what the next step will be; we're just observing a trend.
If you want to argue against the Singularity, you can't just pick holes in the prospects for AI. You need to explain why the current trend in knowledge acquisition won't continue.