As a cognitive scientist (if that is indeed true), you really should do a little more research (beyond Hofstadter).
AI (and AGI in particular) does not necessarily imply imitating humans. It's a bit of an anthropocentric slant to assume that intelligence equates to the human implementation of intelligence. If a machine can exhibit the main features of intelligence (inference, learning, goal seeking, planning, etc., and other factors depending on your definition of intelligence), then it is, by definition, intelligent. The goal-seeking/planning piece, for instance, is something machines have done for decades; see the sketch below.
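To make that concrete, here's a minimal sketch in Python of the goal-seeking/planning feature: a plain breadth-first search that finds a shortest sequence of states reaching a goal. The names (plan, grid_moves) and the toy 4x4 grid are invented for illustration; this is the textbook technique, not any particular system:

```python
from collections import deque

def plan(start, goal, neighbors):
    """Breadth-first search: return a shortest sequence of states from
    start to goal, or None if the goal is unreachable."""
    frontier = deque([start])
    parent = {start: None}          # also doubles as the visited set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []               # walk parents back to reconstruct the plan
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

def grid_moves(pos):
    """Toy state space: legal single-step moves on a 4x4 grid."""
    x, y = pos
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < 4 and 0 <= b < 4]

print(plan((0, 0), (3, 3), grid_moves))
# -> a 7-state shortest path, e.g. [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
```

Trivial, yes, but it is unambiguously a machine pursuing a goal by planning, with no human imitation anywhere in sight.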
Your "Who was the cowboy in Washington?" argument is a straw-man, as you can see from the posts here, most humans didn't even get the subtle references. Watson actually did pretty well in putting together vague references as this is an integral part of the Jeopardy Q/A scheme, even to the point that it was able to best the two top humans at doing this.
To imply that AI has not made advances over the years is pure hogwash. Were capabilities such as Deep Blue, Watson, or Siri available 50 years ago? I think not. AI has been advancing steadily, despite not living up to the hype and despite not yet achieving true "human-level intelligence" (note that this is vastly different from imitating humans, who by some measures fall far short of intelligence themselves). In case you've been living in a cave, advances in AI have been accelerating over the last decade, and the nexus between computing power and the various disciplines of cognitive science (neuroscience, psychology, biology, etc., along with their computational counterparts) is producing results at an ever faster pace.
You go right ahead and keep reading Hofstadter and his ilk, while the rest of us keep pushing the envelope of machine IQ. One day, probably within the next 10-20 years, you can continue this debate with your cell phone (or whatever personal device will have replaced it by then).