The reason the story is interesting to non-statisticians is that it appears to prove anti-Tea Party stereotypes wrong.
No, they're not. Bad analysis => cannot draw conclusions either way.
I have focused on Simpson's paradox in this thread because somebody else brought up controlling for education level, but it's not the only problem I noticed. I don't want to go into a deep technical discussion of p-values and their interpretation, but I'll leave you with this thought: even with purely random data, some proportion of p-values will fall below your "critical threshold" alpha through sheer chance - by definition, alpha is the false-positive rate for classifying effects as significant. So if you try out a whole bunch of models at random, some of them will meet the alpha threshold even though no real effect exists.

The Yale professor strongly implied that this was his methodology: he took a data set gathered for other purposes and tried things out until he got an interesting "significant" result. The fact that it's a "controversial" result is getting him lots of media attention. In the long run he may or may not turn out to be right, but this isn't good science.
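To see why this matters, here's a minimal simulation sketch (stdlib Python only, all names and parameters my own invention): run many two-group comparisons where both groups are drawn from the *same* distribution, so every "significant" result is a false positive. Roughly an alpha fraction of the tests come out "significant" anyway.

```python
import math
import random
import statistics

random.seed(42)

ALPHA = 0.05       # significance threshold
N_TESTS = 2000     # number of independent "studies" on pure noise
N_PER_GROUP = 30   # sample size per group

false_positives = 0
for _ in range(N_TESTS):
    # Both groups come from the SAME distribution: there is no real effect.
    a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]

    # Welch-style t statistic for the difference in means.
    se = math.sqrt(statistics.variance(a) / N_PER_GROUP
                   + statistics.variance(b) / N_PER_GROUP)
    t = (statistics.mean(a) - statistics.mean(b)) / se

    # With ~58 degrees of freedom the t distribution is close to normal,
    # so approximate the two-sided p-value with the normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    if p < ALPHA:
        false_positives += 1

print(false_positives / N_TESTS)  # roughly 0.05, i.e. roughly alpha
```

Run 2000 tests on noise and you get on the order of 100 "significant" results for free - which is exactly why fishing through one data set with many models until something clears alpha proves nothing.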
Bottom line: since the analysis was done improperly (in several ways), you can't actually draw conclusions either way.