Oh you poor misinformed man. We absolutely do not have processing power equivalent to a human brain. We can outperform it on one TYPE of task, but we are not even close in the general case (and even then it requires supercomputers and distributed computing clusters). The main problems are learning, highly abstract reasoning (i.e. logical leaps), and, oddly enough, some of our more "mundane" abilities such as speech. Unsupervised learning is so incredibly hard in AI because there really isn't any way to signal what is correct and what is not correct within the current context of an AI. For one to actually surpass us we would have to impart all of our specific knowledge and exact models to the AI first, and even then it would be very difficult to map out. IBM's Watson is probably the most advanced attempt at imparting all of our knowledge, and it still can't handle anything on that level. Almost all programming is done in one of two ways: either we tell the computer how to obtain the correct answer, or we define parameters for what a correct answer looks like. When the machine has no guidelines and has to decide what is right, wrong, or even useful, things get really confusing and complicated for it.
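To make that "two ways of programming" point concrete, here's a toy sketch (my own example, nothing to do with Watson). Same problem, square root of 2, solved both ways: once by spelling out the procedure, once by only defining what a correct answer looks like and letting the machine search.

```python
# Two classic ways we "program" a solution.
# Toy task: compute the square root of 2. Function names are mine.

# 1) Tell the machine HOW: an explicit step-by-step procedure (Newton's method).
def sqrt_by_procedure(n, steps=20):
    x = float(n)
    for _ in range(steps):
        x = (x + n / x) / 2  # every step is spelled out for the machine
    return x

# 2) Tell the machine WHAT a correct answer looks like, then let it search
#    (bisection). All we supply is the correctness check mid*mid < n.
def sqrt_by_search(n, tolerance=1e-9):
    lo, hi = 0.0, max(1.0, float(n))
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if mid * mid < n:  # the "parameters of a correct answer"
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(sqrt_by_procedure(2), 6))  # 1.414214
print(round(sqrt_by_search(2), 6))     # 1.414214
```

Unsupervised learning is the nightmare third case: the machine gets neither the procedure nor the correctness check, and has to invent both.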
In the Wikipedia article on Watson they even point out that it had trouble with questions containing only a few terms, which shows they were not able to take into account much of the context of the question, or how a human would naturally phrase it to another human. The machine was specifically designed to be a giant query bot, and it still had problems: if it didn't have enough keywords or a long enough sentence to do decent natural language processing, it bombed out. That ties directly into abstract reasoning. Machines work in a very step-by-step logic model; they don't handle skipping steps at all, and when a problem becomes insanely large (again, see Wikipedia on combinatorial explosion for a quick reference), the AI pretty much loses its shit. These are also systems that have been designed for ONE particular task, and while in a lot of cases they alone can outperform a human at it, that is the only thing they can outperform in, while the human can do thousands of other tasks.
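If you want a quick feel for combinatorial explosion, here's a tiny snippet (my own illustration). Counting distinct round-trip routes through n cities, traveling-salesman style, gives (n-1)!/2 possibilities, and brute-force search dies almost immediately:

```python
import math

# Brute-force search space for visiting n cities and returning home:
# (n-1)! orderings, divided by 2 because each route and its reverse
# are the same tour. Watch it explode.
for n in (5, 10, 15, 20):
    routes = math.factorial(n - 1) // 2
    print(f"{n} cities -> {routes:,} distinct routes")
```

At 5 cities there are 12 routes; at 20 cities there are over 60 quadrillion. A step-by-step "check every option" machine has no chance, which is exactly where a human's ability to leap over steps pays off.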
Finally, my last point about speech is less AI-related and more about showing how much computational power the human brain actually has. Robotics specifically has had serious issues with a lot of the human aspects of speech and conversation (I couldn't find any good links; I read several articles and had discussions on this back in college, but those are kind of walled behind university stuff...), such that it takes a massive amount of a robot's processing power to perform these functions. Even layering on systems to try to brute-force the problem, "creating" natural speech (sort of a Turing test, actually) and then having the bot speak it, caused some ridiculous problems.
Our models and algorithms for creating these kinds of "dangerous AI" are so hilariously far behind what the tin-foil-hat community believes that we will probably be dancing on a terraformed planet several galaxies away before we actually get it figured out. Unless someone stupidly stumbles across the correct "voodoo" spell of an algorithm for truly efficient and complete machine learning, it is hard to model something when WE don't even fully understand how the human brain works (see neuroscience and psychology).
Full disclosure: I am a computer scientist/software engineer who has actually had some education on the subject of AI. It was a small focus of mine in college, out of curiosity, but then I saw I would be better served by other focuses, so I just kept up with it on the side.