Then again, the PHP code had to be served through Apache, while the C code was served directly by a custom server sitting on a separate socket, so there's no telling how much of the overhead came from Apache rather than from the code itself.
My thoughts exactly. Is the bottleneck the webserver or the actual code?
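One way to separate the two is to time the application logic by itself, then time the same logic served over HTTP on localhost; the gap is roughly the per-request server overhead. This is just an illustrative sketch in Python (the toy handler_logic function and request count are hypothetical, not anything from the benchmark being discussed):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def handler_logic():
    # Stand-in for the "actual code": build a small response body.
    return b"".join(str(i).encode() for i in range(100))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = handler_logic()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging so it doesn't skew the timing.
        pass

N = 200

# 1) Time the application code alone, no webserver involved.
t0 = time.perf_counter()
for _ in range(N):
    handler_logic()
direct = time.perf_counter() - t0

# 2) Time the same code behind an HTTP server on the loopback interface.
srv = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_port}/"
t0 = time.perf_counter()
for _ in range(N):
    urllib.request.urlopen(url).read()
via_http = time.perf_counter() - t0
srv.shutdown()

print(f"code alone: {direct:.4f}s  via HTTP: {via_http:.4f}s")
```

The HTTP pass will be slower even on localhost, since it pays for socket setup, request parsing, and header writing on every request; a full Apache stack (process/thread dispatch, modules, logging) adds more on top, which is the uncertainty being pointed out here.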
Seems more likely that the number of servers has to do with the massive volume of requests that have to be handled, so they would need several web farms, and since they have users all over the world they would have more than one data center.
From a source for the article:
Given its global user population, Facebook eventually had to move to replicating its content across multiple data centers. Facebook now runs two large data centers, one on the West coast of the US and one on the East coast.
So cut your 30,000 servers in half to 15,000 servers per data center.