1. It can specify other needed files in the header, so the browser can start loading them before it receives the HTML page.
This is only a benefit if the resources aren't already cached locally, so the browser could start loading a .png, for instance, while the main page's HTML downloads. It's a cost, not a benefit, if the resource is already in memory. And browsers could get most of this anyway by simply remembering which resources a page used last time, even if the page itself isn't cached. So this is an extremely marginal benefit, bought at the cost of some complexity and bandwidth.
2. It can send more than one file at a time. It can mux multiple files, while pipelining only allows you to transfer files one at a time.
If multiple files are multiplexed and the connection drops, multiple files have to be restarted mid-transfer instead of just one, and dynamically generated resources need to be redone from scratch. The only practical benefit of sending multiple files at once is that the browser can start processing them with only partial data (given an image's header, for instance, it could allocate the memory for the image). That again is very marginal compared to the transfer time, and it's more complex.
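To make that partial-data point concrete, here's a rough Python sketch of what "processing with just an image header" could look like: pulling the width and height out of the first 24 bytes of a PNG stream (offsets per the PNG file format) before the rest of the image has arrived. It's an illustration, not anything SPDY actually specifies.

```python
import struct

def png_dimensions(prefix: bytes):
    """Read width/height from the first 24 bytes of a PNG stream,
    so memory could be allocated before the full file arrives."""
    sig = b"\x89PNG\r\n\x1a\n"  # 8-byte PNG signature
    if not prefix.startswith(sig) or len(prefix) < 24:
        return None
    # After the signature: 4-byte chunk length, the chunk type b"IHDR",
    # then 4-byte big-endian width and height.
    if prefix[12:16] != b"IHDR":
        return None
    width, height = struct.unpack(">II", prefix[16:24])
    return width, height
```

Even this best case only saves an allocation, which supports the point: it's marginal next to the transfer time.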
3. It can prioritize some files over others.
So can pipelining (in general, not "HTTP pipelining" specifically). The browser sends the server a list of resources; the server replies with whole files in, say, smallest-first order. The difference is that if you send whole files pipelined, you don't have to explicitly state priorities or have a mechanism for setting and adjusting them, so pipelining is less complex.
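A toy sketch of what I mean, in Python (the function name and in-memory store are made up for illustration): the server collects the requested files and streams them back whole, smallest first, so nothing small waits behind something big.

```python
def respond_pipelined(requests, store):
    """Yield (name, body) pairs as whole files, smallest first,
    so small resources never queue behind a large one."""
    available = [(name, store[name]) for name in requests]
    for name, body in sorted(available, key=lambda nb: len(nb[1])):
        yield name, body
```

No priority fields, no priority-adjustment frames; the ordering policy lives entirely on the server.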
4. It can compress the HTTP headers. Modern browsers send their life history, the weather outside, everything that has ever been installed on the computer, every language / encoding they can possibly use, cookies, referer, when they last saw the content, what the etag of that content was, and so on. Header compression can actually make a difference.
So can SSL, which SPDY basically requires anyway because of proxies. Enable deflate compression over SSL and it compresses headers.
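Quick illustration with Python's zlib (deflate); the header values here are made up but typical of what browsers actually send, and even this small blob compresses substantially:

```python
import zlib

# A representative modern request; the headers alone run to hundreds of bytes.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox\r\n"
    b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    b"Accept-Language: en-US,en;q=0.5\r\n"
    b"Accept-Encoding: gzip, deflate\r\n"
    b"Cookie: session=abc123; prefs=dark; tracking=xyz789\r\n"
    b"Referer: http://www.example.com/\r\n"
    b"If-None-Match: \"686897696a7c876b7e\"\r\n"
    b"\r\n"
)
compressed = zlib.compress(headers, 9)  # deflate at max compression
```

The point stands: deflate over the SSL layer gets you this without any SPDY-specific machinery.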
It's really not that interesting. They make a lot of claims without sources or any supporting data and won't answer questions about them; see for instance the threads here or on reddit where Google employees post about SPDY (i.e. are clearly reading the thread) but won't answer questions about methods and sources or defend their theories.
The problem that SPDY is supposed to solve in its overly complicated way has a simple solution: tweak HTTP pipelining so that the server can respond in any order it wants. Then servers can send resources when they are available, in smallest-first order, or however they like, and the pipeline doesn't block on ad.doubleclick.net (it's just sent last, when it has finally finished profiling you). That's all there is to it. The HTTP designers didn't do it because they didn't want to go that far with changes, instead tacking pipelining on almost as an afterthought. Maybe Google invented SPDY because they are afraid of tweaking HTTP, or think standards move too slowly? I don't know; all I know is that SPDY is bad news.
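For illustration, here's a rough Python sketch of that tweak. Since responses can now arrive out of request order, each one has to name the request it answers; the `X-Answers` tag is something I made up for the sketch, not a real header. The slow ad resource naturally lands last.

```python
def serve_any_order(requests, fetch):
    """Respond to pipelined requests in whatever order suits the server
    (here: smallest-first), tagging each response with its request URL
    via a hypothetical X-Answers line so the client can match them up."""
    done = sorted(((url, fetch(url)) for url in requests),
                  key=lambda pair: len(pair[1]))
    for url, body in done:
        yield f"X-Answers: {url}".encode() + b"\r\n" + body
```

One extra line per response, and head-of-line blocking is gone without any framing layer.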