Q: Doesn't HTTP pipelining already solve the latency problem? A: No. While pipelining does allow for multiple requests to be sent in parallel over a single TCP stream, it is still but a single stream. Any delays in the processing of anything in the stream (either a long request at the head-of-line or packet loss) will delay the entire stream.
This does not make sense. You're still using TCP, a reliable transport protocol, so packet loss is dealt with at the TCP level and is never seen by SPDY. The effect of "delaying the entire stream" is therefore exactly the same as with HTTP. The only difference is that you're using fewer TCP connections (one instead of several; in fact, this is one of your selling points!), so the probability that a request is delayed by packet loss on an *unrelated* request actually *increases*: a loss stalls all subsequent traffic on SPDY, since everything shares a single TCP connection, while with HTTP it only stalls the traffic on the one affected connection (out of several).
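A toy calculation makes the point concrete (this is my own sketch, not anything from the SPDY docs): assume each request needs k packets, each packet is lost independently with probability p, and a TCP-level loss stalls the whole connection until the retransmission arrives, so a request is delayed if *any* packet on its connection is lost.

    # Toy model (my own assumptions, not from the SPDY docs): n requests,
    # k packets per request, independent per-packet loss probability p.
    # A TCP-level loss stalls every request sharing that connection until
    # the retransmission arrives.

    def p_request_delayed(packets_on_connection, p):
        """Probability that at least one packet on the connection is lost."""
        return 1.0 - (1.0 - p) ** packets_on_connection

    n, k, p = 6, 10, 0.01  # 6 requests, 10 packets each, 1% loss

    # One shared connection (SPDY-style multiplexing): a loss anywhere
    # among all n*k packets delays every request on the connection.
    shared = p_request_delayed(n * k, p)

    # One connection per request (HTTP-style): only a loss among the
    # request's own k packets delays it.
    separate = p_request_delayed(k, p)

    print("P(delayed), shared connection: %.3f" % shared)    # ~0.452
    print("P(delayed), own connection:    %.3f" % separate)  # ~0.096

With these made-up numbers a given request is roughly five times more likely to hit a stall on the shared connection, which is the commenter's point. It deliberately ignores that SPDY also sends fewer packets in total, which is the counterpoint quoted next.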
In the real world, packet loss rates are typically 1-2%, and RTTs average 50-100 ms in the U.S. The reasons that SPDY does better as packet loss rates increase are several: SPDY sends ~40% fewer packets than HTTP, which means fewer packets are affected by loss.
But the packets are bigger. If packets are lost due to noise, increasing the size of a packet increases the probability of a bit error landing within it. 10 dollars says you "tested" this in a simulation by fixing the probability of losing a packet, instead of fixing the error distribution. That will overestimate the improvement from sending fewer, larger packets.
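The objection is easy to make concrete. Here is a back-of-the-envelope sketch (mine, with invented numbers) of the two simulation choices being contrasted: a fixed per-packet loss probability versus a fixed bit-error rate.

    # Compare two loss models for "many small packets" (HTTP) versus
    # "fewer, bigger packets" (SPDY). All numbers are illustrative.

    def expected_losses_fixed_p(n_packets, p_loss):
        """Expected lost packets when every packet is lost with the same
        probability, regardless of its size."""
        return n_packets * p_loss

    def expected_losses_fixed_ber(n_packets, size_bytes, ber):
        """Expected lost packets under a fixed bit-error rate: a packet
        survives only if every one of its bits arrives intact, so bigger
        packets are lost more often."""
        p_loss = 1.0 - (1.0 - ber) ** (8 * size_bytes)
        return n_packets * p_loss

    payload = 600000                 # same 600 kB transfer either way
    http = (1000, payload // 1000)   # 1000 packets of 600 bytes
    spdy = (600, payload // 600)     # ~40% fewer packets of 1000 bytes

    for name, (n, size) in (("HTTP", http), ("SPDY", spdy)):
        print("%s  fixed-p: %.1f lost   fixed-BER: %.1f lost" % (
            name,
            expected_losses_fixed_p(n, 0.01),
            expected_losses_fixed_ber(n, size, 2e-6)))

Under the fixed per-packet model the 40% packet reduction translates straight into 40% fewer loss events (10.0 vs 6.0 above). Under the fixed bit-error model the expected loss events come out nearly equal (~9.6 vs ~9.5), because the total number of bits on the wire is the same and the larger packets are each more likely to be hit, so the claimed advantage mostly evaporates. Which model better matches the real Internet, where much of the loss is congestion rather than noise, is a separate question.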