First, routers discard packets somewhat randomly, causing some transmissions to stall.
While it is true that whether a particular packet gets discarded is the result of a probabilistic process, it is unfair to call it "random". Given a model of the queue inside the router and estimates of the input parameters, the probability of a packet being discarded can be calculated. In fact, that's how routers are designed: you pick a range of traffic scenarios, decide how often you can afford to drop packets, then design a queueing system to meet those requirements. Queueing theory is a well-established field (the de facto standard textbook was written in 1970!) and networking is one of its biggest applications.
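To make that concrete: for the simplest textbook model, an M/M/1/K queue (Poisson arrivals, exponential service, a buffer holding K packets), the drop probability has a closed form. This is a minimal sketch of that standard formula, not anything from the article; the specific load and buffer numbers below are illustrative.

```python
from math import isclose

def mm1k_drop_probability(arrival_rate: float, service_rate: float, buffer_size: int) -> float:
    """Probability that an arriving packet is dropped in an M/M/1/K queue.

    By the PASTA property, the drop probability equals the steady-state
    probability that the buffer is full:
        P_K = (1 - rho) * rho**K / (1 - rho**(K + 1)),  rho = lambda / mu
    """
    rho = arrival_rate / service_rate  # offered load
    k = buffer_size
    if isclose(rho, 1.0):
        return 1.0 / (k + 1)  # at rho == 1 all K+1 states are equally likely
    return (1.0 - rho) * rho**k / (1.0 - rho**(k + 1))

# Hypothetical sizing exercise: at 80% load, how big a buffer keeps
# drops below 1 in 1000?  With K = 32 this model says you're there.
p = mm1k_drop_probability(arrival_rate=0.8, service_rate=1.0, buffer_size=32)
print(f"drop probability at 80% load, K=32: {p:.2e}")
```

Real routers are messier than M/M/1/K (traffic is bursty, not Poisson), but the design loop is exactly this shape: pick a load, pick a tolerable drop rate, solve for the buffer.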
Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays
You wouldn't expect uniform delays. A queueing system in which the number of customers in the queue follows a uniform distribution would be a very strange system indeed. Those sorts of systems are usually related to renewal processes and don't often show up in networking applications. That's actually a good thing, because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems.
"Substantial" is the key word here. Effectively, managing "flows" just means that the router caches forwarding decisions keyed on fields like source port, source IP address, etc. Using the cache rather than recomputing the next hop for every packet reduces latency, which in turn reduces how often packets have to sit in the queue. In queueing-theory terms, you are decreasing the mean service time to increase the total service rate. Note, however, that this can backfire: if you increase the variance of the service-time distribution too much (the occasional cache miss costs far more than the average lookup), you will actually decrease performance. Presumably they've done all of this analysis. In essence, "flow management" seems to be the replacement of a FIFO queue with a priority queue in a queueing system, with priority based on caching.
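The variance point can be made concrete with the Pollaczek-Khinchine formula for the mean waiting time in an M/G/1 queue. The hit/miss costs and rates below are made-up numbers chosen to illustrate the backfire, not anything measured from a real router: caching gives a *lower* mean service time than a deterministic lookup, yet the higher variance makes the mean wait *worse*.

```python
def mg1_mean_wait(arrival_rate: float, mean_service: float, service_variance: float) -> float:
    """Pollaczek-Khinchine formula for the mean wait in an M/G/1 queue:
        W = lambda * E[S^2] / (2 * (1 - rho)),
    where E[S^2] = Var(S) + E[S]^2 and rho = lambda * E[S]."""
    rho = arrival_rate * mean_service
    assert rho < 1, "queue is unstable at rho >= 1"
    second_moment = service_variance + mean_service**2
    return arrival_rate * second_moment / (2 * (1 - rho))

lam = 0.7  # arrival rate, same for both designs

# Design A: deterministic per-packet lookup, service time exactly 1.0.
w_fifo = mg1_mean_wait(lam, mean_service=1.0, service_variance=0.0)

# Design B (hypothetical flow cache): 90% hits at cost 0.5, 10% misses at cost 4.0.
mean_cached = 0.9 * 0.5 + 0.1 * 4.0             # 0.85 -- lower mean than design A
second = 0.9 * 0.5**2 + 0.1 * 4.0**2            # E[S^2] = 1.825
var_cached = second - mean_cached**2            # 1.1025 -- much higher variance
w_cached = mg1_mean_wait(lam, mean_cached, var_cached)

print(f"deterministic: W = {w_fifo:.2f}; cached (lower mean, higher variance): W = {w_cached:.2f}")
```

With these numbers the cached design waits longer on average (about 1.58 vs. 1.17 time units) despite doing less work per packet on average, purely because E[S^2] grew faster than the mean shrank. That is the sense in which variance can cancel out the benefit of the cache.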
Personally, I'm not sure how much of a benefit this can provide. Does it work with NAT? How often do you drop packets because of incorrect cached routing, compared with the packets you would have dropped had you simply put them in the queue? If this were a truly novel queueing-theory application I would have expected to see it in an IEEE journal, not Spectrum.
And of course, any time someone opens with "The Internet is broken" you have to be a little skeptical. Routing is a well-studied and complex subject; saying that you've replaced "packets" with "flows" ain't gonna cut it in my book.