I agree, but net traffic peaks don't lend themselves to well-engineered designs: the maximums become absurd. When building a bridge, you design for maximum load times a safety factor (often 10). You put weight points equaling a fleet of big heavy trucks (65,000 lbs GVW) on the bridge model, bumper to bumper, and do static/dynamic load analysis. You model 120-mph winds, or 150, or whatever.
The archetype here is 'slashdotting'. Peak load isn't a value you look up in a handy reference. It isn't an estimate, or '10x the biggest peak you've seen so far'. In the internet age, peak is whatever the fuck the internet is willing to throw at you. I run a tiny site with a few hundred hits per day. When we've published something that got MASSIVE attention, our little '$6/month' shared-hosting Drupal site got half a million hits in the first 12 hours one time, and 120k another.
If my blog were a bridge, it'd be some rural span that sees a car every 4 minutes. A 1-sigma peak is 20 in a minute (wooo!). My site can handle that. At 500k hits in 12 hours, with a local peak of 200k hits in an hour, that's over 3,000 cars per minute. In the car analogy, that's big trucks stacked fifteen deep vertically, forming a third lane up the middle, carrying 25 tons of rocks apiece...
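For anyone who wants to check the arithmetic, here's the back-of-envelope version (the traffic figures are from above; the rounding is mine):

```python
# Baseline for the rural bridge: one "car" (hit) every 4 minutes.
baseline_per_min = 1 / 4                  # 0.25 hits/min
sigma_peak_per_min = 20                   # the "wooo!" peak

# The spike: 500,000 hits over 12 hours, with a local peak of
# 200,000 hits in the worst single hour.
avg_per_min = 500_000 / (12 * 60)         # ~694 hits/min averaged over the spike
peak_per_min = 200_000 / 60               # ~3,333 hits/min at the worst moment

# How far past the best day the site had ever seen:
overload_factor = peak_per_min / sigma_peak_per_min   # ~167x the 1-sigma peak
```

No safety factor of 10 saves you here; the peak lands two-plus orders of magnitude above the normal ceiling.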
Frankly, I'm amazed my little shared-hosting ISP (A Small Orange) still puts up with us after three such nuisances (they even resisted a bogus copyright takedown, just forwarding the issue to me).
Short of Amazon/Rackspace cloud designs, it SUCKS to buy hardware that sits idle. Good engineering in frugal organizations, for stuff like this, is to build conservatively, track load, keep a departmental fund for scaling up when load is consistently too high, and, if you're lucky, have a proxy or dynamic-content-shedding plan in place to keep delivering the key static content. It's not a rack of pizzaboxes for today, when a single app/db pair can dish out the content for the other thousand days of the project's production life.