Of course this is the case. This study is about as exciting as the news that George Michael is gay. There have been plenty of studies to this effect; my company makes tons of money consulting on better storage utilization. [Some Fortune 500 companies I've visited run below 40% utilization.] EMC, IBM, HDS, NetApp and the rest have no real interest in selling you fewer drives. They all make vague, glossy statements about saving storage money, but in reality you need to be wasteful if you want to cover your ass. Think of the things we spend $ on just to get another 9 on the uptime figure: UPS, generators, clustering, DR systems/networks that sit idle, dark fibre between datacenters, RAID 1(+0), RAID 6, tapes, VTLs, storage arrays, redundant Fibre Channel SANs...
From a human perspective, fuzzyfungus is right. Over-engineering is less likely to cost you your job than failure is. Plus, over-engineering is easy to justify.
Some things are just known to cost money if you MUST ensure that the business is not subject to fallibility in hw and sw. The fact that 50 TB of your 200 TB of usable storage sits unused really might not mean much. [Some of the numbers quoted could count the mirrored side of RAID 1 stripes as waste. It's a cheap gimmick to make the numbers look worse, but still true to a certain extent if the performance difference between RAID 5 and RAID 1 isn't needed.] Of course, there is usually low-hanging fruit that can be attacked to save real money and prevent cascading costs in the other cost centers mentioned above, but there will always be waste. It's the cost of five 9's.
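To see how much the "count the mirrors as waste" gimmick moves the number, here's a quick back-of-the-envelope sketch. All figures are hypothetical (the 400 TB raw / 150 TB used numbers are made up to mirror the 50-of-200-TB example above), and the RAID overhead math is deliberately simplified:

```python
# Hypothetical illustration: the same array measured two ways.
# All capacity figures are invented for the example.

def usable_capacity(raw_tb, scheme):
    """Usable capacity for a couple of common RAID layouts (simplified)."""
    if scheme == "raid10":
        # Mirror of stripes: half the raw space holds copies.
        return raw_tb / 2
    if scheme == "raid5":
        # One drive's worth of parity lost per group; assume 8-drive groups.
        group = 8
        return raw_tb * (group - 1) / group
    raise ValueError(f"unknown scheme: {scheme}")

raw = 400.0    # TB of raw disk in the array (hypothetical)
used = 150.0   # TB of actual data on it (hypothetical)

usable = usable_capacity(raw, "raid10")   # 200 TB usable on RAID 10

# Honest view: data divided by usable space.
honest = used / usable

# Gimmick view: data divided by raw disk, so mirror copies count as "waste".
gimmick = used / raw

print(f"usable (RAID 10): {usable:.0f} TB")
print(f"utilization vs usable: {honest:.1%}")   # 75.0%
print(f"utilization vs raw:    {gimmick:.1%}")  # 37.5%
```

Same array, same data: 75% utilized against usable space, but "below 40%" if you measure against raw spindles. Both numbers are true; only one of them tells you whether you can actually defer a disk purchase.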