I'll accept the idea that somewhere somebody has so many servers and so little space that a blade center was the only way they could achieve the density they needed.
Except I've never seen it -- every blade center I've ever seen has been partially full, and the equivalent 1U and 2U servers would probably have fit in the same or less space than the blade chassis was occupying.
And almost always there's a mongolian clusterfuck when they decide to add blades to the chassis -- which they inevitably do, because they have so much money sunk into the blades that there's no way out from under it.
The mongolian clusterfuck is the result of the byzantine configuration rules each vendor has for determining a blade's NIC or FC mapping to the blade center's (overpriced) internal switch bays. Half or full height? LOM or mezzanine slot? Which mezzanine slot? Which blade slot? Oh, you want an extra NIC on that blade? Sorry, the mapping requires an additional switching module that will cost you more than any decent 48-port gigabit L3 switch.
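To give a flavor of it, here's a rough sketch of the kind of port-mapping table you end up reverse-engineering from the vendor docs. The bay numbers are loosely based on how HP's c7000 maps half-height blades; treat them as illustrative, not as a reference.

    # Illustrative sketch of hard-wired vendor port-mapping rules.
    # Bay numbers approximate HP c7000 half-height behavior; they are
    # NOT authoritative -- check your vendor's docs for the real matrix.
    # Each (form factor, adapter location) pair is wired to fixed
    # interconnect bays in the chassis.
    PORT_MAP = {
        ("half-height", "LOM"):   [1, 2],        # embedded NICs -> bays 1, 2
        ("half-height", "mezz1"): [3, 4],        # mezzanine 1  -> bays 3, 4
        ("half-height", "mezz2"): [5, 6, 7, 8],  # mezzanine 2  -> bays 5-8
    }

    def required_bays(form_factor, adapter):
        """Which interconnect bays must contain a switch module for this adapter?"""
        try:
            return PORT_MAP[(form_factor, adapter)]
        except KeyError:
            raise ValueError(f"no mapping for {form_factor}/{adapter}")

    # The punch line: that extra NIC in mezz slot 2 means buying switch
    # modules for bays you had left empty.
    print(required_bays("half-height", "mezz2"))  # -> [5, 6, 7, 8]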
Whatever the savings from the blade center (and maybe in some metered situation there's a power savings of a couple hundred watts) are easily lost in the hours of troubleshooting whenever you try to do something different.
Blade centers always look like some pre-virtualization version of server consolidation, one that became obsolete once 24U of servers could easily be run on 8U or less of VM host and SAN. They would be a lot more interesting if their mapping regimes weren't hard-wired, but whenever I suggest a switchable/configurable backplane, blade advocates give me blah blah "single point of failure."
The HP c-Class isn't that bad. It's been pretty much set-it-and-forget-it. ESX runs off an SD card (or maybe it's just a boot image; there's a VM team that deals with that stuff), and all the datastores are hosted on the SAN. The blades themselves are just compute and memory.
Of course your original argument still stands: I've never seen a case where real estate is at such a premium that blades are the only way to go. Usually it's racks and racks of storage taking up the room, not servers. But for me blades make adding and configuring physical servers easy. No tickets to the storage and networking teams for zoning and VLANs -- I just go into the blade center and connect one to the other.
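For what it's worth, the chassis manager makes that scriptable too. Here's a minimal sketch of pulling the blade inventory over SSH, assuming HP's Onboard Administrator and its "show server list" command; the host, credentials, and exact output format are placeholders and vary by OA firmware.

    # Minimal sketch: list the blades via the chassis manager over SSH.
    # Assumes HP Onboard Administrator; host/credentials are hypothetical
    # and the command syntax may differ on your firmware.
    import paramiko

    OA_HOST = "oa.example.com"  # placeholder chassis management address

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(OA_HOST, username="Administrator", password="changeme")

    # Ask the OA for the blade inventory and dump it line by line.
    stdin, stdout, stderr = ssh.exec_command("show server list")
    for line in stdout:
        print(line.rstrip())  # bay number, blade model, power state, etc.

    ssh.close()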