You haven't worked with large scale virtualization much, have you?
In all fairness, I am not at full-scale virtualization yet either. My experience is with pods of 15 production servers, each with 64 CPU cores, ~500 GB of RAM, and four 10-gig ports (half reserved for redundancy), with bandwidth utilization controlled to remain below 50%. I would consider adding more 10-gig ports, or moving to 40-gig ports, if density increased by a factor of 3, which is probable in a few years: servers will be shipping with 2 to 4 terabytes of RAM and running 200 large VMs per host before too long.
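To make that concrete, here is a back-of-envelope sketch of the port math above. The function name, the redundancy factor, and the utilization cap are my own illustrative assumptions, not anyone's sizing tool:

```python
import math

def ports_needed(demand_gbps, port_gbps=10, util_cap=0.5, redundancy=2):
    """Physical ports required to keep demand under the utilization cap,
    counting the redundant ports as well. Illustrative only."""
    usable = math.ceil(demand_gbps / (port_gbps * util_cap))
    return usable * redundancy

# Today: ~10 Gb/s of demand per host -> 4 x 10-gig ports
print(ports_needed(10))                 # 4
# At 3x density: ~30 Gb/s -> 12 x 10-gig ports...
print(ports_needed(30))                 # 12
# ...or a move to 40-gig makes it manageable again
print(ports_needed(30, port_gbps=40))   # 4
```

The point of the sketch is just that the 3x density jump is what flips the economics from "add 10-gig ports" to "move to 40-gig".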
It is thus unreasonable to pretend that large-scale virtualization doesn't exist, or that organizations will be able, in the long run, to justify avoiding it, or avoiding a cloud solution that is ultimately hosted on large-scale virtualization.
The efficiencies that can be gained from a SDD strategy versus sparse deployment on physical servers are simply too large for management/shareholders to ignore.
However, the network must be capable of delivering 100% of what it claims to have.
We are perfectly content to overallocate CPU, memory, storage, and even network port bandwidth at the server edge. But the network, at a fundamental layer, has to be able to deliver 100% of what is there, just as the SAN needs to deliver, within an order of magnitude, the latency/IOPS and volume capacity the vendor quoted for it. We will intentionally choose to assign more storage than we actually have, BUT that is an informed choice. The risks simply become unacceptable if the lower-level core resources can't make some absolute promises about what exists, and the controller architecture forces us to make an uninformed choice, or to guess how our own network will behave under loads created by completely unrelated networks or VLANs outside our control, e.g. another tenant of the datacenter.
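The "informed choice" part is the crux. A sketch of what informed overallocation looks like for thin-provisioned storage, where the oversubscription ratio is an explicit, monitored number rather than a guess (all names, figures, and the alarm threshold here are assumptions for illustration):

```python
# What the SAN actually has, versus what we have promised out.
PHYSICAL_TB = 100
allocated_volumes_tb = [20, 30, 25, 40, 35]

# The oversubscription ratio is something we chose, and can see.
oversub = sum(allocated_volumes_tb) / PHYSICAL_TB
print(f"oversubscription ratio: {oversub:.1f}x")  # 1.5x

# The choice stays informed because we alarm on real consumption
# long before the promises come due.
consumed_tb = 85
if consumed_tb / PHYSICAL_TB > 0.8:
    print("grow the SAN before allocations are actually consumed")
```

Contrast that with the network case being criticized: there is no equivalent ratio to compute when the load comes from another tenant's VLANs that we cannot even see.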
This is why a central control system for the network is suddenly problematic.
The central controller has suddenly removed fundamental capabilities of the network: to be heavily subscribed, to be fault-isolated within a physical infrastructure (through Layer 2 separation), and, if designed appropriately, to tolerate failures and minimize their impact.