Mod parent up - summary is indeed very biased.
When using virtual machines you lose some control and visibility compared to a traditional pizza-box server. A physical server is easy to pinpoint, and it is easy to implement ACLs (Ethernet/IP), Quality of Service, and traffic monitoring, or simply to shut down a network port.
For VMware, Cisco developed a virtual switch (yes, a downloadable switch!).
About a year ago the Ethernet specifications for data centers got an extension called FCoE, or Fibre Channel over Ethernet ( http://www.t11.org/fcoe ). Basically this allows you to use one Ethernet network for both your LAN and your storage SAN, so you don't need to build out a separate Fibre Channel SAN.
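As a rough illustration, on an FCoE-capable switch such as a Nexus 5000 the convergence boils down to mapping a VSAN onto an Ethernet VLAN and binding a virtual Fibre Channel interface to a physical Ethernet port. A minimal sketch (the VLAN/VSAN numbers and interface names are just examples):

```
feature fcoe
! carry Fibre Channel traffic for VSAN 2 inside VLAN 200
vlan 200
  fcoe vsan 2
! virtual FC interface riding on a 10GbE port toward the server CNA
interface vfc1
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 2 interface vfc1
```

The same 10GbE wire then carries both regular LAN traffic and the FC frames for VSAN 2.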
If you are able to hook up all the rooms to a single switch (e.g. 24 or 48 ports) it's easier! You only need Private VLAN Edge functionality to separate Layer 2 between rooms. Private VLAN Edge can already be found on pure Layer 2 switches like the 2960 or ESW series.
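On those switches, Private VLAN Edge is a single command per interface. A sketch for the access ports of a 24-port 2960 (the interface range is an example):

```
! protected ports cannot exchange Layer 2 traffic with each other,
! only with unprotected ports (e.g. the uplink to the gateway)
interface range GigabitEthernet0/1 - 24
  switchport protected
```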
This attack would easily be prevented by the use of Private VLANs on your network. With PVLANs, clients connected to the LAN can only send Layer 2 frames to the default gateway and other pre-defined shared services such as printing, AD, mail, internet... Private VLANs are typically very handy in shared/public environments such as hotels and public desktops.
How to configure PVLANs on a Cisco Cat 3750 switch:
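A minimal sketch (VLAN numbers and interfaces are examples; PVLANs require VTP transparent mode on this platform):

```
vtp mode transparent
! isolated secondary VLAN: hosts in it cannot talk to each other at L2
vlan 101
  private-vlan isolated
! primary VLAN tied to the isolated secondary VLAN
vlan 100
  private-vlan primary
  private-vlan association 101
! host port: can only reach promiscuous ports
interface GigabitEthernet1/0/2
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101
! promiscuous port: default gateway / shared services live here
interface GigabitEthernet1/0/1
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 100 101
```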
Many other techniques are available to protect a L2 LAN environment:
* DHCP snooping (DHCP trusted/untrusted ports)
* Dynamic ARP inspection
* IP Source Guard
* Port security (sticky MAC addresses) and MAC ACLs
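A sketch of how those features might be combined on an access switch (VLAN and interface numbers are examples):

```
! DHCP snooping builds a binding table of legitimate leases
ip dhcp snooping
ip dhcp snooping vlan 10
! Dynamic ARP inspection validates ARP packets against that table
ip arp inspection vlan 10
! the uplink toward the DHCP server is a trusted port
interface GigabitEthernet1/0/24
  ip dhcp snooping trust
  ip arp inspection trust
! untrusted client-facing access port
interface GigabitEthernet1/0/1
  switchport access vlan 10
  ! IP Source Guard drops traffic not matching the snooping bindings
  ip verify source
  ! port security with sticky MAC learning
  switchport port-security
  switchport port-security mac-address sticky
```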
I work for Cisco, so this post is biased.
If you want to know more about the Intel Nehalem 55xx architecture:
It explains that a server manufacturer using the Intel Nehalem 55xx processor can support up to 3, 6 or 9 DIMMs per socket, corresponding to memory bus speeds of 1333, 1066 or 800MHz respectively. The latter is not often implemented, but would give you (9 x 2 x 8GB) 144GB in a dual-socket system.
What Cisco did is develop a patented "memory switch" which presents up to 4 DIMMs as one to the processor, multiplying the allowed RAM by four. If the memory is running at 1066MHz this gives you 48 DIMMs. If the memory is running at 800MHz this would allow up to 72 DIMMs in one server; the latter has not been implemented.
Where would you ever need this kind of memory?
* Running VMware ESX, XenServer,... and assuming 3-4GB per VM -> imagine 96 VMs per physical box
* Imagine running a 300GB MySQL database out of RAM without the need for a high-end machine
Also, the price per GB is not linear for memory: an 8GB DIMM currently costs way more than four 2GB DIMMs. So if you don't need the full 384GB of memory, you can fill the 48 DIMMs with 2GB modules and have a 96GB RAM server for a lower price.
There are also a lot of other features which really set it apart from the competition, such as centralized management for up to 320 servers. In enterprise environments customers can also consolidate their SAN and LAN networks by using the open-standard FCoE.
Please check it out at Cisco - Unified Computing System
Great IPv6 song!