I work for Cisco, so this post is biased.
First, some background on the Intel Nehalem 55xx memory architecture.
A server built on the Intel Nehalem 55xx processor can support 3, 6 or 9 DIMMs per socket, corresponding to a memory bus speed of 1333, 1066 or 800MHz respectively. The 9-DIMM option is rarely implemented, but would give you 144GB (9 DIMMs x 2 sockets x 8GB) in a dual-socket system.
What Cisco did is develop a patented "memory switch" which presents up to 4 DIMMs as 1 to the processor, multiplying the allowed RAM by four. With the memory running at 1066MHz this gives you 48 DIMMs; at 800MHz it would allow up to 72 DIMMs in one server, although that configuration has not been implemented.
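To make the numbers concrete, here is a quick sanity check of the capacities above. It assumes a dual-socket server with 8GB DIMMs and a 4:1 memory switch (the function name and constants are mine, purely for illustration):

```python
DIMM_SIZE_GB = 8
SOCKETS = 2
EXPANSION = 4  # memory switch: 4 physical DIMMs appear as 1 logical DIMM

def capacity_gb(dimms_per_socket, expansion=1):
    """Total RAM of a dual-socket box, with an optional expansion factor."""
    return dimms_per_socket * SOCKETS * DIMM_SIZE_GB * expansion

# Stock Nehalem 55xx: 3/6/9 DIMMs per socket at 1333/1066/800MHz
print(capacity_gb(9))             # 144GB, the rarely-used 800MHz maximum
# With the memory switch at 1066MHz: 6 logical DIMMs/socket -> 48 physical
print(capacity_gb(6, EXPANSION))  # 384GB across 48 DIMMs
```

So the same 1066MHz bus speed jumps from 96GB (12 DIMMs) to 384GB (48 DIMMs).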
Where would you ever need this kind of memory?
* Running VMware ESX, XenServer, ... and assuming 3-4GB per VM: imagine 96 VMs per physical box
* Imagine running a 300GB MySQL database entirely from RAM without the need for a high-end machine
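For the curious, the "96 VMs" figure is just the full 384GB divided by the upper end of the per-VM assumption:

```python
TOTAL_RAM_GB = 384  # 48 DIMMs x 8GB, as derived above
RAM_PER_VM_GB = 4   # upper end of the 3-4GB-per-VM assumption

print(TOTAL_RAM_GB // RAM_PER_VM_GB)  # 96 VMs per physical box
```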
Also, the price per GB of memory is not linear: an 8GB DIMM currently costs far more than four 2GB DIMMs. So if you don't need the full 384GB, you can fill the 48 DIMM slots with 2GB modules and have a 96GB RAM server for a lower price.
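A toy comparison of the two fills. The prices below are made-up placeholders, not real market figures; only the capacities come from the post:

```python
SLOTS = 48
PRICE_2GB = 40.0    # HYPOTHETICAL cost of a 2GB DIMM
PRICE_8GB = 400.0   # HYPOTHETICAL cost of an 8GB DIMM (far above 4x the 2GB price)

capacity_2gb = SLOTS * 2   # 96GB using cheap low-density DIMMs
capacity_8gb = SLOTS * 8   # 384GB using expensive high-density DIMMs

print(f"2GB DIMMs: {capacity_2gb}GB for ${SLOTS * PRICE_2GB:.0f}")
print(f"8GB DIMMs: {capacity_8gb}GB for ${SLOTS * PRICE_8GB:.0f}")
```

The point is simply that 48 slots let you hit a useful capacity with the cheapest DIMMs on the price curve.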
There are also a lot of other features which really set it apart from the competition, such as centralized management for up to 320 servers. In larger enterprise environments, customers can also consolidate their SAN and LAN networks by using the open standard FCoE.
Please check it out at Cisco - Unified Computing System