*snip* Here are the concerns I have with it:
All power rails appear to be exposed. While they are on the back, this could be a significant safety (personnel and/or fire) issue. Considering that you can have up to 500A @ 12.5V DC (6.25kW) running through the zone power rails, and potentially more through the main cabinet DC power rails, leaving them exposed seems like a bad idea.
That appears to be an illustrative picture. An image from a different article of an "in production" or "active testing" rack shows grounded shields around the bus bars. This is the wired.com article I'm referring to. The picture is somewhere in the bottom third.
Your assertion that you'd save "way more" by switching to SSD storage assumes that the spindle disks are the main consumer of current.
According to WD, the WD20EARX draws 5.3W during read/write, 3.3W during idle, and 0.7W in standby/sleep (which, admittedly, is a rare state in datacenters). (from the WD20EARX datasheet)
According to Intel, the Intel 910 series SSD draws up to 25W while active and 8W while idle. The Intel 520 series SSD draws 850mW active and 600mW idle. (from the Intel 520 series product specifications.) I don't know if those numbers for the 910 are a typo, because it seems weird that they'd exceed a mechanical drive.
Either way, my point is that the WDs have a power ratio of 2.65W/TB and the Intel 520 SSDs have a power ratio of 1.78W/TB, which means switching to SSDs will save you about 33% on your storage power needs. Thing is, because the SSDs have less capacity per SATA port, once you factor in the extra RAID controllers, SATA cards, or SATA port expanders needed, the percent power saving will drop. Admittedly, I have no idea by how much.
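Spelling that arithmetic out (a quick sketch; the capacities are my assumptions: 2TB for the WD20EARX per the model number, and 480GB for the Intel 520, since that's the capacity that reproduces the quoted ratio):

```python
# W/TB arithmetic from the datasheet figures above. Capacities assumed:
# 2TB WD20EARX, 480GB Intel 520.
wd_w_per_tb  = 5.30 / 2.00   # read/write watts per TB -> 2.65
ssd_w_per_tb = 0.85 / 0.48   # active watts per TB     -> ~1.77

saving = (wd_w_per_tb - ssd_w_per_tb) / wd_w_per_tb
print(f"WD20EARX:  {wd_w_per_tb:.2f} W/TB")
print(f"Intel 520: {ssd_w_per_tb:.2f} W/TB")
print(f"Active storage power saving: {saving:.0%}")  # roughly 33%
```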
I guess my point is to challenge the popularly held idea that mechanical hard drives are extremely power-hungry. While CPU efficiency has improved considerably in recent years, I hold that CPUs and associated electronics consume a much larger portion of a server's power than commonly believed.
Also, at idle, the WD consumes 1.65W/TB and the Intel consumes 1.45W/TB. Then again, it's not a fair comparison, because the SSD can switch between idle and active far more quickly than the mechanical drive. So, once you consider more aspects of the situation, things become less clear-cut.
The Facebook hinged storage server must be using their new 21" rack because they (from images) appear to have arranged the drives in three rows of 5 drives. The 3.5" drive form factor is 4" wide, meaning that the enclosure must be at least 20" wide to accommodate five drives per row. Also, using their new rack concept, their servers don't include an AC power supply. So, it's not exactly as space efficient once you factor in the 2U power supply at the bottom. With one PS and one 30-drive Facebook server, you're at 30 drives in 4U, or an efficiency of 7.5 drives/U. One PS and two 30-drive servers: 60 drives in 6U, an efficiency of 10 drives/U. One PS and three 30-drive servers: 90 drives in 8U, an efficiency of 11.25 drives/U. At four servers on one PS unit, you've got 120 drives occupying 10U, for an efficiency of 12 drives/U. So, once you have four servers together with the associated PS, you finally reach the efficiency of a thumper.
The thumper (Sun x4500 and x4540) had 48 3.5" HDDs, 2 (x4500) or 3 (x4540) 800W/1600W (110VAC or 220VAC) power supplies, and an adorable, itty-bitty dual-Opteron server. 48 drives occupying 4U is an efficiency of 12 drives/U.
To be fair, while the 4-server, 1-power-supply configuration only equals the storage density of the thumper, it has better server/CPU/NIC density.
As an aside, the full rack setups appear to have three power supply units. Assuming/guessing 42U per rack with 6U devoted to PS, that leaves 36U divided into three bays of 12U. So, with a PS plus five Facebook hinged servers (the servers occupying 10U of each 12U bay), you get 150 drives: 15 drives/U counting only the server space, or 12.5 drives/U including the PS. Either way, you finally outdo the thumper.
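To put all of that density arithmetic in one place, here's a quick sketch (assuming, as guessed above, 30-drive 2U servers sharing a 2U power supply):

```python
# Drives-per-U for n Facebook hinged servers (30 drives in 2U each,
# as guessed above) sharing one 2U power supply shelf.
def drives_per_u(n_servers, drives=30, server_u=2, ps_u=2):
    return n_servers * drives / (ps_u + n_servers * server_u)

for n in range(1, 6):
    print(f"{n} server(s) + PS: {drives_per_u(n):5.2f} drives/U")
# -> 7.50, 10.00, 11.25, 12.00, 12.50

# Thumper baseline: 48 drives in 4U.
print(f"Thumper: {48 / 4:.2f} drives/U")  # 12.00; matched at n=4, passed at n=5
```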
To be clear, I do believe that there are benefits to the proposed new rack size, but I don't think it's a clear improvement. Personally, I think the thumper design was brilliant. The only purpose of this reply was to point out that it's not as simple as 15 drives per U.
On a separate note, the ability to fit 5 drives side-by-side in a 21" rack is the best justification I've seen, so far, for widening to 21".
Wow, I prefix too many of my comments with insecure clauses devoid of information, serving only to indirectly apologize to the reader for supplying information I think is important for them to understand, despite my worry that I'm trying their patience. If you read all of this reply, including even this sudden introspective insight into my character, then thank you. I'm flattered.
Nope, they increase a "U" to be 48mm from 44.45mm. This is now called an OU. So, 1OU = 48mm, an increase of 8% compared to a regular U. They claim that this 3.55mm increase "increases airflow, improving air economization; it also allows for better cable and thermal management and efficient use of space." Personally, I question whether the increase in airflow, cable management, and efficient use of space will be significant. I'd be very keen to see a good example of how these new 48mm rack units will improve cable management.
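For scale, here's what that 3.55mm buys you over a full column (a sketch; the 42-unit column height is just my assumption for comparison):

```python
# Standard EIA rack unit vs. the Open Compute "OU".
U_MM, OU_MM = 44.45, 48.0

print(f"Increase per unit: {OU_MM - U_MM:.2f} mm "
      f"({(OU_MM - U_MM) / U_MM:.1%})")  # 3.55 mm, ~8.0%

# Over an assumed 42-unit column, the extra height adds up to roughly
# three standard U worth of vertical space.
extra_mm = 42 * (OU_MM - U_MM)
print(f"42 units: +{extra_mm:.1f} mm (~{extra_mm / U_MM:.1f} standard U)")
```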
Also, the bus bars depicted in the photos appear to be incredibly vulnerable to accidental short-circuiting.
Have you read the email shown in the image from the first link (threatpost.com)? It's dated 2003, and it describes how to optimize the thread-local storage descriptors introduced to Linux around that time. If the source code is related to that, then it's likely irrelevant at this point. A lot has happened in the past 9 years.
Uhh, hi. Yeah. Sorry. The 12cm/13cm band is 2.3-2.31GHz and 2.39-2.45GHz. It doesn't overlap the entire ISM allocation. People are still welcome to use channel 11 on their wifi routers... or use those fancy 5GHz radios.
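A quick sketch of where the US 2.4GHz wifi channels sit relative to that upper ham segment (the 2390-2450MHz bounds are from the allocation above; the 22MHz channel width for 802.11b/g is my assumption):

```python
# Which US 2.4GHz wifi channels stay clear of the 2390-2450 MHz
# amateur segment? Assumes 22 MHz-wide 802.11b/g channels.
HAM_LO, HAM_HI = 2390, 2450  # MHz

for ch in range(1, 12):            # US channels 1-11
    center = 2412 + 5 * (ch - 1)   # channel centers are 5 MHz apart
    lo, hi = center - 11, center + 11
    status = "clear" if (hi <= HAM_LO or lo >= HAM_HI) else "overlaps"
    print(f"ch {ch:2d}: {lo}-{hi} MHz  {status}")
# Only channel 11 (2451-2473 MHz) sits entirely above 2450 MHz.
```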
Wouldn't placing such a device on your car constitute a trespass to land?
Why are you replacing the motherboards yourself? With my T61p, when something in it died (the motherboard needed replacing), I called up IBM and told 'em it's broke. Something like 19 hours later, DHL had a box for me at my door to ship the laptop out in. I put the laptop in the box and called up DHL to schedule a pickup. The same DHL guy was back 15 minutes later and took the box. 23 hours later, the same DHL delivery guy was back on my doorstep with my repaired laptop. This was with the standard warranty option when buying the laptop. My mind was blown by just how quickly it got fixed. It apparently got shipped from MI to Memphis, repaired, and shipped back in less than 24 hours.
Does this level of support not exist anymore? Otherwise, why are you replacing the motherboards in house, especially if you don't have spare parts readily available? Also, your complaint about the 44 screws? I mean, c'mon, tying the laces on my shoes takes 10 times longer than using velcro, but it's really kinda not a very big deal.
At least the nuclear solution isn't anywhere near as contaminating and destructive to the environment as coal or oil. Think of it as a lesser evil.
The difference between Chernobyl's RBMK design and our operating relics is already rather significant. Also, we have organizations in the US, such as the United States Navy, that are at the forefront of safe reactor design and operation.
VMware has supported 3D acceleration pass-through for years. Works fantastic.
No one's surprised. Marvell's had a track record of faulty, ill-documented, or bug-plagued parts. Sometimes I wonder why they still bother. I suppose someone has to make Realtek look good.