Microsoft Ends 'Project Natick' Underwater Data Center Experiment Despite Success (techspot.com)
Microsoft has decided to end its Project Natick experiment, which involved submerging a data center capsule 120 miles off the coast of Scotland to explore the feasibility of deploying underwater data centers. TechSpot's Rob Thubron reports: Project Natick's origins stretch all the way back to 2013. Following a three-month trial in the Pacific, a submersible data center capsule was deployed 120 miles off the coast of Scotland in 2018. It was brought back to the surface in 2020, offering what were said to be promising results. Microsoft lost six of the 855 servers in the capsule during its time underwater; in a comparison experiment run simultaneously on dry land, it lost eight of 135 servers. Microsoft noted that the constant temperature stability of the external seawater was a factor in the experiment's success. It also highlighted how the data center was filled with inert nitrogen gas that protected the servers, as opposed to the reactive oxygen gas in the land data center.
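The reported failure counts can be made concrete as rates (a minimal sketch; the counts are from the article, and the comparison ignores differences in server age, load, and deployment length):

```python
# Compare failure rates from the reported Project Natick numbers:
# underwater capsule: 6 of 855 servers failed; land control: 8 of 135.
underwater_failed, underwater_total = 6, 855
land_failed, land_total = 8, 135

underwater_rate = underwater_failed / underwater_total
land_rate = land_failed / land_total

print(f"Underwater failure rate: {underwater_rate:.2%}")  # ~0.70%
print(f"Land failure rate:       {land_rate:.2%}")        # ~5.93%
print(f"Land rate is ~{land_rate / underwater_rate:.1f}x higher")
```

With these counts the land control group failed at roughly eight times the underwater rate, though the small land sample (135 servers) makes the comparison noisy.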
Despite everything going so well, Microsoft is discontinuing Project Natick. "I'm not building subsea data centers anywhere in the world," Noelle Walsh, the head of the company's Cloud Operations + Innovation (CO+I) division, told DatacenterDynamics. "My team worked on it, and it worked. We learned a lot about operations below sea level and vibration and impacts on the server. So we'll apply those learnings to other cases," Walsh added.
Microsoft also patented a high-pressure data center in 2019 and an artificial reef data center in 2017, but it seems the company is putting resources into traditional builds for now. "I would say now we're getting more focused," Walsh said. "We like to do R&D and try things out, and you learn something here and it may fly over there. But I'd say now, it's very focused." "While we don't currently have data centers in the water, we will continue to use Project Natick as a research platform to explore, test, and validate new concepts around data center reliability and sustainability, for example with liquid immersion."
If it was economical they'd continue.. (Score:3)
It's obviously not.
Re: (Score:2)
Re: (Score:2, Insightful)
Not in modern America. If research doesn't find a way to create wealth for someone it's not worth doing. You think we're expanding our efforts in A.I. for the good of mankind or something? /s
Re:If it was economical they'd continue.. (Score:4, Interesting)
Oh, nonsense. I worked at Amazon for 9 years, the first 5 at AWS. When a company has sufficient profit and doesn't fritter it away on stock buybacks and truly absurd executive compensation, there's plenty of cash available to experiment. "Throw shit at the wall and see what sticks" can be a viable business plan when you have sufficient cash. This sometimes leads to failures like the Fire Phone, but it also allowed the company to take a leap, replace **ALL** of FedEx's delivery business in under a year, and save money doing it. I worked with a very talented project manager to replace the access control system with a homegrown version. The project crashed and burned (the manufacturer refused to allow us access to the hardware drivers). At a lot of places that would have meant the end, or at least the stagnation, of his career; instead everyone involved got up, shook the dust off, said, "That sucked, let's not do that again," and went on to other things. Many of them very successful.
Re: If it was economical they'd continue.. (Score:4, Funny)
Everybody that volunteered to work at that data center was on two lists: the work-from-home list and do-not-call list.
Re: (Score:3)
Everybody that volunteered to work at that data center was on two lists: the work-from-home list and do-not-call list.
It was filled with pure nitrogen; they'd be on the dead-on-arrival list.
Re: (Score:2)
Why do you suspect Highlander is doing it, then?
PS Microsoft should publish a paper.
Re: (Score:2)
Economical is a multi-faceted issue; the modules were relatively small-- about the size of a 20' shipping container IIRC, and a greater value was likely placed on the ability to upgrade hardware rather than operating costs. Something ~10-40x the size might yield different results in terms of economy, or some type of cluster arrangement that allowed for computing upgrades after 5 years to be easily loaded and re-submerged.
Re: (Score:2)
That small? It apparently had 855 servers racked in that space; that's incredibly dense. If cooling is that good, I wouldn't want to change the surface-area-to-CPU ratio much. It would probably be better to just use a series of these units chained together rather than make them into one big one.
Re: (Score:2)
Economical is a multi-faceted issue; the modules were relatively small-- about the size of a 20' shipping container IIRC, and a greater value was likely placed on the ability to upgrade hardware rather than operating costs.
Since the guy didn't explain the reason for not continuing, we have to speculate, but that is almost certainly not it. The data pod is a high-density server farm providing generalized computing services ("cloud") that runs unserviced for years. If you want to "upgrade" (replace) the hardware, you pull the existing pod and deploy a new one while the old one is sent to the service center for replacement/reconditioning.
I suspect it is simply that they don't currently want to open underwater d
Re: (Score:1)
Or maybe it is continuing, classified and a military thing. Economical isn't part of the lexicon there.
Re: (Score:2)
+1
I think you nailed it. The know-how to produce, deploy and maintain deep sea data-centres is potentially valuable for intelligence and military operations.
Under the sea... (Score:2)
It's not going to happen.
Okay, it was successful (Score:2)
At least while Dr. Quinn was running the project. But, per usual, Captain Murphy got bored, started messing with it, took it off on a bizarre tangent and... well the details are fuzzy but it ended with an explosion.
Re: Okay, it was successful (Score:2)
Fignuts!
A datacenter in orbit (Score:2)
We learned a lot about operations below sea level and vibration and impacts on the server. So we'll apply those learnings to other cases.
And now for a partnership with SpaceX to test a datacenter in orbit.
Re: A datacenter in orbit (Score:2)
Maybe, but in space they'd sacrifice the primary benefit -- free access to the world's largest heat sink.
Re: (Score:2)
Maybe, but in space they'd sacrifice the primary benefit -- free access to the world's largest heat sink.
On the other hand they would have free power.
Re: (Score:2)
That would be a lot more problematic: cooling is easy underwater, but expensive in space, where there's no medium to transfer the heat to.
Re: (Score:2)
"Active Thermal Control System (ATCS) Overview
Heat Rejection Subsystem (HRS) The HRS consists of the radiator ORU, which is a deployable, eight-panel system that rejects thermal energy via radiation."
https://www.nasa.gov/wp-conten... [nasa.gov]
Re: (Score:3)
Yes, but now we're back to active cooling rather than passive cooling like underwater, and radiative IR cooling is not very efficient. When the Space Shuttle reached orbit, the first thing it did was open the bay doors to expose the radiators to space, because without them the living space would overheat in just a few hours. Cooling is one of the concerns when a new module or large experiment is sent to the ISS; if the experiment generates more heat than it can get rid of, it can only operate in periods where th
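A rough Stefan-Boltzmann estimate illustrates why radiative heat rejection in orbit needs so much surface area (all figures here are illustrative assumptions, not ISS or Natick specifications):

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# Assumptions (illustrative only): a 240 kW IT load, radiator emissivity 0.9,
# radiator surface temperature 300 K, radiating to deep space with the
# cold-background term neglected.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
emissivity = 0.9
T_radiator = 300.0      # K
P_load = 240e3          # W

flux = emissivity * SIGMA * T_radiator**4  # W rejected per m^2 of radiator
area_needed = P_load / flux                # one-sided radiating area
print(f"Radiated flux: {flux:.0f} W/m^2")            # ~413 W/m^2
print(f"Radiator area needed: {area_needed:.0f} m^2") # ~581 m^2
```

By contrast, convective heat transfer into cold seawater moves hundreds to thousands of watts per square meter per kelvin of temperature difference, with no deployable panels required, which is the "world's largest heat sink" point made above.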
Muskrat Falls best data center location (Score:2)
Land datacenters have reactive oxygen gas? (Score:2)
"It also highlighted how the data center was filled with inert nitrogen gas that protected the servers, as opposed to the reactive oxygen gas in the land data center."
Is 'reactive oxygen gas' another way of saying air?
How do you do physical maintenance on equipment in nitrogen filled datacenters?
Re: (Score:3)
all my data centers are filled with a proprietary mixture of approximately 78% nitrogen, 21% oxygen, and a small amount of other gasses that i can't divulge. trade secret.
interestingly enough, i run the same mixture in my car tires and it does pretty good there as well
Re: (Score:2)
You use a distributed cloud deployment: probably RAID 10 configurations throughout, no dedicated single-use servers, no single point of failure. If a box or a dozen fail, the other 800+ just pick up the slack; they probably designed the deployment with a 10-15% cushion of extra capacity, since that's what they would do at a land-based DC deployment.
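That cushion idea can be sanity-checked against the article's numbers (the 855 servers and 6 failures are from the summary; the 10% cushion is the commenter's hypothetical, not a Microsoft figure):

```python
# Check whether the observed failures stay within a spare-capacity cushion.
# 855 servers and 6 failures come from the article; the 10% reserve
# fraction is a hypothetical planning figure, not a published one.
total_servers = 855
failed = 6
cushion = 0.10  # assumed fraction of capacity held in reserve

spare = int(total_servers * cushion)  # servers that can fail without impact
print(f"Spare servers in cushion: {spare}")  # 85
print(f"Failures observed:        {failed}")
print(f"Within cushion: {failed <= spare}")  # True
```

Under that assumption, losing 6 of 855 servers over two years consumes well under a tenth of the reserve, consistent with running the pod unserviced for its whole deployment.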
Re: (Score:2)
You don't. If it needs servicing, you haul it to the surface, open the doors and air it out, then do your servicing.
Close it up, purge it and drop it overboard.
I imagine the idea was that you basically don't service it - just leave the failing hardware there until it makes sense to do a major overhaul.
Re: (Score:2)
Otherwise referred to as a "lights out data center", there are quite a few of them around the world above water.
Re: (Score:2)
How do you do physical maintenance on equipment in a sealed pressure vessel hundreds of feet underwater?
Huge trade off (Score:5, Insightful)
Huge negative tradeoffs to being underwater: expensive deployment, expensive maintenance, expensive enclosure, expensive retrieval. Temperature regulation on land is not that difficult, and if nitrogen makes that big a difference, then you can do that on land as well. Or do both by submerging in mineral oil. All of the extra cost in either case probably is not worth saving a few servers; after 5 years the servers' value is close to zero.
Under the sea? (Score:5, Funny)
Glomar Explorer V2.0? (Score:2)
Re: (Score:2)
A former co-irker had worked for Raytheon several years prior, on their early robotics projects. He was able to tell me that he had been on a US Navy sub for several months trespassing in Soviet waters. I eventually figured out that it was probably the mission that tapped the Vladivostok submarine cable.
Land based is easier to protect from attacks (Score:2)
As we've all witnessed over the last few years, it's too easy for rogue states to sever underwater fiber optic cables.
I have a feeling Microsoft is reading between the lines of the current geopolitical situation and is worried a foreign sub could take out their underwater data centers.