
Comment Re:Surprising Background (Score 2) 42

I don't know who Gil Luria is or how good he is at his job, but that statement is very misleading. All hyperscalers, including Oracle, use third-party data center providers.

How it works: Companies like NVidia and OpenAI have partners. So let's say you're a medium-sized company ($500M+ market cap) and you want to deploy some machine learning capability. You go to NVidia and say "I'd like to purchase 5 of your GB300 liquid-cooled racks." NVidia says "Excellent, do you have a service provider for those?" Then they point you to a list of recommended and certified partners who can provide the racks pre-built in a data center where they get the care and feeding (and power and plumbing) they need. Oracle is one such partner.

None of the big hyperscalers can build and run their own facilities fast enough, so they all use third parties to get additional capacity. All of them. Including, and especially, Oracle. So that customer with the 5 racks may find themselves in an "Oracle" data center which is actually owned and operated by Switch, CyrusOne, NTT Data, Iron Mountain, or DRT.

The comments by Luria make no sense (to me).

Comment Scale (Score 4, Interesting) 69

It's data centres. For the past decade, and even through the pandemic-era surge in data centre demand, the hyperscale providers have been pushing for 100% renewable energy solutions. That changed last year with the ramp-up of AI demand. In the US the utility providers are really struggling to provide anywhere near the power that is being requested. I've got campuses that will only get 10-20MW in the next year and then have to wait 3+ years for any additional capacity, and I won't be at all surprised if the dates I have now slip as they get closer. There is literally NO POWER.

The fastest and easiest route to lots of cheap power is gas. There are so many projects going on right now where a provider has bought huge tracts of land in Texas and is simultaneously building a gas power plant and a data centre campus capable of 200+MW of IT load. And if you build the redundancy into your gas plant you can save $100s of millions on diesel generators.

"But, but solar is cheaper and clean, etc" Yes, but it doesn't work at night. So you need batteries. Lots of expensive batteries. Imagine the amount of batteries you'd need to provide 200MW of power for 8+ hours. It doesn't work.

"But then just grid tie the solar!" Sounds good. But for that you need a power supply and use agreement with a utility and that takes time and money and most utilities will want to own/operate the generation facility. Building the power transmission infrastructure to your 500ac campus in BFE Texas is not a cheap walk-in-the-park either. Half of all my delays are just getting the power *to* the site.

That is why ALL of these companies are silently backing away from their climate pledges. For the record, my company has not and will not back away from our climate pledges. We've been 100% renewable for years and will continue to be.

Comment Plain Bagel (Score 3, Informative) 52

This guy is pretty dry, but he does a good job of breaking down what these are, how they work, and the challenges with them. TL;DR: You don't own the security, the token creator does (Robinhood, etc.), with just a promise to pay you if they sell their positions. You don't get to collect dividends on the underlying equity, etc.

Comment Self Correction (Score 3, Interesting) 28

I don't know if this guy has been on vacation or living under a rock, but there was already a correction last month. Microsoft dropped 2GW worth of DC leases (on top of the several hundred MW they dropped in Feb), which flooded the market with inventory. Two of my customers immediately pulled out of work being done on other data centers because they knew they would be able to pick up space sooner and for less money as a result. Everybody in the industry saw a pull-back. Where we were designing and selling inventory that was 24 months out, now nobody wants to talk about anything that is further than 12 months away from being ready.

Tying new data centers to old nuclear plants has a whole host of other issues around it that make me think this will end up being a nothingburger (SMRs are another matter), but calling a capital system working as intended "irrational" seems ill-conceived.

Comment Re:Is this even possible? (Score 1) 86

I'm not sure this will answer your question, but if it doesn't then maybe you can expand on it a bit and I will try again.

In a typical DC you have both the utility feed and the (diesel) generator feed coming into an automatic transfer switch (ATS), which automatically switches the load from one to the other if the utility feed fails. But these are then typically run to an on-line UPS which (again, typically) has a run-time of 4-5 minutes. So the power is conditioned by the UPSs in the facility all the time. If the power from the utility goes out, the ATS switches over to generator while the UPS picks up the load. The gens take about 30 seconds to start and get up to speed, which is plenty fast enough since you have 4+ minutes of run-time on your UPSs.

So the minor fluctuations in the grid don't really matter to a DC operator. The UPSs condition the power and the generators can run the facility indefinitely (provided you've got good fueling contracts in place).
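To make the timing concrete, here's a toy sketch using the figures above (the 30-second start and the 4-5 minute runtime come from my description; everything else is made up for illustration):

    # Toy check of the failover timing in a typical DC: the generator must be
    # up and carrying load well before the UPS batteries run out.
    ups_runtime_s = 4 * 60      # conservative end of the 4-5 minute UPS runtime
    gen_ready_s = 30            # generator start, spin-up, and ATS transfer

    margin_s = ups_runtime_s - gen_ready_s
    assert margin_s > 0, "load would drop before the generator is ready"
    print(f"Generator picks up the load with {margin_s}s of UPS runtime to spare")
    # ~210s of margin, which is why grid blips are a non-event for the servers.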

That said, I design about a dozen large data center solutions a year, and each one comes with somewhere between 500 and 1000 distinct requirements. It's big money and the people who are buying the capacity want to make sure they are getting exactly what they need/want. And every single one of these requests contains a question about the proximity of the data center to high-risk areas like chemical processing facilities, fuel storage depots, freight rail, etc. Guess what's on the list? Nuclear facilities.

So the idea that these DC operators want to locate their facilities right beside nuclear power stations runs counter to one of their own risk requirements. I don't think this is going to be a big trend, because *their* customers are not going to be happy about it even if the operators are willing to make the compromise. $0.02 and all that...

Comment Nature's End (Score 4, Interesting) 23

What a crazy time to be alive. Back in 1987, authors Whitley Strieber and James Kunetka wrote a book called "Nature's End" about environmental catastrophes on Earth. In it, the protagonist used a computer called an "IBM AXE" that had a rollable screen. The book was set in 2025.

And here we are with Lenovo (which acquired IBM's PC division) releasing such a product in the very year the book was set. Wonderful! I wonder if anyone working at Lenovo has any idea...

Comment Re:WUE (Score 4, Informative) 79

Thanks for the link. You are exactly correct. As usual the media butchered it (in this case Bloomberg); the press release makes perfect sense.

In a typical data centre the cooling cycle is: chillers on the roof, which use either water- or air-based chilling, cool a loop of water that runs to your server rooms. These rooms have devices called CRAHs (Computer Room Air Handlers) or FWUs (Fan Wall Units) that use the chilled water to blow cold air through the room. That air gets heated by the equipment, rises to the ceiling, and is drawn back through the CRAHs, dumping its heat into the chilled water loop. The warmed water is then cooled back down by the chillers on the roof. It's amazing that we can get a PUE of 1.25 to 1.4 out of such a system, but it works pretty well.

AI is driving much higher densities in the racks. A typical air-cooled rack is something like 8-12kW full of servers, but can get as high as 20 or 30kW. To cool a rack that is pushing 80kW+ you need to use liquid cooling. Lots of techniques have been tried, but the one the industry is settling on is direct-to-chip, which uses a device called a CDU (Coolant Distribution Unit) to take the chilled water from the pipes that run to the CRAHs and loop it out in smaller lines to the racks, where it is distributed directly to cold plates on the CPUs and GPUs. This is almost exactly like what you would find on a higher-end gaming system.

The wonderful thing about direct-to-chip cooling is that it is much more efficient than air cooling, so your PUE goes down. The more your PUE goes down, the more energy you can use to power servers and the less you need to spend cooling equipment. With direct-to-chip efficiencies you can also run a higher chilled water loop temperature (because the cooling is delivered directly to the equipment).
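To put rough numbers on that, here's a sketch of how much IT load a fixed utility feed supports at different PUEs (the feed size is an illustrative assumption, and the 1.1 figure is a hypothetical direct-to-chip value, not a measured one; 1.25-1.4 is the air-cooled range mentioned above):

    # IT capacity at a fixed utility feed for different PUEs.
    # PUE = total facility power / IT power, so IT power = feed / PUE.
    feed_mw = 100.0    # assumed utility feed, for illustration only
    for pue in (1.4, 1.25, 1.1):   # air-cooled actual, design, hypothetical direct-to-chip
        it_mw = feed_mw / pue
        print(f"PUE {pue}: {it_mw:.1f} MW of servers, {feed_mw - it_mw:.1f} MW of overhead")
    # Dropping from 1.4 to 1.1 frees roughly 20 MW of the same feed for compute.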

So what Microsoft is saying in a nutshell is: "Hey, we're using less water because we're building more data centers with air chillers than evaporative water chillers, but because we're also deploying more direct-to-chip installations in those DCs, it's not increasing our power consumption too much".

One last thought: you still have to have CRAHs or FWUs in a data hall because ancillary equipment still has to be cooled, and humans have to work in there. So unfortunately we can't get rid of air cooling entirely.

Comment WUE (Score 4, Insightful) 79

Two common measurable gauges of data center efficiency are WUE (Water Usage Effectiveness) and PUE (Power Usage Effectiveness). PUE is the total facility energy divided by the energy consumed by the IT equipment itself. A typical hyperscale data center design PUE is something like 1.25, while the actual PUE when the DC is fully loaded with servers is more like 1.4. What this means is that the total power consumed by the entire data center is 40% more than the servers themselves consume. Obviously 1.0 would be ideal but is unreachable.

WUE is a bit different but the goal is the same: to measure how effectively the data center uses water. The calculation is Water Usage (L) / IT Energy Consumed (kWh). To get a data center built there is a lengthy and expensive permitting process, and local municipalities want to know the effect that the facility will have on the local water supply (aquifers, municipal water, etc.). So data center builders often use air-cooled chillers and closed chilled-water loops. These systems don't use any water for cooling. They aren't new. They work in almost any climate.
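A minimal worked example of both metrics (all of the inputs are made-up monthly figures, chosen only to make the arithmetic visible):

    # PUE and WUE for a hypothetical month of operation. All inputs assumed.
    it_energy_kwh = 10_000_000      # energy consumed by the servers themselves
    total_energy_kwh = 13_000_000   # servers + cooling + lighting + losses
    water_used_l = 2_600_000        # water consumed for cooling

    pue = total_energy_kwh / it_energy_kwh   # 1.30
    wue = water_used_l / it_energy_kwh       # 0.26 L/kWh
    print(f"PUE = {pue:.2f}, WUE = {wue:.2f} L/kWh")
    # A closed-loop, air-cooled design drives water_used_l toward zero, so WUE -> 0.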

I bring all this up because evaporative cooling is on the decline due to these concerns, and Microsoft is already leasing space in Phoenix data centers that do not use evaporative chillers (and has been for years). So I'm at a loss to explain why we have an article about them "investing in a new design" they are already using. This is likely just a feel-good article and isn't anything new.

Also, for those folks saying "why not just build somewhere cold", etc.: for plenty of workloads that is possible (like machine learning, REST-type services, and anything similarly transactional), but for others you still need to build close to the population centers you are serving because of latency. The perfect location for a data center is one where land is reasonably inexpensive, the power is reasonably cheap, and yet it is still near large population centers. It's not easy to find ideal locations, and with the DC boom resulting from COVID and now machine learning it has become much more difficult.

Comment Re:Pump and Dump (Score 1) 46

You bet. I was wrong, and you have my apologies. Not the first time I've been wrong here on /. and probably won't be the last! :)

Constant bombardment about crypto from absolutely everywhere is (obviously) getting on my nerves. It hits me hardest here because it was the one place I trusted for interesting brain fodder, and I'm just sad about it.

Comment Pump and Dump (Score 0) 46

How about a new poll:

Will BIZX (the owners of Slashdot and a private currency exchange platform) disclose their Bitcoin holdings in 2022? Yes/No?

I mean, you should just come right out and say it: Slashdot is a tool you use to inflate your investments. You don't even advertise your own service on it. Does the business plan rely on some kind of stealth pump and dump?

How much cryptocurrency does BIZX hold right now? You should let us know so at least we know for sure why every third story is about crypto and that it's not going to get any better.
