

Cloudflare Says Intel is Not Inside Its Next-Gen Servers (theregister.com) 40
Internet-grooming company Cloudflare has revealed that it was unable to put Intel inside its new home-brew servers, because they just used too much energy. A Tuesday post by platform operations engineer Chris Howells reveals that Cloudflare has been working on designs for an eleventh-generation server since mid-2020. jaa101 writes: "We evaluated Intel's latest generation of 'Ice Lake' Xeon processors," Howells wrote. "Although Intel's chips were able to compete with AMD in terms of raw performance, the power consumption was several hundred watts higher per server -- that's enormous." Fatally enormous -- Cloudflare's evaluation saw it adopt AMD's 64-core Epyc 7713 for the servers it deploys to over 200 edge locations around the world. Power savings also influenced a decision to go from three disks to two in the new design. A pair of 1.92TB Samsung drives replaced the three 960GB units from the Korean giant found in previous designs. The net gain was a terabyte of capacity, and six fewer watts of power consumption. Howells's post also reveals that testing produced data showing that equipping its servers with 512GB of RAM did not produce enough of a performance boost to justify the expense. The company has therefore settled on 384GB of memory, but did jump from DDR4-2933 to DDR4-3200 as the slight cost increase delivered a justifiable performance boost.
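A quick back-of-the-envelope check of the drive swap described above (a sketch only; the six-watt figure comes from Howells's post, and per-drive wattage is not broken out there):

    # Rough arithmetic behind the storage change quoted in the summary
    old_capacity_tb = 3 * 0.96   # three 960GB Samsung drives in the previous design
    new_capacity_tb = 2 * 1.92   # two 1.92TB Samsung drives in the new design
    print(new_capacity_tb - old_capacity_tb)  # ~0.96 TB, i.e. roughly the "terabyte of capacity" gained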
W-what? (Score:1)
The programmer's mantra to "just throw more hardware at the problem" fell apart somehow. Guess that only works in small shops ;^)
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
True! Today's 2D bitmapped games use more CPU than Quake. :D
So do today's Doom levels. The limit-removing source ports are essentially the same code, with very similar efficiency to the original game, but people are now making vast and complex maps.
But yeah I do often wonder where the hell all that CPU goes.
Re: (Score:2)
Re: (Score:2)
The programmer's mantra to "just throw more hardware at the problem" fell apart somehow. Guess that only works in small shops ;^)
The attitude needs to change. We should be favouring pre-processing and caching where possible, over redundant re-processing.
For example, for websites that aren't webapps, I am generally favouring statically generated approaches, since they help reduce CPU and are generally more cache friendly.
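To make "statically generated" concrete, here is a minimal sketch assuming a hypothetical build step that renders pages to plain HTML once, so the web server (or a CDN cache) only has to serve files; the file names and page data are illustrative, not from any real site:

    # Minimal static-generation sketch: do the rendering work once at build time,
    # then serve the resulting files from any cache-friendly web server or CDN.
    from pathlib import Path
    from string import Template

    PAGE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")
    pages = {
        "index.html": {"title": "Home", "body": "Welcome"},
        "about.html": {"title": "About", "body": "Pre-rendered, cache-friendly output"},
    }
    out = Path("public")
    out.mkdir(exist_ok=True)
    for name, context in pages.items():
        (out / name).write_text(PAGE.substitute(context))  # no per-request CPU work later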
Re:Intel made a bad assumption (Score:5, Insightful)
More likely Intel employs actual engineers, who tried to win, but were stuck with the weight of past decisions that take many years to overcome.
Re: (Score:2)
Re: (Score:2)
Intel simply taking its current core design and having someone like TSMC manufacture it at a smaller process size would give them some power savings, but Intel is too stubborn for that.
Actually, Intel is already using TSMC manufacturing. [optocrypto.com]
Re: (Score:3)
... were stuck with the weight of past decisions that take many years to overcome.
Nah more than likely
a few years ago when
So you agree with me, you just wanted to say no and then take credit for saying the same thing but with juvenile phrasing?
Re: (Score:2)
I'm not 100% sure that Intel primarily designs and manufactures their new CPUs based on targeted OS assumptions. Hardware OEMs? Sure. Higher-end equipment I've purchased from Dell for "server farm" type deployments is typically bundled with VMware and whatnot. So "bet they run Linux" is correct for higher-end use cases anyway, I'd imagine. The guest VMs can run whatever on Earth is needed depending on the customer. Long story short is that Intel has fallen behind the past few years as a company. AMD, NVi
Re: (Score:2)
[Intel] assumed they could just throw more power at bloated Microsoft software, when in reality they needed to ignore M$ and focus on power consumption. I doubt these servers even run M$ code, bet they run Linux.
If this is true then Intel made a bad decision for capturing the server market, like CloudFlare. But I bet there are a lot more processors running MS Windows on desktops than there are running Linux on servers, so it could still be a good strategy overall for Intel.
Re: (Score:1)
If this is true then Intel made a bad decision for capturing the server market, like CloudFlare. But I bet there are a lot more processors running MS Windows on desktops than there are running Linux on servers, so it could still be a good strategy overall for Intel.
The server market is where the profit is though.
Grooming (Score:2)
Internet-grooming company Cloudflare
That's a bold accusation.
384GB of memory (Score:5, Funny)
should be enough for anybody?
Re: (Score:2)
If you use AMD, you are restricted to DRAM only, which is expensive. Intel also has pmem, with much higher capacity per slot and per $. Even ARMs and PPCs have pmem incoming, while AMD ignored it.
Re: (Score:2)
NVRAM [snia.org] is still a little ahead of it.
Too young for the reference to make sense? (Score:3)
Why, when I was young, weeee didn't have hundreds of gigabytes of memory to play with.
DRAM came in dual inline packages, which we plugged into sockets when we had enough money to afford more. And if a memory chip failed, and fail they did, we had to squint at the error message on a green monitor and guess at which chip to replace.
And we had to pry that chip from its socket with our fingernails until our fingers bled. And we loiked it! Because Bill Gates told us 640K was enough memory for anybody!
Re: (Score:2)
And we had to pry that chip from its socket with our fingernails until our fingers bled.
Mine didn't bleed due to being frozen solid from walking uphill through the snow (both ways).
Re: (Score:2)
And we had to pry that chip from its socket with our fingernails
I still have my old chip-pulling tool [duckduckgo.com], though it hasn't pulled a chip in 25 years.
Intel is done (Score:3, Insightful)
There is a popular business book that I was required to read back when it was published in 2018. It was big on name dropping and its entire premise was more or less "this is what we did at Intel so you should do this too". Even then I couldn't help but wonder why the publisher would go ahead with that given that it was pretty clear then that Intel was headed for trouble. Not surprising though, a lot of executive types still see Jack Welch as a model.
Re: (Score:2)
The book makes sense if you're an executive and officer of the company. They are all making out great and if the company does become unprofitable they sell off the IP and move on.
Re: (Score:2)
Mobile devices aren't the only market out there. For a long time Intel has been the leader not only in PCs but also in data centers - both extremely large markets. Of course Intel has slipped there in recent years because their process has dropped behind TSMC's, which has threatened Intel's market share in PCs and data centers. But if they can recover their process competitiveness, they will do great even without the mobile market.
Re:Intel is done (Score:5, Insightful)
No, they're not nearly done. Intel is so big that they could easily buy AMD outright for cash before they'd ever come close to going out of business, but it will never come to that.
Moreover, you shouldn't hope they're done. Consumers benefit dramatically when we have two or more companies competing to provide ever-better designs for mainstream tech like CPUs. Without that (as we had prior to AMD's recent resurgence), you get tech stagnation because the sole remaining company has no real need to innovate. Capitalism 101.
Re: (Score:2)
As far as having competition, I don't think Intel is needed for that. That they aren't competitive and aren't driving innovation is most of why I think they are done. The fact is that their corporate culture is holding them back and they will
Slogan (Score:4, Funny)
Empirical results like this will kill Intel (Score:1)
Re: (Score:2)
Because electricity *IS* a form of energy, and not the only form that matters to servers in the datacenter either. There are also thermal energy dissipation requirements with these chips, adding to the overall per-server energy budget the building must have available, and that the owner must pay for.
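To illustrate why the per-server delta matters at the facility level, here is a rough sketch; the "several hundred watts" delta is from the Cloudflare post, but the PUE value and server count below are made-up assumptions, not figures from either company:

    # Illustrative only: how a per-server power delta compounds once cooling overhead is included
    delta_watts_per_server = 300   # "several hundred watts" higher per server, per the post
    assumed_pue = 1.4              # hypothetical power usage effectiveness (cooling/overhead multiplier)
    assumed_servers = 1000         # hypothetical fleet size
    extra_kw = delta_watts_per_server * assumed_pue * assumed_servers / 1000
    print(f"~{extra_kw:.0f} kW of additional facility load")  # ~420 kW in this made-up scenario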
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Isn't it even more specific than that? A lot of datacenter space is filled with general purpose servers running general purpose workloads.
Cloudflare is one of those companies with fine-grained engineering needs that seems to build their own servers which are optimized for their narrowly specific workloads.
I'm not 100% sure how well what works for Cloudflare's narrow use case applies to everyone else.
Re: (Score:2)
AMD fill all channels at top speed (Score:2)
AMD fill all channels at top speed
Re: AMD fill all channels at top speed (Score:2)
Don't assume this means 100% (Score:3)
of Cloudflare's installed capacity. It just means the new servers they're adding in this expansion will be Epyc, not Xeon.
Could be their capacity add with new servers is only 5 to 10% of their total infrastructure.
Hyperscalers rarely dump all their infrastructure to fill in new racks, because that's disruptive to their business. If CF bought Epycs for 5 years in a row then they might displace all of it, but that assumes Intel wouldn't have Xeons that look better than Epyc over that whole timeframe.
That the 'story' avoids the actual fraction of capacity these new servers represent is how you know this was a joint AMD / Cloudflare press release pretending to be news.
Re:Don't assume this means 100% (Score:5, Insightful)
this was a joint AMD / Cloudflare press release pretending to be news
It is NEWS when AMD sells superior CPUs. For most of x86's history, Intel has built the fastest x86 CPUs. Recently AMD has beaten Intel, despite being 1/10th the size of Intel.
You should be celebrating the fact we have competition in x86 CPUs. If it wasn't for AMD, we'd be years behind in progress and Intel would be selling Core 2 CPUs.