Linus Torvalds on Why ARM Won't Win the Server Space (realworldtech.com) 230

Linus Torvalds: I can pretty much guarantee that as long as everybody does cross-development, the platform won't be all that stable. Or successful. Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment). Which means that you'll happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better. This is true even if what you mostly do is something ostensibly cross-platform like just run perl scripts or whatever. Simply because you'll want to have as similar an environment as possible.

Which in turn means that cloud providers will end up making more money from their x86 side, which means that they'll prioritize it, and any ARM offerings will be secondary and probably relegated to the mindless dregs (maybe front-end, maybe just static html, that kind of stuff). Guys, do you really not understand why x86 took over the server market? It wasn't just all price. It was literally this "develop at home" issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a "real server". And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over. Do you really not understand? This isn't rocket science. This isn't some made up story. This is literally what happened, and what killed all the RISC vendors, and made x86 be the undisputed king of the hill of servers, to the point where everybody else is just a rounding error. Something that sounded entirely fictional a couple of decades ago. Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit "hyperscaling" model is idiotic, when you don't have customers and you don't have workloads because you never sold the small cheap box that got the whole market started in the first place.

  • by AleRunner ( 4556245 ) on Friday February 22, 2019 @04:21PM (#58165746)
    I always found it strange to use an Ubuntu server. Whilst it's okay, and often better than most BSD or other systems, it's not as stable as RedHat. So why so many Ubuntu servers? It's simple: that's what the developers are using. Linus is, as occasionally happens, spot on with this one. If you can't get exactly the same setup locally, there's always going to be the odd really difficult debugging case that just takes too much of your time to justify. The solution is obvious: start providing ARM Linux laptops with very similar processors to the ones used in servers. I'll buy a few myself.
    • by Anonymous Coward on Friday February 22, 2019 @04:25PM (#58165772)

      People are using Raspberry Pis; as they become more and more capable, that could be the thin end of the wedge, so to speak.

      • by thereddaikon ( 5795246 ) on Friday February 22, 2019 @04:38PM (#58165856)
        Maybe, but there have been attempts to bring ARM to cheap PCs before and it's always been stillborn. The reasons are varied, though. For example, the ARM Windows tablets failed because they couldn't run 99.99% of Windows applications and Microsoft only seemed halfway interested in selling them to begin with. ARM Linux laptops could work fine. Most userland code has an ARM port already anyway, but mass production of PCs is expensive. What Dell can do, System76 can't. They and others like Purism have dipped their toes into custom PCs that aren't just rebadged generic Chinese machines. But they still aren't bespoke. If they were, then we would have 4:3 Linux laptops in production, because I know that the vast majority of Linux laptop users spend more time coding than watching movies. So if someone has the cash to get a production line up and running for laptops and desktops using ARM chips in purpose-made motherboards that have standard RAM slots and expansion slots, then I am sure people will buy them. And I think sooner or later that is exactly what will happen. The real question is, will ARM pull it off before RISC-V catches up and does it first?
        • RISC-V is even worse, it has the problem that Linus is complaining about with Arm, but this time as a design feature. RISC-V is the ultimate Chinese menu architecture, once you go past the basics and then the A, M, F/D and possibly C instructions you have no idea what you're going to get. This SoC has decided they want to implement this, this, and this, and the next one instead does that, that, and the other. The much-touted "modular instruction set" is great on a whiteboard during a sales pitch, but ter
        • The server market is actually different now... DCS is bigger than home servers...

          ARM has this now:

          https://github.com/ARM-software/sbsa-acs

          Well thought out, but dated, Linus Torvalds...

          John Jones

      • Not quite the same for the Raspberry Pi. The coding is for integrated devices, and not so much with custom server stuff. Linus is basically spot on with his assessment. Many server technologies were originally coded for a desktop PC under someone's desk. Because the RISC-based Unix servers were really expensive, and for most server tasks they were overkill systems, the desktop did its job. Over time as the software expanded, it needed a more powerful computer. So they ended up making x86 based serve

      • by gwolf ( 26339 )

        Give me a Raspberry with 16GB RAM, please, or at least with the ability to add DIMMs and then we will be able to talk...

    • start providing ARM Linux laptops with very similar processors to the ones used in servers. I'll buy a few myself.

      They don't have to be identical, just function similarly. For instance, if you can do development on a 16-core Raspberry Pi that functions identically to a 64-core ARM in the cloud, that would be enough in many cases. What you don't want is subtle changes between the systems and/or the local development to be an unsupported variation. The local development needs to be just as supported and tested as the cloud version, if not more so. No one wants to hack together a local development environment that is only

    • Re: (Score:3, Interesting)

      by AmiMoJo ( 196126 )

      He may be right for Linux based development, but a lot of people are using stuff like Azure for their cloud services now. Write code in .NET using Azure services, and you don't care what architecture the server is.

      Having said that, even for Linux stuff I think it's a mistake to think that people won't be using ARM for their development machines sooner or later. Super long battery life but affordable laptops, or just having a dedicated local ARM server rather than trying to recreate the server environment on

    • "Develop at home" is really a proxy for "develop/deploy on cheapest". What applications, what software stacks, care about the underlying hardware architecture? If cloud based servers ran non-x86 hardware few would notice or care. If cloud server costs for non-x86 hardware were cheaper and performed adequately they would get used. x86 Linux won because it was cheaper than the traditional Unix vendors with their proprietary *nix RISC based platforms. Similar on the workstation side. The shift from RISC *nix b
    • I think Linus is half-right. He's right that nobody is going to want to run an AWS ARM server. He's wrong in that a lot of the world is moving to serverless. There aren't weird bugs that are platform-dependent between Node.js on a Windows, Linux or ARM server. I just upload the code to the cloud and it could be running on PowerPC for all I care.

      For serverless applications, if the cloud provider can get the javascript to run the user will be none the wiser. And they can develop at home.

      Arguably thoug

      • For serverless applications, if the cloud provider can get the javascript to run the user will be none the wiser.

        How is that serverless? Just because YOU don't have to maintain the server does not mean it does not exist. I think you should re-read what you typed and think about it a little bit to see how that can't be.

    • IBM is buying RedHat for $34 billion.

      2018 RHAT revenue was $2.9 billion. Canonical last year had revenues of $125.97 million. That's a 20x multiple.

      The market share follows a similar trend.

      I wish I was "losing" by having a 20x multiple.

      • Re:"Losing".. ??? (Score:4, Informative)

        by swillden ( 191260 ) <shawn-ds@willden.org> on Friday February 22, 2019 @08:20PM (#58167122) Journal

        2018 RHAT revenue was $2.9 billion. Canonical last year had revenues of $125.97 million. That's a 20x multiple.

        The market share follows a similar trend.

        The market share does not follow a similar trend, not even if you restrict yourself to the server space, and RH barely even registers in the desktop space.

        Red Hat has focused on an easier-to-monetize market segment, that's all.

        • The main reason I develop on CentOS is that I have the same environment as RHEL.

          So that hides the use case you mention from the data.

          • The main reason I develop on CentOS is that I have the same environment as RHEL.

            So that hides the use case you mention from the data.

            Not really. Just combine CentOS and RHEL numbers. It's still much smaller than Ubuntu.

      • A large part of the Ubuntu user base doesn't pay Canonical a cent. And that is probably a major factor in its popularity.

        A better comparison would be Debian vs Ubuntu installs on servers. Although the different release cycles may be relevant there: Debian has long and unpredictable cycles, while Ubuntu has a release every 6 months. On the other hand, Debian releases deserve the predicate "stable" while Ubuntu releases have had some rough edges.

    • What exactly is RedHat losing? No doubt one-person operations create their AWS servers with Ubuntu because that's what they know. And there are a lot of them. But in the past 20 years, every business I have worked with that uses Linux servers uses RedHat or CentOS, whether on-premises or cloud. By the way, I use Mint on my desktop and RedHat on my development server at home. What's the big deal? The main difference is the package manager, which took me about a day to get used to. Just COE's two cents.
    • Do you have statistics for that? I've never seen an Ubuntu server.

  • "IBM on Why Intel Won't Win the Server Space 2: Electric Boogaloo"
  • by imgod2u ( 812837 ) on Friday February 22, 2019 @04:30PM (#58165804) Homepage

    Assuming you aren't rolling your own thread and atomics libraries, is there a perceivable difference on the API side when moving from x86 to ARM or any other architecture? Hell, if this argument were true, there are enough differences between the various x86 iterations that would make it so that devs want the specific *family* of processors they develop on to be in the servers they use...

    I posit there's probably enough of a difference between AMD's x86 implementation and Intel's...

    • The big difference, as far as I can tell, is this:

      Native:

      1. Compile code.
      2. Run executable.

      Cross-Platform:

      1. Compile code.
      2. Push executable to emulator or target hardware.
      3. Run executable.

      • But if you're using continuous integration, then it is the same. Right?

        • I'd mod you up if I had the points to do so. My team develops locally on Macs or Ubuntu, then pushes to CI in the cloud, which builds deployable artefacts and deploys to cloud environments. All our cloud environments are interchangeable and if they were all ARM, I'd lose very little, since I already can't run them locally. (Well, actually, I could with some scripts around Docker and/or Kubernetes, but nobody on my team has ever had occasion to.)

    • by Jaime2 ( 824950 ) on Friday February 22, 2019 @04:55PM (#58165980)

      You've never seen how half of the corporate stuff comes into existence. It starts as an amalgamation of whatever the most tech-savvy employee managed to piece together. They pieced it together on whatever they run on their desktop.

      I've seen 32-bit servers kept around to run something that has an ancient emailer program embedded in it that won't cooperate with 64-bit operating systems. It's not that there aren't any 64-bit email clients, it's that no one has the time to figure out how to replace an internal part of this ball-of-mud that runs the company.

      I've seen Windows XP in data centers because some ancient piece of software that runs the door locks hasn't been updated in twenty years and it has a driver that doesn't play well with anything newer.

      Slightly off topic, but similar, was the time when we had trouble buying a server because the software specs were written in 2001 and stated a minimum processor clock frequency of 3.2GHz, but the world had moved on to the Core architecture and clock speeds went way down (but performance went way up).

      • You've never seen how half of the corporate stuff comes into existence. It starts as an amalgamation of whatever the most tech-savvy employee managed to piece together. They pieced it together on whatever they run on their desktop.

        I've seen 32-bit servers kept around to run something that has an ancient emailer program embedded in it that won't cooperate with 64-bit operating systems. It's not that there aren't any 64-bit email clients, it's that no one has the time to figure out how to replace an internal part of this ball-of-mud that runs the company.

        I've seen Windows XP in data centers because some ancient piece of software that runs the door locks hasn't been updated in twenty years and it has a driver that doesn't play well with anything newer.

        Slightly off topic, but similar, was the time when we had trouble buying a server because the software specs were written in 2001 and stated a minimum processor clock frequency of 3.2GHz, but the world had moved on to the Core architecture and clock speeds went way down (but performance went way up).

        At a place I used to work we had a computer sitting in the back running Windows 3.11; it ran some software automation of satellite tuners and dish steering. Eventually the hardware died and we finally cobbled together a solution to get it to run on Windows XP, as that's the newest OS it would run on. This was after XP was end-of-lifed. Fortunately we had enough old XP keys in the files from old installs no longer in use. The developer for the software is out of business and the current owner of the copyright

      • by imgod2u ( 812837 )

        Sure but that kinda illustrates my point. It isn't so much "sticking with x86" that's the issue. The environments that require that much "keep the exact image the way it is" limits migration to the latest and greatest Intel-based AWS/Azure server running 64-bit Linux just as much as it limits moving to ARM 64-bit Linux.

        And as I know it, there isn't significant marketshare or money to be made from "running Windows XP on a VM". Most of the current revenue is from turnkey people who use cookie-cutter database+

  • When you can't try out software on some cheap commodity hardware, it never even gets to the cost-benefit analysis. Fronting tens of thousands of dollars just to try out a software-hardware combination is a non-starter in almost any company. x86 wins because the difference between a vm running on a dev's/sysadmin's laptop and one running in a VMWare or Hyper-V architecture is almost non-existent - they know what they're getting before they've spent any money.

    At least ARM has some netbooks floating around w

    • by thereddaikon ( 5795246 ) on Friday February 22, 2019 @04:49PM (#58165936)
      I don't think IBM really ever cared all that much. AIM served to help offset the R&D costs somewhat. But I think IBM primarily made POWER for themselves. They wanted a modern architecture for the growing server market that would both be a decent basis to run VMs of legacy mainframe code on and also natively run modern code at the same time. They show no sign of giving up on the architecture over a decade after Apple dropped them, and they sell multiple lines of servers using them. POWER doesn't really have to worry about running non-native code or cross-platform development because the only thing POWER servers run is IBM code. The old model of not selling iron but selling a solution is very much in place today. They sell you the software, server and support all in one package. Unless you get an itemized bill you don't even know how much the systems cost. They also don't seem all that interested in the PC server space either.

      Motorola on the other hand seemed more willing and eager for PPC to catch on. It didn't work out, but you did see some random machines adopt it for short periods: the BeBox, the half-baked second chance at Amigas, random accelerator cards for various obsolete machines, etc. The best shot PPC ever had at getting wide adoption was during the short period Apple licensed Mac clones in the mid-'90s. Jobs shut that down when he returned. Regardless of whether that was the right move, it did mean PPC would never be a serious contender to x86.

      • You're giving IBM's upper management far more credit than they deserve; when I think of Armonk, I think of Mike the Headless Chicken. [wikipedia.org]

      • As an interesting aside, the original Killer NIC from Bigfoot Networks has a PowerPC chip, and can be used as an accelerator for the Amiga :

        https://hothardware.com/news/amiga-enthusiast-gets-quake-running-on-killer-nic-powerpc-processor [hothardware.com]
        https://www.youtube.com/watch?v=P3k-6_-5ZIM [youtube.com]
      • I remember yelling profusely at the Amiga community that they should drop all this PPC nonsense and just adopt x86. The community insisted they didn't want Intel Inside, but more importantly, the people who owned the rights to AmigaOS were scared to death that people would pirate the OS and run it on generic hardware, so they insisted on re-badging buggy PPC dev boards (which in one case, couldn't even use disk DMA correctly).

        Same mindset as the 80's, with predictable results.

        Ironically, fast 68K cores imp

      • They show no sign of giving up on the architecture over a decade after Apple dropped them and they sell multiple lines of servers using them.

        Apple only ever sold one truly POWER-compatible processor, the PPC601. After that they dropped bits and pieces of the POWER ISA, numerous instructions falling by the wayside.

        Motorola only ever really cared about embedded processors. They had to make more credible processors for Apple (which provided mostly design input and funding to the PowerPC enterprise, they didn't have a big silicon lab at the time) but most of what Motorola did with PPC was build VRTX or BREW phones, and make embedded chips for automo

    • At least ARM has some netbooks floating around with the architecture. IBM didn't bother to try and keep Apple on their architecture, and that has hurt the ability to court new customers.

      Only the first PowerPC (601) implemented the full POWER instruction set, and Macs at the time didn't support POSIX like AIX does, so that doesn't seem as if it ever could have been very relevant.

    • I expect you can get a reasonable Blackbird package going for about 2k. While expensive, it's moderately compelling as a desktop. The problem for ARM is that a lot of the desktop feel is related to single-core performance and expansion; not a lot of cheap ARM boards actually provide all that much oomph, nor do they have the PCIe bus needed to connect to a reasonably powerful graphics accelerator.

  • x86 won on price (Score:3, Insightful)

    by perpenso ( 1613749 ) on Friday February 22, 2019 @04:40PM (#58165872)
    x86 won on price, on the desktop, on the server. That is the simple truth.

    As for stability and bugs, cross-platform development is superior. Bugs that are hard to manifest on one hardware architecture may manifest quite readily on a different one. Having worked on various cross-platform projects, I've seen the main x86-based dev teams visit the alternative-architecture teams (e.g. PPC) when they were stumped debugging; they eventually came to appreciate the alternative architectures. A single architecture target allows for longer-lived quirky bugs. The simple truth is that cost is more important to many.

    This is not to say ARM will be successful in server space, just that it will be about cost and little else.
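A classic instance of a bug that stays latent on one architecture is store reordering: x86's strong memory model usually hides a missing barrier that ARM's weaker model can expose. A hedged single-threaded sketch of the pattern (the real failure needs two threads and unlucky timing, so it won't show up here):

```c
#include <stdatomic.h>

static int payload;          /* plain data                  */
static atomic_int ready;     /* flag that guards it         */

void producer(void) {
    payload = 42;
    /* Bug: relaxed ordering. x86 doesn't reorder stores with
     * other stores, so this "works at home"; on ARM the flag
     * can become visible before the payload, and a concurrent
     * consumer reads stale data. The fix is
     * memory_order_release here and acquire in the load below. */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

int consumer(void) {
    while (!atomic_load_explicit(&ready, memory_order_relaxed))
        ;                    /* spin until flagged          */
    return payload;          /* 42... usually               */
}
```

Run single-threaded, producer-then-consumer always yields 42; the x86-only test suite never sees the difference, which is exactly the "longer-lived quirky bugs" problem.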
  • It seems to me that mobile apps wouldn't be a thing either if this logic was true.
    • It seems to me that mobile apps wouldn't be a thing either if this logic was true.

      For mobile devices, battery life is critical. So x86 isn't an option.

    • I've developed for iOS/Android and for Windows 10. Windows 10 was SOOOooooo much easier to develop for because of exactly what Linus talks about. I could quickly figure out why my weird touch gesture wasn't working by hitting "run", not by compiling, transferring, launching, remotely debugging etc.

  • by spudnic ( 32107 ) on Friday February 22, 2019 @05:06PM (#58166062)

    Why not just build a system around a Crusoe processor at home and let it emulate anything you want to eventually run the software being developed on?

    Seems pretty cut and dried to me.

  • by supremebob ( 574732 ) <themejunky&geocities,com> on Friday February 22, 2019 @05:08PM (#58166070) Journal

    At some point in the near future, Macbooks will start coming with custom Apple designed ARM processors instead of Intel chips.

    At that point, the trendy urban hipsters buying these Macbooks will be developing on ARM and will want to deploy their code on ARM based servers. Your local IT department might say no, but I'm sure that the cloud hosting providers will gladly oblige.

    • Will they use Apple servers, too, and will Apple's ARM chips (which they are designing themselves) be compatible with ARM's official cores? Will the servers run OSX?

      I remember all the problems with Motorola PPC chips not being binary compatible with IBM PPC. There's more to think about than just a base ISA, and ARM has more than one.

  • Why not use smartphones as the testing platform? That is ARM everywhere.
  • Pinebook Pro (Score:5, Interesting)

    by darkain ( 749283 ) on Friday February 22, 2019 @05:41PM (#58166246) Homepage

    I'm currently hoping the Pinebook Pro does very well when released later this year. I'm already planning on purchasing one for FreeBSD ARM development. The specs still are not the best, but are decent enough for some interesting development tasks. A portable ARM laptop with a hex-core processor, 4GB RAM, 64/128GB eMMC, Mini-PCIe with NVMe support, a 1080p IPS display, a 10,000 mAh battery, and USB-C that supports charging + 4K/60Hz video. This thing will be a little mini beast for $200. Most of programming is reading/writing code more so than executing it, so I believe this should be plenty powerful for solid web development and system service programming. This laptop NEEDS to do well to show the industry as a whole that these are the type of devices we WANT.

    • I'm currently hoping the Pinebook Pro does very well when released later this year.

      Will you be able to use all the hardware without goofy kernels? Because not being able to do that with PineA64+ hurt that platform at launch. Goddamn Allwinner.

  • Most of the web applications these days are developed using frameworks/languages that are cross-platform (like Node.js, .NET Core, Java...). With these frameworks and app containers, it doesn't really matter what OS or hardware is running it. Server farms will move to a more efficient way to manage their server loads. I think performance/power and performance/price will be critical in deciding who wins. You can't rule out ARM right now.
  • A lot of hosted applications, especially those where the heavy client lifting has been moved client-side (Angular, React, etc.), could be described as accept parameters, call a database based upon those parameters, organize data into an acceptable payload and return that payload. It's hard to see why these would be dependent upon x86. Same for ETLs. If the power consumption/cost argument for ARM servers is really as compelling as being advertised, there might be something there.

    ARM may not be a fit for e

  • Linus seems to be forgetting about the massive shift in software development that has occurred, to consuming software as container-based microservices, and providing it as the same.

    No one cares what architecture Redis is running on, as long as the service provides the same API contract and can be consumed by existing code. x86, ARM, POWER, no one cares - run it where it performs the best at the lowest cost, thanks.

    The same is true of all of the other microservices that you consume, and all of the microservic

  • How does Linus think all those mobile apps get developed, since smartphones are 99.9% ARM? Well you develop on an x86 desktop, then deploy to ARM cell phones. Acting like developing on x86 desktops and deploying to ARM servers is some impossibility might be true at his level, the kernel level, but at the business process level you very rarely care. I'd say 99% of all bugs are due to some bad code or flawed design in something you wrote or a library you used. On the rare occasion that the system libraries of

  • 1. Parts. People like low cost, fast, low power and heat parts. That work better. RAM, storage, networking all has to be ready, working and fully supported.
    2. Software. The CPU has to support something great to make people change everything and learn to code a new system.
    3. Cost. Power savings while doing more math and networking and ... better than anything.
    4. Ability of staff. People have to learn to code something new. That's a lot of code to bring over.
    5. What is the advantage when power
  • As a hardware and software developer for over 25 years, I have considered ARM many times and always run into the same problem. As much as we like to talk about multi-threading, there are still many applications where single-thread performance is the most important. ARM performance is just barely good enough for mobile devices and very limited Android TV boxes. The performance of ARM is catching up, though. Maybe in 2-3 generations the performance will be good enough for people to tolerate ARM laptops

    • by Junta ( 36770 )

      Not just single-thread performance; performance per watt in the 80+W TDP area actually favors AMD and Intel. The ARM vendors did a fantastic job on typical power consumption with frequent sleep and on providing serviceable performance in low power envelopes, but have not yet demonstrated good performance in high power usage environments.

      Part of it is the relative lack of experience, a great deal of it is that Intel has invested in all sorts of third party and first party compiler and librar

  • Comment removed based on user account deletion
  • Long before they were gobbled up by Oracle, Sun used to offer universities cheap Sun workstations. They had a trade-in program where you could get half off on your upgraded computer by turning in any old Sun server. They never asked for the computer back, but the same department couldn't use the machine for two upgrades, which encouraged it to be recycled into a different department who could then upgrade it to something else. University discount was typically 50% and sometimes 65%, and the trade-in dropped

  • Google, Amazon and Oracle all control their own server architectures, each use a different processor architecture and none of them care what Torvalds thinks. These companies have a great deal to say about what the cloud is and they don't agree on processor.

    The processor that runs code deployed in a high level language really does not matter, does Linus really not understand that?

    The reasons x86 grew to dominate have little to do with current requirements and aren't interesting in predicting the future.

  • On the one hand, he has a point: developer-friendly form factors are x86, and that's unlikely to change due to the propensity for developers to have some x86 app they want; better to hedge your bets.

    However, the presumption that cloud providers would prefer x86 because it can carry a price premium fails to acknowledge that the providers can potentially get wider margins out of an ARM ecosystem. In x86, you have two vendors and thus they only get so desperate to compete against one another. In ARM server

  • It's amazing that Linus didn't think this through further and deeper. There is no 'at home' issue with everybody running ARM on their phones and Raspberry Pis anyway; they're actually more in use than 'regular PCs'. But that's not the point.

    The point is that Intel architecture downscaling is stopping. The only reason why Intel compatible architecture has the lead is because they can run their less efficient computing architecture on smaller silicon. And that's slowing down due to physical limitations. It
