Linus Torvalds on Why ARM Won't Win the Server Space (realworldtech.com)
Linus Torvalds: I can pretty much guarantee that as long as everybody does cross-development, the platform won't be all that stable. Or successful.

Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment).

Which means that you'll happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better. This is true even if what you mostly do is something ostensibly cross-platform like just run perl scripts or whatever. Simply because you'll want to have as similar an environment as possible.
Which in turn means that cloud providers will end up making more money from their x86 side, which means that they'll prioritize it, and any ARM offerings will be secondary and probably relegated to the mindless dregs (maybe front-end, maybe just static html, that kind of stuff).

Guys, do you really not understand why x86 took over the server market? It wasn't just all price. It was literally this "develop at home" issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a "real server". And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.

Do you really not understand? This isn't rocket science. This isn't some made up story. This is literally what happened, and what killed all the RISC vendors, and made x86 be the undisputed king of the hill of servers, to the point where everybody else is just a rounding error. Something that sounded entirely fictional a couple of decades ago.

Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit "hyperscaling" model is idiotic, when you don't have customers and you don't have workloads because you never sold the small cheap box that got the whole market started in the first place.
Exactly why RedHat is losing to Ubuntu (Score:5, Insightful)
Re:Exactly why RedHat is losing to Ubuntu (Score:5, Insightful)
People are using Raspberry Pis; as they become more and more capable, that could be the thin end of the wedge, so to speak.
Servers are something else... (Score:2)
The server market is actually different now... DCS is bigger than home servers...
ARM has this now:
https://github.com/ARM-software/sbsa-acs
but well thought out, just dated, Linus Torvalds...
John Jones
Re: Exactly why RedHat is losing to Ubuntu (Score:4, Insightful)
You obviously don't code. 4:3 monitors are better for coding - less scrolling - more on the screen. When wide screens first came into vogue, a lot of us dev types found ourselves turning them 90 degrees to compensate for the lack of screen real estate.
Also, "the patriarchy" has nothing to do with ARM as a platform vs x86. Stop trying to force your SJW agenda into everything.
Re: (Score:2)
You obviously don't code. 4:3 monitors are better for coding - less scrolling - more on the screen.
It depends on the size of the display.
I code both professionally and at home. On my 27" 16:9 display, splitting the screen vertically makes for two nice workspaces side-by-side.
Re: (Score:3)
He, like Richard Stallman, is a misoginist and a hater. Did you *look* at the incivility posted by him in e-mails?
The word is "misogynist", and IMHO neither Stallmann or Torvalds is one. Oh sure, maybe they're not so good at communicating with people who aren't aspie coder-types, which could cause them problems with women (who tend to not be aspies) but haters? Most of the alt-right/libertarian types on Slashdot would consider ME an SJW and I'm wonder what drugs have you been doing.
Now, ESR, he's an actual misogynist.
Widescreen is the way to go, whether you're writing software, editing video, or using virtual machines.
Why yes, but you're forgetting that some coders get set in their ways or stick to tradition. A bunch
Re: (Score:3)
I've followed ESR for quite some time now and haven't seen any misogyny yet. Do you have any examples to back up your claim?
Re: (Score:3, Insightful)
He warned against SJW entrenchment and CoC pushing. Obviously that makes him a misogynist, a racist, anti-immigrant, and pro-wall.
Re: (Score:3)
No, his racism is what makes him a racist. His conspiracy theories about "false rape accusations" are what make him a misogynist.
Eric S. Raymond is the sort of Aspie who claims to have all sorts of expert skills. The Lazarus Long fanboyism went to his head.
He's your typical libertarian-aspie internet crackpot asshat.
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://rationalwiki.org/wiki/... [rationalwiki.org]
Re: (Score:2)
This is not a word, but it should be.
But only in Spanish.
Re: (Score:2)
I have an old 4:3 touch screen LCD panel from an old video poker machine. It doesn't work well with Windows; Linux, on the other hand, takes no work. Plug in the USB cable, load the touchscreen drivers (I think they're already in most recent distros since the mainstream adoption of touch screens; I haven't used it in 5 years), and calibrate.
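As an aside, part of why that "just works" is Linux's generic evdev input layer: any recognized touchscreen shows up as a /dev/input/eventN node speaking one uniform event protocol. A hypothetical minimal reader sketch follows; the event0 path is an assumption (check /proc/bus/input/devices for the actual node).

/* Sketch: dump raw X/Y touch events from a Linux evdev device.
 * The device path is an assumption; it varies per machine. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/input/event0", O_RDONLY);  /* hypothetical node */
    if (fd < 0) { perror("open"); return 1; }
    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_ABS && ev.code == ABS_X)
            printf("x = %d\n", ev.value);
        else if (ev.type == EV_ABS && ev.code == ABS_Y)
            printf("y = %d\n", ev.value);
    }
    close(fd);
    return 0;
}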
Re: (Score:3)
Not quite the same for the Raspberry Pi. The coding is for integrated devices, and not so much for custom server stuff. Linus is basically spot on with his assessment. Many server technologies were originally coded for a desktop PC under someone's desk, because the RISC-based Unix servers were really expensive and, for most server tasks, overkill systems, so the desktop did the job. Over time, as the software expanded, it needed a more powerful computer. So they ended up making x86 based serve
Re: (Score:2)
I'm also a bit surprised by his assertion that developers all work on their machine at home. A lot of developers now just use a test VM from the same cloud provider for all of their debugging and most modern IDEs have support for doing remote builds and remote debugging. It doesn't matter what OS or architecture your laptop uses, because that's not the computer that's running your compiler or test suite.
Re: (Score:3)
It was metaphorical, "home" being wherever your dev environment is. It even says so in the summary. He was talking about like architectures. Until everybody has access to decent-performance ARM-based servers and home PCs, it will not be adopted far and wide. Hell, I didn't even read the article and I understood that.
Re: (Score:2)
The problem is, he's talking about cloud deployments. AWS is deploying a load of ARM servers, other providers probably will as well. If your workflow involves developing on VMs from the same cloud provider that you use for deployment, then Linus' argument doesn't make sense: if you can get ARM servers for deployment, you can get them for development.
The problem with ARM to date has been that there aren't any compelling server chips.
Re: (Score:2)
Give me a Raspberry Pi with 16GB RAM, please, or at least with the ability to add DIMMs, and then we will be able to talk...
Re: (Score:2)
start providing ARM Linux laptops with very similar processors to the ones used in servers. I'll buy a few myself.
They don't have to be identical, just function similarly. For instance, if you can do development on a 16-core Raspberry Pi that functions identically to a 64-core ARM in the cloud, that would be enough in many cases. What you don't want is subtle changes between the systems and/or the local development to be an unsupported variation. The local development needs to be just as supported and tested as the cloud version, if not more so. No one wants to hack together a local development environment that is only
Re: (Score:3, Interesting)
He may be right for Linux based development, but a lot of people are using stuff like Azure for their cloud services now. Write code in .NET using Azure services, and you don't care what architecture the server is.
Having said that, even for Linux stuff I think it's a mistake to think that people won't be using ARM for their development machines sooner or later. Super long battery life but affordable laptops, or just having a dedicated local ARM server rather than trying to recreate the server environment on
"develop at home" really "dev/deploy on cheapest" (Score:3)
Re: (Score:2)
I think Linus is half-right. He's right that nobody is going to want to run an AWS ARM server. He's wrong in that a lot of the world is moving to serverless. There aren't weird platform-dependent bugs between Node.js on a Windows, Linux or ARM server. I just upload the code to the cloud and it could be running on PowerPC for all I care.
For serverless applications, if the cloud provider can get the javascript to run the user will be none the wiser. And they can develop at home.
Arguably thoug
Re: (Score:2)
For serverless applications, if the cloud provider can get the javascript to run the user will be none the wiser.
How is that serverless? Just because YOU don't have to maintain the server does not mean it does not exist. I think you should re-read what you typed and think about it a little bit to see how that can't be.
"Losing".. ??? (Score:3)
IBM is buying RedHat for $34 billion.
2018 RHAT revenue was $2.9 billion. Canonical last year had revenues of $125.97 million. That's a 20x multiple.
The market share follows a similar trend.
I wish I was "losing" by having a 20x multiple.
Re:"Losing".. ??? (Score:4, Informative)
2018 RHAT revenue was $2.9 billion. Canonical last year had revenues of $125.97 million. That's a 20x multiple.
The market share follows a similar trend.
The market share does not follow a similar trend, not even if you restrict yourself to the server space, and RH barely even registers in the desktop space.
Red Hat has focused on an easier-to-monetize market segment, that's all.
Re: (Score:2)
The main reason I develop on CentOS is that I have the same environment as RHEL.
So that hides the use case you mention from the data.
Re: (Score:2)
The main reason I develop on CentOS is that I have the same environment as RHEL.
So that hides the use case you mention from the data.
Not really. Just combine CentOS and RHEL numbers. It's still much smaller than Ubuntu.
Re: (Score:2)
Of the 3, RHEL is the only one whose distribution system even generates clear numbers.
Re: (Score:2)
A large part of the Ubuntu users doesn't pay Canonical a cent. And that is probably a major factor in its popularity.
A better comparison would be Debian vs Ubuntu installs on servers. Although the different release cycles may be relevant there: Debian has long and unpredictable cycles, while Ubuntu has a release every 6 months. On the other hand, Debian releases deserve the predicate "stable" while Ubuntu releases have had some rough edges.
Re: Exactly why RedHat is losing to Ubuntu (Score:2)
Do you have statistics for that? I've never seen an Ubuntu server.
Re: (Score:2)
It's super nice to have that whole system there, if you can remote into it. Sure, you can live without it, but it's also handy to have all that stuff when it comes time to do troubleshooting. I wouldn't make the system completely headless for the same reason; I'd want to deliver something with onboard video. If you find yourself at the customer site trying to do troubleshooting, it's a huge benefit.
On the other hand, there are security repercussions to having all that crap on the system, but that's the only
Re: (Score:2, Informative)
Well, that's not a new problem. Back 'in the day' we had Windows Server trying to be secure, reliable, and usable, and SQL Server trying to be 'enterprise' grade. Before that, I actually sold NetWare servers running Advantage db and PostgreSQL, with fabulous uptime, performance, and security, and they were manageable. And before that, LANtastic. Ugh.
And seeing sensitive apps running on what are really general-purpose platforms is painful. For a little while I did break/fix support for a blood bank business. running on a
Re: (Score:2)
I have found, and continue to find, Ubuntu in commercial products it has no business being in, though. I've seen full building security systems that would be best served by an embedded OS and web interface running on top of full Ubuntu, GNOME and all, on an ATX motherboard in a metal box bolted to the wall. The system is completely headless, but it has a GUI. The only reason I can think of is that Ubuntu is what the developers were familiar with.
I went into McDonald's the other day to get coffee and saw their menu screens reboot; a terminal popped up with $user@ubuntu.
Re: (Score:2)
ExacqVision NVRs do that. I noticed it about 3 years ago. But I think that is more shitty devs trying to make their job easier.
Re: (Score:2)
It's funny that Linus doesn't mention an OS at all in his article. The same 'deploy on what you develop on' was the main argument in favor of Windows Server - until Linux came along and was a compelling enough economic model to break that mold. And these days, with VM technology readily available, if you're deploying on RedHat, you can just run a RedHat VM on your development box. In fact, the only part of the development puzzle that's no longer portable (if you exclude desktop-native apps) is iOS. I w
Re: (Score:3)
What exactly is going to stop the developers from running ARM at home too though?
Finding them easily enough.
Raspberry Pis are one thing, but not every developer can build their own machine (used to be blasphemy, but in 2019 I have to deal with a LOT of devs who can't/won't handle hardware _at all_). So until there are COTS ARM-based laptops, there will not be a lot of interest in ARM servers.
Re: (Score:2)
Two examples of "COTS ARM-based laptops" are a Pinebook and an Android tablet with keyboard running Termux or GNURoot. Perhaps one reason they haven't become more popular is that they can't run the occasional Wine application.
Re: (Score:3)
A tablet with a keyboard isn't a notebook, it's a tablet with a keyboard. Pinebook has only 2GB of RAM, which is fine for pine64 as an embedded system or media player, but fucking worthless in a laptop. Also, because Allwinner. I have an original (old) PineA64+ 2GB and it's a pretty decently powerful little piece of hardware, but it's kind of a PITA. There's still no downloadable Linux image with a current kernel and drivers, for example. You have to install and then upgrade your way there.
Re: (Score:2)
A tablet with a keyboard isn't a notebook, it's a tablet with a keyboard.
Say you have a first computer with 8 GB of RAM and a permanently attached keyboard capable of folding around behind its screen, and a second computer with 8 GB of RAM and a clip-on keyboard that the user can detach. What is the key difference between these computers that is relevant to their usability?
Pinebook has only 2GB of RAM, which is [...] fucking worthless in a laptop.
From 2002 to 2006, 32-bit operating systems shipped on new desktop and laptop PCs, and they tended to come with 1 GB to 2 GB of RAM. Were desktop and laptop PCs made in 2002 to 2006 likewise "fucking worthless
Re: (Score:2)
Say you have a first computer with 8 GB of RAM and a permanently attached keyboard capable of folding around behind its screen, and a second computer with 8 GB of RAM and a clip-on keyboard that the user can detach. What is the key difference between these computers that is relevant to their usability?
Durability. The connector is a common point of failure, and the hinges tend to be inferior.
From 2002 to 2006, 32-bit operating systems shipped on new desktop and laptop PCs, and they tended to come with 1 GB to 2 GB of RAM. Were desktop and laptop PCs made in 2002 to 2006 likewise "fucking worthless"? Or what fundamental thing about computing has changed since then, other than the increasing aggressiveness of web analytics and adtech to eat RAM while continuously tracking viewers' browsing?
That's not enough? The shift to 64-bit has also had an effect; memory use hasn't doubled, but it has increased for that reason, among all the other reasons.
Having only 2GB RAM in even a NAS is a liability, especially if you want to use fancy filesystems like ZFS. In a machine expected to run desktop applications, it is severely limiting. Even if it's not expected to provide full desktop performance, you will often run
Re: (Score:2)
Drivers for what? A sound card on your cloud server?
Re: (Score:2)
You're missing Linus' point. The best way to develop for a server is with a personal computer running the same thing (or at least close). You don't want the sound card on your server. You want it on the laptop that you're using to develop for your cloud server.
Re: Exactly why RedHat is losing to Ubuntu (Score:2)
Actually, in developing my cloud services, for personal use, I bought a RPi. Worth every penny. Easy, good emulation of a remote server 'out there', pretty easy to migrate to a 'real' server. Then redeployable for a new project. Also doesn't hose up my work machine under any circumstances.
If you're seriously working in Starbucks, get batteries for it, do the serial terminal over USB, log it into Google Starbucks; later you tear out the GUI and use VNC Viewer.
Re: (Score:2)
Good point, but the prevailing orthodoxy Linus alludes to is doing everything on a highly spec'd developer laptop with containers. So you max out the RAM, and because you're kitchen-sinking, the computer slows to a crawl! :)
If it were me, I'd be buying one of those sub-$100 4GB-RAM Armbian boxes with a Rockchip, Allwinner or Amlogic SoC running a mainline 5.0 kernel. The server image you cross-compile on your laptop then runs bare metal on a fanless sandwich-sized box on your desk, stowed away neatly under
Re: Exactly why RedHat is losing to Ubuntu (Score:2)
Most container work seems to be in the config and packaging. The apps and code are readily developed in mainline-kernel distros. But I do work in VMs often. Have to be careful I don't have 500GB of versions and lose track of the best fallback...
Re: (Score:2)
Right now, that's also true in the server space. It's hard to imagine that an ARM partner will manage to produce a decent server chip with production at the sort of scale that you'd need for a cloud datacentre, but not manage to produce a decent laptop or workstation.
My new mobile phone is a generation old, so it was pretty cheap, and it's a lot more powerful than the laptop I used for development two upgrades ago. It has 8GB of RAM, a quad-core 2.45GHz Qualcomm Kryo CPU, and 128GB of flash. If I want to d
Re: (Score:2)
People use ubuntu because the software stack is up to date.
"up to date" Is not what you want in a server environment. Tried and True is what you want. AKA Stable! Thats what this conversation is based on.
Re: (Score:2)
Also, normally with a little work you can get bleeding-edge programs to work if you compile them. However, that can open a whole can of dependency worms.
Re: (Score:2)
Seems you are correct; I have not used either in a decade. Maybe I'm mistaken, but what I described seems to me to be how it used to be. Sorry for my misinformed post.
What differences can you actually notice? (Score:3)
Assuming you aren't rolling your own thread and atomics libraries, is there a perceivable difference on the API side when moving from x86 to ARM or any other architecture? Hell, if this argument were true, there are enough differences between the various x86 iterations that devs would want the specific *family* of processors they develop on to be in the servers they use...
I posit there's probably enough of a difference between AMD's x86 implementation and Intel's...
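There is at least one well-known source-level difference even in plain C: the signedness of unqualified char is implementation-defined, and it differs between the two platforms (signed on x86 Linux, unsigned on ARM Linux per the AAPCS ABI). A minimal sketch, assuming default compiler flags (options like -fsigned-char override this):

/* Identical code, different branch depending on the target ABI. */
#include <stdio.h>

int main(void) {
    char c = (char)0xFF;   /* -1 where char is signed, 255 where unsigned */
    if (c < 0)
        puts("plain char is signed here (typical of x86 Linux)");
    else
        puts("plain char is unsigned here (typical of ARM Linux)");
    return 0;
}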
Re: (Score:2)
The big difference, as far as I can tell, is this:
Native:
1. Compile code.
2. Run executable.
Cross-platform:
1. Compile code.
2. Push executable to emulator or target hardware.
3. Run executable.
Re: (Score:2)
But if you're using continuous integration, then it is the same. Right?
Re: (Score:2)
I'd mod you up if I had the points to do so. My team develops locally on Macs or Ubuntu, then pushes to CI in the cloud, which builds deployable artefacts and deploys to cloud environments. All our cloud environments are interchangeable and if they were all ARM, I'd lose very little, since I already can't run them locally. (Well, actually, I could with some scripts around Docker and/or Kubernetes, but nobody on my team has ever had occasion to.)
Re:What differences can you actually notice? (Score:5, Interesting)
You've never seen how half of the corporate stuff comes into existence. It starts as an amalgamation of whatever the most tech-savvy employee managed to piece together. They pieced it together on whatever they run on their desktop.
I've seen 32-bit servers kept around to run something that has an ancient emailer program embedded in it that won't cooperate with 64-bit operating systems. It's not that there aren't any 64-bit email clients, it's that no one has the time to figure out how to replace an internal part of this ball-of-mud that runs the company.
I've seen Windows XP in data centers because some ancient piece of software that runs the door locks hasn't been updated in twenty years and it has a driver that doesn't play well with anything newer.
Slightly off topic, but similar, was the time when we had trouble buying a server because the software specs were written in 2001 and stated a minimum processor clock frequency of 3.2GHz, but the world had moved on to the Core architecture and clock speeds went way down (but performance went way up).
Re: (Score:2)
You've never seen how half of the corporate stuff comes into existence. It starts as an amalgamation of whatever the most tech-savvy employee managed to piece together. They pieced it together on whatever they run on their desktop.
I've seen 32-bit servers kept around to run something that has an ancient emailer program embedded in it that won't cooperate with 64-bit operating systems. It's not that there aren't any 64-bit email clients, it's that no one has the time to figure out how to replace an internal part of this ball-of-mud that runs the company.
I've seen Windows XP in data centers because some ancient piece of software that runs the door locks hasn't been updated in twenty years and it has a driver that doesn't play well with anything newer.
Slightly off topic, but similar, was the time when we had trouble buying a server because the software specs were written in 2001 and stated a minimum processor clock frequency of 3.2GHz, but the world had moved on to the Core architecture and clock speeds went way down (but performance went way up).
At a place I used to work we had a computer sitting in the back running Windows 3.11; it ran some automation software for satellite tuners and dish steering. Eventually the hardware died and we finally cobbled together a solution to get it to run on Windows XP, as that's the newest OS it would run on. This was after XP was end-of-lifed. Fortunately we had enough old XP keys on file from old installs no longer in use. The developer of the software is out of business, and the current owner of the copyright
Re: (Score:2)
Sure, but that kinda illustrates my point. It isn't so much "sticking with x86" that's the issue. The environments that require that much "keep the exact image the way it is" limit migration to the latest and greatest Intel-based AWS/Azure server running 64-bit Linux just as much as they limit moving to ARM 64-bit Linux.
And as far as I know, there isn't significant marketshare or money to be made from "running Windows XP on a VM". Most of the current revenue is from turnkey people who use cookie-cutter database+
Same issue with POWER (Score:2)
When you can't try out software on some cheap commodity hardware, it never even gets to the cost-benefit analysis. Fronting tens of thousands of dollars just to try out a software-hardware combination is a non-starter in almost any company. x86 wins because the difference between a VM running on a dev's/sysadmin's laptop and one running in a VMware or Hyper-V architecture is almost non-existent - they know what they're getting before they've spent any money.
At least ARM has some netbooks floating around w
Re:Same issue with POWER (Score:5, Insightful)
Motorola, on the other hand, seemed more willing and eager for PPC to catch on. It didn't work out, but you did see some random machines adopt it for short periods: the BeBox, the half-baked second chance at Amigas, random accelerator cards for various obsolete machines, etc. The best shot PPC ever had at wide adoption was during the short period Apple licensed Mac clones in the mid-90s. Jobs shut that down when he returned. Regardless of whether that was the right move, it did mean PPC would never be a serious contender to x86.
Re: (Score:2)
You're giving IBM's upper management far more credit than they deserve; when I think of Armonk, I think of Mike the Headless Chicken. [wikipedia.org]
Re: (Score:2)
https://hothardware.com/news/amiga-enthusiast-gets-quake-running-on-killer-nic-powerpc-processor [hothardware.com]
https://www.youtube.com/watch?v=P3k-6_-5ZIM [youtube.com]
Re: (Score:3)
I remember yelling profusely at the Amiga community that they should drop all this PPC nonsense and just adopt x86. The community insisted they didn't want Intel Inside, but more importantly, the people who owned the rights to AmigaOS were scared to death that people would pirate the OS and run it on generic hardware, so they insisted on re-badging buggy PPC dev boards (which in one case, couldn't even use disk DMA correctly).
Same mindset as the 80's, with predictable results.
Ironically, fast 68K cores imp
Re: (Score:2)
They show no sign of giving up on the architecture over a decade after Apple dropped them and they sell multiple lines of servers using them.
Apple only ever sold one truly POWER-compatible processor, the PPC601. After that they dropped bits and pieces of the POWER ISA, numerous instructions falling by the wayside.
Motorola only ever really cared about embedded processors. They had to make more credible processors for Apple (which provided mostly design input and funding to the PowerPC enterprise; they didn't have a big silicon lab at the time), but most of what Motorola did with PPC was build VRTX or BREW phones, and make embedded chips for automo
Re: (Score:2)
At least ARM has some netbooks floating around with the architecture. IBM didn't bother to try and keep Apple on their architecture, and that has hurt the ability to court new customers.
Only the first PowerPC (601) implemented the full POWER instruction set, and Macs at the time didn't support POSIX like AIX does, so that doesn't seem as if it ever could have been very relevant.
Re: (Score:2)
Only the first PowerPC (601) implemented the full POWER instruction set, and Macs at the time didn't support POSIX like AIX does, so that doesn't seem as if it ever could have been very relevant.
The ANS runs AIX on PPC 604/e.
What's an ANS? Is that an anus that's lost its home (U)? Oh, Apple Network Server, which was not a Macintosh. (Though it was based on the Power Macintosh 9500 mainboard, it was specifically gimped so as not to be able to run MacOS.) I said "Macs", not "Apple computers". At the time, people weren't buying Apple's servers (I could just stop there and the sentence would be reasonably true) to do development on. If you wanted to develop software for AIX, you bought an RS6k. The lowest-end sort-of-pizza-box ones
Re: (Score:2)
I expect you can get a reasonable Blackbird package going for about $2k. While expensive, it's moderately compelling as a desktop. The problem for ARM is that a lot of the desktop feel is related to single-core performance and expansion; not a lot of cheap ARM boards provide much oomph, nor do they have the PCIe bus needed to connect a reasonably powerful graphics accelerator.
x86 won on price (Score:3, Insightful)
As for stability and bugs, cross-platform is superior. Bugs that are hard to manifest on one hardware architecture may manifest quite readily on a different architecture. Having worked on various cross-platform projects, I've seen the main x86-based dev teams visit the alternative-architecture teams (e.g., PPC) when they were stumped debugging; they eventually appreciated the alternative architectures. A single architecture target allows for longer-lived quirky bugs. The simple truth is that cost is more important to many.
This is not to say ARM will be successful in server space, just that it will be about cost and little else.
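To make the point above concrete, here is a minimal sketch (not from the original comment) of a classic message-passing bug: with relaxed atomics, x86's comparatively strong store ordering usually masks the missing release/acquire pairing, while weakly ordered ARM cores can expose it readily. Build with something like cc -O2 -pthread mp.c.

/* Deliberately buggy message passing: the consumer may observe flag == 1
 * yet read payload == 0 on weakly ordered CPUs (e.g. ARM), while x86's
 * total store order makes the failure much harder to reproduce. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;        /* plain data the producer publishes */
static atomic_int flag;    /* guard that is supposed to order it */

static void *producer(void *arg) {
    (void)arg;
    payload = 42;
    /* BUG: relaxed ordering lets the store to flag become visible before
     * the store to payload. Fix: memory_order_release here... */
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    /* ...and memory_order_acquire here. */
    while (atomic_load_explicit(&flag, memory_order_relaxed) == 0)
        ;  /* spin until the flag is raised */
    printf("payload = %d\n", payload);  /* may print 0 on ARM */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}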
apps (Score:2)
Re: (Score:2)
It seems to me that mobile apps wouldn't be a thing either if this logic was true.
For mobile devices, battery life is critical. So x86 isn't an option.
Re: (Score:2)
I've developed for iOS/Android and for Windows 10. Windows 10 was SOOOooooo much easier to develop for because of exactly what Linus talks about. I could quickly figure out why my weird touch gesture wasn't working by hitting "run", not by compiling, transferring, launching, remotely debugging, etc.
Crusoe to the rescue? (Score:3)
Why not just build a system around a Crusoe processor at home and let it emulate whatever you eventually want to run the software being developed on?
Seems pretty cut and dried to me.
Probably true for now, but.... (Score:5, Insightful)
At some point in the near future, MacBooks will start coming with custom Apple-designed ARM processors instead of Intel chips.
At that point, the trendy urban hipsters buying these MacBooks will be developing on ARM and will want to deploy their code on ARM-based servers. Your local IT department might say no, but I'm sure the cloud hosting providers will gladly oblige.
Re: (Score:2)
Will they use Apple servers, too, and will Apple's ARM chips (which they are designing themselves) be compatible with ARM's official cores? Will the servers run OSX?
I remember all the problems with Motorola PPC chips not being binary-compatible with IBM PPC. There's more to think about than just a base ISA, and ARM has more than one.
make smartphones the testing platform (Score:2)
Pinebook Pro (Score:5, Interesting)
I'm currently hoping the Pinebook Pro does very well when released later this year. I'm already planning on purchasing one for FreeBSD ARM development. The specs still are not the best, but are decent enough for some interesting development tasks. A portable ARM laptop with a hex-core processor, 4GB RAM, 64/128GB eMMC, Mini-PCIe with NVMe support, a 1080p IPS display, a 10,000 mAh battery, and USB-C that supports charging + 4K/60Hz video. This thing will be a little mini beast for $200. Most of programming is reading/writing code more so than executing it, so I believe this should be plenty powerful for solid web development and system service programming. This laptop NEEDS to do well to show the industry as a whole that these are the type of devices we WANT.
Re: (Score:3)
I'm currently hoping the Pinebook Pro does very well when released later this year.
Will you be able to use all the hardware without goofy kernels? Because not being able to do that with PineA64+ hurt that platform at launch. Goddamn Allwinner.
In the long term, performance/power will decide (Score:2)
Depends on the Workload... (Score:2)
A lot of hosted applications, especially those where the heavy lifting has been moved client-side (Angular, React, etc.), could be described as: accept parameters, query a database based on those parameters, organize the data into an acceptable payload, and return that payload. It's hard to see why these would be dependent upon x86. Same for ETLs. If the power consumption/cost argument for ARM servers is really as compelling as advertised, there might be something there.
ARM may not be a fit for e
Hes wrong - because of containers (Score:2)
Linus seems to be forgetting about the massive shift in software development that has occurred, to consuming software as container-based microservices, and providing it as the same.
No one cares what architecture Redis is running on, as long as the service provides the same API contract and can be consumed by existing code. x86, ARM, POWER - no one cares; run it where it performs the best at the lowest cost, thanks.
The same is true of all of the other microservices that you consume, and all of the microservic
Re: (Score:2)
The leaks nowadays are near zero. Nearly all microservice-delivered applications are written in Node.js, Go, or Python. None of these languages care what architecture you're running on, and as a developer no one is writing architecture-specific code.
Even if you're doing high-performance machine learning, the libraries you're using are likely Python-based, and they hide the iron and GPU types from you to a very large degree.
A smart guy, but not a business guy (Score:2)
How does Linus think all those mobile apps get developed, since smartphones are 99.9% ARM? Well, you develop on an x86 desktop, then deploy to ARM cell phones. Acting like developing on x86 desktops and deploying to ARM servers is some impossibility might be true at his level, the kernel level, but at the business-process level you very rarely care. I'd say 99% of all bugs are due to some bad code or flawed design in something you wrote or a library you used. On the rare occasion that the system libraries of
How to fix this (Score:2)
2. Software. The CPU has to support something great to make people change everything and learn to code a new system.
3. Cost. Power savings while doing more math and networking and
4. Ability of staff. People have to learn to code something new. That's a lot of code to bring over.
5. What is the advantage when power
Single thread performance is still important (Score:2)
As a hardware and software developer for over 25 years, I have considered ARM many times and always run into the same problem. As much as we like to talk about multi-threading, there are still many applications where single-thread performance is the most important. ARM performance is just barely good enough for mobile devices and very limited Android TV boxes. The performance of ARM is catching up, though. Maybe in 2-3 generations the performance will be good enough for people to tolerate ARM laptops.
Re: (Score:3)
Not just single-thread performance: performance per watt in the 80+W TDP range actually favors AMD and Intel. The ARM vendors did a fantastic job on typical power consumption with frequent sleep and on providing serviceable performance in low power envelopes, but they have not yet demonstrated good performance in high-power-usage environments.
Part of it is the relative lack of experience, a great deal of it is that Intel has invested in all sorts of third party and first party compiler and librar
Sun used to know about this problem (Score:2)
Long before they were gobbled up by Oracle, Sun used to offer universities cheap Sun workstations. They had a trade-in program where you could get half off on your upgraded computer by turning in any old Sun server. They never asked for the computer back, but the same department couldn't use the machine for two upgrades, which encouraged it to be recycled into a different department who could then upgrade it to something else. University discount was typically 50% and sometimes 65%, and the trade-in dropped
Linus lacks insight here, grossly uninteresting (Score:2)
Google, Amazon and Oracle all control their own server architectures, each use a different processor architecture and none of them care what Torvalds thinks. These companies have a great deal to say about what the cloud is and they don't agree on processor.
The processor that runs code deployed in a high-level language really does not matter; does Linus really not understand that?
The reasons x86 grew to dominate have little to do with current requirements and aren't interesting in predicting the future.
Linus has good point. He's a smart guy. Duh. (Score:2)
EOM
Complicated.. (Score:2)
On the one hand, he has a point: developer-friendly form factors are x86, and that's unlikely to change, due to the propensity of developers to have some x86 app they want - better to hedge your bets.
However, the presumption that cloud providers would prefer x86 because it can carry a price premium fails to acknowledge that the providers can potentially get wider margins out of an ARM ecosystem. In x86, you have two vendors and thus they only get so desperate to compete against one another. In ARM server
It's amazing... (Score:2)
The point is that Intel architecture downscaling is stopping. The only reason the Intel-compatible architecture has the lead is that they can run their less efficient computing architecture on smaller silicon. And that's slowing down due to physical limitations. It
Re: (Score:2)
Or at least it used to
Re: (Score:2)
Then why are we not seeing Intel rule the mobile world? I disagree.
Several reasons.
1.) Battery draw. AMD64/x86 is a big power hog and eats battery like there's no tomorrow.
2.) As a corollary to 1.), thermal output is insane; no one wants to hold a flat iron to their face to make a call. Thermodynamics is a nasty bitch; all of that battery draw has to go somewhere, mostly as heat.
3.) Inertia. ARM had been in use for embedded devices, including dumbphones and feature phones, long before smartphones were a thing. Then the iPhone (no, not Cisco Systems' iPhone, the fruity one). Apple was the o
Re: (Score:2)
The power argument isn't really super true anymore-- Atom got pretty good, etc. But ARM already had a foothold, and... perhaps most important, it's not too onerous to be an ARM licensee. So there are a lot of SoCs now featuring ARM, with a superior degree of integration and nice things to design a phone with.
And then the same network effects hold true-- if 99% of users are Android-on-ARM, then if you use Android-on-x86 you're going to have weird experiences at times.
Re: (Score:3)
Once laptops (the preferred being Macs) and datacenters have ARM options, which is provably happening, then your x86 argument dies horribly.
This, and a few more reasons:
(1) Server-side software is a more limited set than desktop or mobile. For instance, no dependencies on graphics toolkits.
(2) More and more software is written in higher-level languages and running on virtual machines/interpreters, such as Javascript and Python. It sounds like a joke, but there are major web frameworks written in JS. This further narrows down the set of software you actually have to port.