Comment Re:Oh lord, that again? (Score 2) 242

Why intentionally sacrifice performance? Today's processors may be many thousands of times faster than the early ones, but it doesn't make sense to then slow them down again with overweight, inefficient code.

Performance and efficiency are still very important: slow code costs money in extra execution time, increased power consumption, and increased hardware requirements, especially at large scale. Code that runs 10% slower on a single user's desktop might not make much of a difference, but run that code across thousands of systems, or run thousands of instances of it, and you've got huge wastage.

Comment Re:No (Score 1) 357

Well in that respect Alpha (FX!32) and PowerPC (VirtualPC) had much better x86 emulation, which is tried and tested...

Alternative architectures will never shine so long as people are trying to run alien binaries on them... Emulation will always carry overhead, resulting in inferior performance and inferior battery life.

That said, the playing field is levelling... The Android runtime is theoretically cross platform, so many apps will run just fine irrespective of the underlying architecture, and a lot of software is now delivered via a browser or a connection to a remote server...
And of course anything that is open source can be fairly painlessly ported to a new architecture, and indeed all the mentioned architectures already have mature Linux and/or BSD ports and a full suite of software available.

Comment Re:lots of room for innovation (Score 1) 357

Alpha had the same issues to a much lesser extent: it was also around longer, so compilers had more time to mature on the platform, and it was so much faster than the other processors available at the time that inefficient code was less noticeable.

With a fresh start some baggage can be left behind, e.g. 64-bit x86 software generally makes use of SSE2, because the lowest common denominator became the first Opteron chip instead of a 386; but even a first-gen Opteron is pretty dated these days.

Gentoo users see a small but worthwhile increase in performance by compiling their code with the correct -march/-mcpu flags, irrespective of any other optimizations they might be using; there is also a Linux kernel patch that enables such flags for kernel compiles. And this is on x86, where Intel/AMD go to great lengths to make their processors execute older code quickly. The difference would likely be bigger on other architectures.

Comment Re:Always Connected (Score 1) 194

I can see why not including an Ethernet port in a laptop makes sense: most users (especially end users) these days will be using wireless, and even corporate users will generally use wireless unless they're sat at their desk, where there will usually be a docking station which contains its own Ethernet port.
Same for the removal of optical media: the last laptops I had with optical drives never used them, and I ended up removing them to install additional HDDs in the space.

Comment Re:lots of room for innovation (Score 1) 357

Intel tried that with IA64, an architecture that depends on compiler optimizations to get good performance...
IA64 could be extremely quick with properly targeted code, but compilers weren't up to the job and the chips were too expensive.

Another problem you have with x86 is that a lot of code is compiled to target the lowest common denominator, not the current model of CPU, so even the latest processors have to be designed to run code quickly that's been compiled for a 386. If you're precompiling code for wide distribution you have to set the minimum supported CPU somewhere, and if you set it too recent you'll improve performance but exclude lots of potential users.

Comment Re:x86 Forever, Even Intel Couldn't Kill It (Score 1) 357

Alternative architectures fail because of closed source code...
All of those architectures failed on NT primarily because there was little or no software available for them, whereas there are millions of (usually embedded) PPC and MIPS systems running Linux even today.

Software vendors won't port to an architecture that has no users, and users won't buy an architecture that has no software.

Comment Re:RISC-V (Score 1) 357

There have been many architectures which don't do speculative execution, IA64 for example, and high performance is certainly possible... But for that to work you need well-written code (or a well-written compiler) to take advantage of it, and the code needs to target the specific processor revision rather than being generically compiled.

Processors in games consoles (e.g. Cell in the PS3) were built this way, because there was never any need to run the code on a different model of processor.

Comment Re: No (Score 1) 357

The world is becoming more paranoid, governments are starting to worry about foreign influence...
The US has already banned Russian antivirus software, do you think the Russians are especially happy about using processors designed in the US and manufactured in China, running software also written in the US?
Larger countries like Russia, China and the US can afford to do things in-house, but smaller ones can't, and an open collaborative model is the next best option. Even for larger countries, it's much cheaper.
So you'd have not just large cloud providers, but also governments potentially contributing towards open hardware.
