Comment Re:Wow, end of an era. (Score 1) 128

When people talk about an n-bit CPU, they're conflating a lot of things:
  • Register size (address and data register size on archs that have separate ones).
  • Largest ALU op size
  • Virtual address size
  • Physical address size
  • Bus data lane size
  • Bus address lane size

It's very rare to find a processor where all of these are the same. Intel tried marketing the Pentium as a 64-bit chip for a while because it had 64-bit ALU ops. Most '64-bit' processors actually have something like a 48-bit virtual and 40-bit physical address space, but 64-bit registers and ALU ops (and some have 128-bit and 256-bit vector registers and ALU ops). The Pentium Pro with PAE had a 36-bit physical but 32-bit virtual address space, so you only got 4GB of address space per process, but multiple processes could use more than 4GB between them. That's the opposite of what you want for an OS, where you'd like to be able to map all of physical memory into the kernel's virtual address space, and it's one of the reasons that PAE kernels came with a performance hit.
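On x86-64 you can see this split on the machine in front of you: CPUID leaf 0x80000008 reports the physical and virtual (linear) address widths the chip actually implements, independently of the 64-bit register width. A minimal sketch for GCC/Clang on x86-64 (the leaf layout is as documented in the Intel/AMD manuals; treat it as illustrative, not production code):

    /* Print implemented address widths vs. pointer width (x86-64, GCC/Clang).
     * CPUID leaf 0x80000008 returns the physical address bits in EAX[7:0]
     * and the linear (virtual) address bits in EAX[15:8]; a typical
     * "64-bit" CPU reports something like 48 virtual / 39-46 physical. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
            return 1;
        }
        printf("physical address bits: %u\n", eax & 0xff);
        printf("virtual address bits:  %u\n", (eax >> 8) & 0xff);
        printf("pointer width:         %zu bits\n", 8 * sizeof(void *));
        return 0;
    }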

Comment Re:How soon until x86 is dropped? (Score 1) 128

Videogame programmer here. It wasn't really a compiler optimization issue. There's no compiler on the planet that can perform high-level optimizations like that.

Compiler engineer here. The vectorisation for the Cell wasn't the hard part, it was the data management. Autovectorisation and even autoparallelisation are done by some compilers (the Sun compiler suite was doing both before the Cell was introduced), and can be aided by OpenMP or similar annotations. If the Cell SPUs had been cache-coherent and had direct access to DRAM, then there's a good chance that a bit of investment in the compiler would have given a big speedup. The problem of deciding when to DMA data to and from the SPUs and where you need to add explicit synchronisation into the PPU was much, much harder. I've worked on a related problem in the context of automatic offload to GPUs and it turns out to be non-computable in most nontrivial cases (it depends heavily on accurate alias analysis).
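To make the contrast concrete, the part compilers already handle well looks roughly like the sketch below: a dense loop that an autovectoriser can turn into SIMD code and, with an OpenMP annotation, split across threads, because everything lives in ordinary cache-coherent memory. (The pragma is standard OpenMP 4.0+; the function itself is just a made-up example.)

    /* Sketch: the kind of loop autovectorisers and OpenMP handle on a
     * conventional cache-coherent machine, with no DMA and no explicit
     * synchronisation decisions. Build with e.g. -O3 -fopenmp. */
    #include <stddef.h>

    void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
    {
        #pragma omp parallel for simd
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

On the Cell, the hard problem started exactly where this sketch stops: deciding which slices of x and y to DMA into each SPU's local store, when to start the transfers, and where the PPU code needs explicit synchronisation, which is where the alias analysis falls over.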

Comment Re:How soon until x86 is dropped? (Score 1) 128

MIPS and PowerPC are still huge in embedded. MIPS is used on a huge number of cheap routers and a lot of these are in dire need of a better OS than they ship with (and many of them ship with a hacked-up Linux). PowerPC is mostly big in automotive, but IBM still sells machines and is willing to keep funding a lot of the software support. The same goes for S/390: a big part of IBM's sales pitch there is that you can spin up Linux VMs on it easily and run the OS that you're used to. SPARC these days basically means Oracle appliances. You don't buy a SPARC machine if you want to run Linux, you buy one if you want to do the vertical integration thing with Oracle (i.e. Oracle arranges you vertically with your head downwards and shakes until all of the money is integrated with their wallet).

Comment Re:ran debian on sparc for over 10 years (Score 1) 128

Someone needs to develop the software. The difference between open source and proprietary software is that open source software is developed by and for people who want to use it, while proprietary software is developed by people who want to sell it. Successful projects are the ones where the people who want to use it want it enough to fund its development.

Comment Re:Sad Day (Score 1) 128

Debian hamm sucked quite a bit less than SunOS

We had a couple of those. You should have tried NetBSD. For a very long time, Linux had particularly bad handling of the SPARC TLB and NetBSD was faster to the extent that it was noticeable by the user in the GUI.

apart from the terrible quality of the CG3 driver in Xfree, which would lock the entire machine up solid after about 30 minutes of use

When was this? Even after they stopped being useful as stand-alone machines, we used them as dumb X servers and easily got a few weeks of XFree86 uptime.

Comment Re:Not the best summary... (Score 1) 106

Okay, let's remove the coercion then and have a proper libertarian solution. No one has to get vaccinated, but anyone who is not vaccinated is liable for any harm done by anyone that they infected with a disease, including joint liability for all outbreaks of that disease where the vaccination rate dropped below the level needed for herd immunity.

By all means, go ahead and defend your right to kill people as a result of your negligence.

Comment Re:What's special here?? (Score 1) 44

No idea. Open source FPGA toolchains are definitely interesting, mostly because Altera and Xilinx compete on who can produce the worst software. Having a single toolchain that could target both (which this project is still a long way away from) would be very useful. Unfortunately, high-end FPGAs vary a lot both in the core logic block structure and the number and layout of the fixed-function macroblocks that are available.

Comment Re:Most people won't care (Score 1) 44

The size and complexity of modern CPUs mean that you don't stand a chance of getting a no-backdoor assurance for anything useful.

Down at that scale, with small microprocessors and ICs, it's possible but incredibly difficult. If you were serious about no backdoors (e.g. military), you wouldn't be using an off-the-shelf American product. You'd be describing the chip you want built and validating the end product against your design. Because that's the ONLY way to be sure.

There is absolutely no guarantee that your phone, the box in your phone cabinet, your network switch, your router, your PC, even your TV does not have these sorts of backdoors (especially if you consider dormant-until-activated backdoors on devices). We've reached the complexity where it would take far too long to validate any one design.

As such, if you want that sort of assurance you have little choice: antiquated, tiny, underpowered chips at best. You might be able to validate a Z80, but you wouldn't get anywhere close to, say, a decent ARM chip at a few hundred MHz - even with the designs available for licensing (the NDAs associated with licensing such things would probably stop you from talking about any backdoor legally anyway...).

If you want a modern PC, you literally have no choice. The chipset on your motherboard is so complex as to be unauditable by an end user, or even a skilled professional. We can just about decap and understand some '80s arcade game chips, and then only if they are simple and of certain types. Some of the protection / security chips from that era are still complete unknowns.

You can care all you like. What you can't do is even knock up a Raspberry Pi competitor without spending inordinate amounts of money and using proprietary components that you can't inspect somewhere along the way.

Comment Re:The important details: Slower and over 540$ (Score 1) 75

The peak power consumption is important for one other reason: heat. The machine that I was talking about is in a small NAS case (4 drive bays, slimline optical drive, power distribution board, mini-ITX motherboard, no other spare space). It also only has a (fanless) 120W PSU, so it's quite easy to go over the available power if the CPU can spike up to a high peak. I'll keep the newer Intel chips in mind when I upgrade, but it looks as if most of the mini-ITX motherboards are still limited to 16GB of RAM, and being able to upgrade to 32GB would be the main thing that would prompt me to replace the motherboard. Oh, and Haswell still doesn't have working FreeBSD drivers, so that wouldn't be an option yet.

Comment Re:Wait, what? (Score 1) 52

You're missing the point (though the practical implications are the same).

The check on whether the code was valid was only run if the user typed a code into the box. Typing in random letters wouldn't validate. Typing in a valid code would.

The oversight was that the checks existed but never actually ran in the null case, not that the system was incapable of validating codes.

As such, rather than just "Let's make up random codes and then ignore them and validate anything", the thought process was "Let's generate codes, validate them properly, but oh shit, we forgot to validate this path".

Although the results are the same, the implication that they never intended to check the codes at all is wrong. As such, it appears to be a coding oversight that allows an authentication bypass, rather than deliberate laziness masquerading as security.
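As a purely hypothetical sketch (not the vendor's actual code; the function names and placeholder code string are invented), the bug pattern being described looks roughly like this:

    /* Hypothetical illustration of the oversight: the validator exists and
     * works, but the "no code supplied" path never reaches it and falls
     * through to success. */
    #include <stdbool.h>
    #include <string.h>

    /* Stand-in for the real check; the string here is just a placeholder. */
    static bool code_is_valid(const char *code)
    {
        return strcmp(code, "EXAMPLE-CODE") == 0;
    }

    bool check_access(const char *code)
    {
        if (code != NULL && code[0] != '\0')
            return code_is_valid(code);   /* runs only if the user typed something */

        return true;   /* oversight: a null/empty code is never validated */
    }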

Comment Re:Wait, you have to TYPE the password??? (Score 1) 327

When the services go down, you can't log in to the relying sites. Luckily, core infrastructure like the account systems is a very high priority for the engineers, and the big providers have plenty of resources to keep them up -- and they do. My bank's site is down far, far more often than Google's auth servers, for example. How much more often? I don't know... I've never seen Google's auth servers down.

Comment Re:OpenID Connect scales at O(n^2) (Score 1) 327

Pick the top several and you'll cover nearly everyone. For the tiny percentage of users that remains, you have to either offer password auth (which means all of the work and risks of maintaining a password system, but at least when you screw it up only a tiny percentage of your users will be affected) or push them to get an account with one of the providers you support.

Comment Re:I have no fear of AI, but fear AI weapons (Score 1) 267

But, aren't there enough 'morally flexible' drone operators available that it doesn't really matter?

There are, but drones are only a small part of the armed forces. They're easily reached by radio (jamming them would take powerful jammers that would themselves be easy targets), and they're usually support for people on the ground - not necessarily your people, but affiliated forces. If you want to do a door-to-door search, it would be extremely hard to do by remote control, no matter how many operators you have. The goal of autonomous robots is genuine remote warfare, where you can run an occupation without having boots on the ground. AI sci-fi stories aside, we still expect somebody to give the robots commands and accept responsibility for their behaviour, even though the robot works out the details of who to shoot by itself.
