Comment Re:What? 64-bit? (Score 1) 56

For the default Linux kernel settings, with anything approaching or exceeding 1GB of RAM you can actually get a benefit from more address space. The kernel only maps 1GB by default because of the restrictions of a 32-bit address space - and some of that 1GB is taken up by devices, rather than actual memory. The result is that the kernel has to create temporary mappings to access process memory. With a 64-bit system the kernel can keep it all mapped, all the time.
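
To make that concrete, here's a minimal sketch of the temporary-mapping dance in 32-bit x86 kernel code. kmap_atomic / kunmap_atomic are the real interfaces; the helper function around them is just made up for illustration:

    #include <linux/highmem.h> /* kmap_atomic(), kunmap_atomic() */
    #include <linux/string.h>  /* memset() */

    /* Hypothetical helper: zero a page that may live in "highmem",
     * i.e. above the ~896MB a 32-bit x86 kernel keeps permanently mapped. */
    static void zero_any_page(struct page *page)
    {
            void *vaddr = kmap_atomic(page); /* create a short-lived mapping */
            memset(vaddr, 0, PAGE_SIZE);     /* the page is now addressable */
            kunmap_atomic(vaddr);            /* tear the mapping back down */
    }

On a 64-bit kernel the same operation boils down to simple address arithmetic, because every page of RAM stays permanently mapped.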

My comment applies to x86 specifically - other architectures will not necessarily have the same cost / benefit tradeoff. Also, there have been kernel options that allow it to map 2GB (with a reduced 2GB address space per process) or 4GB (at a performance cost). They're not often used, but in a more appliance-like device such as this (i.e. nobody is going to plop in a load more memory later and change the cost/benefit analysis) they may be a viable option.
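
For reference, the split is a build-time choice on 32-bit x86 - something like this in the kernel config (symbol names from memory; the full 4G/4G split was a separate out-of-tree patch rather than a mainline option, I believe):

    # "Memory split" under Processor type and features:
    CONFIG_VMSPLIT_3G=y    # default: 3GB user / ~1GB kernel
    # CONFIG_VMSPLIT_2G=y  # 2GB user / 2GB kernel
    # CONFIG_VMSPLIT_1G=y  # 1GB user / 3GB kernel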

Comment Re:If I remember correctly... (Score 1) 91

The first Itaniums had x86 compat in hardware and were, I believe, disappointingly slow at executing x86 code. Obviously that's something Intel could have improved if they'd applied themselves to the problem (and maybe they would have made it faster if they hadn't been expecting / hoping / planning to replace x86 anyway).

But given the different philosophies of the architectures, I think it's somewhat plausible that doing an x86 -> Itanium conversion in hardware is just a bit awkward, and that software might genuinely give the flexibility to do a better job. Around the same time, Transmeta were selling chips that exclusively exposed a software-emulated x86 layer, for use in laptops. I remember wishing Intel would buy their tech and apply it to Itanium / x86 compatibility.

Comment Re:I don't really see the point. (Score 2) 130

Apple seem to be pushing their mobile CPUs forward quite fast - they're also way ahead of the curve in adopting 64-bit ARM. I wonder if there's a longer term strategy to start migrating devices like the MacBook Air over to their A-series CPUs, instead of Intel. That could tie things together quite nicely for them.

Comment Re:That... looks... horrible. (Score 3, Interesting) 82

Maltron keyboards are kind of crazy - they're still made using very low volume manufacturing techniques. The keyboard shells, AFAIK, are vacuum formed, and (unless things have changed recently) I think they do manual point-to-point wiring on the switches. But if you look at the sculpted shape of a Maltron, it doesn't lend itself to conventional PCBs.

I'm typing on one now - I think it's quite an old one, but it looks as though the design changes since have mostly been smallish refinements and updates to the controller / electronics. I got mine from an office clearer on eBay; otherwise they're very expensive and I probably wouldn't have got one.

I've also got a Kinesis, an ergo board which came later (and with a strikingly similar design). It feels a bit more like a slick, mass-manufactured product but I've known people insist that the Maltron is ergonomically better overall. I'm not so fussy, I'm just glad I got two cool keyboards for prices I felt I could afford!

Comment Re:Run a completely new OS? (Score 1) 257

There was work done on single address space operating systems that retain multiple protection domains - the Nemesis research OS did this. It sounds mad at first, but every process can still have separate pagetables; they just all agree on the virtual addresses of shared libraries, shared memory areas, etc. This means you can still make the OS secure (though admittedly it would not be compatible with modern address space randomisation strategies).

Honestly, I can't quite remember what the main benefits actually were!

L1 caches are typically indexed using virtual addresses, so I suppose it may improve the extent to which shared library code remains cached across process switches. I can't see that it would avoid TLB flushes as such, because you'd still want to clear out mappings that the process you're switching to shouldn't have access to... It does mean that data structures in shared memory can contain pointers that actually work, but that doesn't sound *that* important.
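
Here's a toy user-space illustration of that last point, done with plain Linux mmap rather than a real SASOS - the fixed address and all the names are invented for the sketch, and error handling is omitted:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Every cooperating process agrees to map the region here; a real
     * single-address-space OS would hand out these ranges itself. */
    #define AGREED_ADDR ((void *)0x600000000000UL)

    int main(void)
    {
            int fd = shm_open("/sas_demo", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, 4096);

            /* MAP_FIXED forces the agreed virtual address. */
            char **slot = mmap(AGREED_ADDR, 4096, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_FIXED, fd, 0);

            char *msg = (char *)slot + sizeof(char *);
            strcpy(msg, "hello");
            *slot = msg; /* a raw pointer that any process mapping the same
                            object at AGREED_ADDR can simply follow */

            printf("stored pointer %p -> %s\n", (void *)*slot, *slot);
            return 0;
    }

(Compile with something like cc demo.c -lrt on older glibc.)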

I'm sure there was some other, more compelling reason but on commodity hardware I can't remember what it would be. Hurm.

Comment C-x C-s (Score 1) 521

I'm used to just randomly hitting Ctrl+X then Ctrl+S in emacs when I pause and my fingers have nothing better to do. Semi-frequently, I do this in other applications without even realising I just did it, with various mildly weird results...

Comment Re:Mostly pointless (Score 5, Informative) 51

I do remember a talk where Eben Upton said that the routing was relatively complex under the main chip. Pinning it out onto an edge connector presumably gives you the luxury of building a much simpler board to plug it into - design-wise and possibly cost-wise since you might get away with fewer layers.

Seems like small-to-mid volume manufacturers might find it handy, even though high volume manufacturers would presumably just plonk the chips directly on.

Not that I'm an electronic engineer, so obviously take this with a pinch of salt.

Comment Re:Good idea (Score 3, Insightful) 175

As AmiMoJo also noted, when you have a kernel panic all bets are off regarding which parts of the kernel are OK. If the behaviour of the disk driver or filesystem has been affected, trying to write a kernel dump into a normal disk partition could damage your filesystem. It might work, but it does seem a good idea to be properly paranoid. I didn't know that Windows uses a special reserved area of the boot drive - that does make sense as a solution!

There have been various systems for crash dumping under Linux, though. I think the de facto solution (the one accepted by the kernel devs) ended up being kdump, which is based on kexec (kexec being "boot directly into a new kernel from an old kernel, without going through a reboot"). This allows full crash dumps with (hopefully) decent safety, if it's configured.

In kdump, you have a "spare" kernel loaded in some reserved memory and waiting to execute. When the primary kernel panics it will (if possible) begin executing the dump kernel, which is (hopefully) able to reinitialise the hardware and filesystem drivers, then write out the rest of memory to disk. I'm not sure how protected kdump's kernel is from whatever trashed the "main" kernel but there are things that would help - for instance, if they map its memory read only (or even keep it unmapped) so that somebody's buffer overflow can't just scribble on it during the crash.
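
By way of illustration, the manual version of the setup looks roughly like this - distro kdump services automate it, and the paths and sizes are just examples:

    # 1. Reserve RAM for the dump kernel via the boot command line:
    #        crashkernel=256M
    # 2. After boot, stage the dump kernel into that reserved region:
    kexec -p /boot/vmlinuz-$(uname -r) \
          --initrd=/boot/initramfs-$(uname -r).img \
          --append="irqpoll maxcpus=1 reset_devices"
    # 3. On a panic, the dump kernel boots and userspace can save the old
    #    kernel's memory from /proc/vmcore, e.g. with cp or makedumpfile.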

Obviously, having a full kernel available to do the crash dump makes it easier to do other clever tricks in principle - such as writing the dump out to a server on the network. That's not new, in that there used to be a kernel patch that let a panicked kernel write a dump directly to the network; it just seems easier to do it the kdump way, with a whole fresh kernel. Having a fully-working kernel, rather than one which is trying to restrict its behaviour, means you can rely on more kernel services - and probably just write your dumper code as a userspace program! Having just installed system-config-kdump on Fedora 20, I see that there's an option to dump to NFS, or to an SSH-able server - the latter would never be sanely doable from within the kernel, but is pretty easy from userspace.

Various distros do support kdump. I think it's often not enabled by default and does require a (comparatively small) amount of reserved RAM. So that's some motivation for basic QR code tracebacks. I suppose another reason is if they expect they can mostly decipher what happened from a traceback, without the dump being necessary - plus, with a bug report you can easily C&P a traceback.

This discussion has just inspired me to install the tools, so maybe I'll find out what it's like...

Comment Re:I find it interesting (Score 1) 223

I'll apologise in advance for rambling but I don't often get to talk about Plan 9 and it's nice to have the opportunity!

X11 itself as files would, I imagine, become a bit icky because it's a complicated protocol. But as I understand it, the Plan 9 windowing system was effectively exposed as files (i.e. the display server exported a filesystem interface to applications) and that did actually permit some pretty cool stuff...

Windows basically appeared as a set of files that let you draw to the surface within the window. The interface exposed to windowed apps by the windowing system could also be consumed by other instances of the windowing system, so that nesting instances of the windowing system into windows Just Works.

The fact that everything was in the filesystem meant that network FS shares could be used, e.g., to provide rootless graphical remoting of applications - you just had to arrange for the right filesystem mounts to be available and the display would automagically work.
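
From (fading) memory, driving a rio window from the shell looked something like this - the control-file commands may not be exactly right, so treat it as a sketch:

    echo hide > /dev/wctl              # ask rio to hide the current window
    echo 'resize -dx 800' > /dev/wctl  # grow it
    cat /dev/text                      # read back the text it's displaying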

Having said all that, obviously these days you'd also want to worry about direct hardware access to GPUs, etc, which I'm sure would make the whole enterprise rather more complicated! Maybe that would put paid to the idea of "everything is a file" being practical for display stuff, or maybe somebody cleverer than me could point out a better way of doing it!

Further tangent: Plan 9 made device files really behave like files, which meant you could also remote device access trivially using remote filesystem mounts. This doesn't work with Unix device files, and it always seems a shame that various ad-hoc remoting protocols (USB over IP, Network Block Devices, etc) get used instead. But I suspect a similar argument to the GPU one could easily apply - either it's more efficient to have a specialist protocol, or some devices are just too complicated to meaningfully abstract like that. Who knows.
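
In Plan 9 the remoting really was just a mount - something like the following, though I'm reciting the import syntax from memory:

    # Graft another machine's audio device over our own:
    import otherbox /dev/audio /dev/audio
    # Anything written to /dev/audio now plays on otherbox's sound card.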

In some ways, I think I "miss" the olden days when if you'd got a Plan 9 system you probably could feel justified in believing you were in the future!

Comment Re:Why am I skeptical ? (Score 1) 110

My employer did buy a top-of-the-range Alienware desktop once, because it was the fastest available machine for single-threaded performance (at least, out of off-the-shelf options) due to its being factory overclocked. I imagine if we'd gone to a more boutique vendor we might have got something faster, but I suppose it was still good to have the support.

FWIW we weren't just playing games, we actually had long running single threaded simulations that we wanted to get out of the way as fast as possible! It's now my desktop PC after my previous one died - so that worked out OK in the end!

Comment Re:What's bzr? (Score 3, Informative) 252

I thought there was a fairly complex history here: the current bzr was (I thought) originally bzr-ng, an alternative to the original Bazaar tool, and *that* in turn (speaking loosely) came from GNU Arch, which I gathered wasn't well understood or enjoyable to use. I don't know how much of the current behaviour dates back that far, though, so there may not be too much in common now!

Comment For older games, consider Retrode (Score 1) 227

The Retrode is a brilliant little gadget: http://www.retrode.com/

It's basically an old-school console cartridge -> USB adaptor. It also supports old Megadrive / SNES gamepads and doesn't require host software (which is actually rather neat - it appears as a USB mass storage device with a cartridge image on it, while presenting the controllers as either gamepads or keyboards). With further adapters you can plug in Master System, Game Boy and N64 carts (plus two N64 controllers).

It's just a really nice piece of work. I use it to rip my cartridges, just like I rip CDs, then put them into whatever emulator I like. Avoids the legally dubious websites, etc. I can imagine there might be grey areas in some emulation stuff still (e.g. some emulators need a BIOS image, which someone has to have dumped from the console) but that's only for certain consoles - and at least you don't have to go on dodgy websites to download the games you already own.
