Comment: Re:That... looks... horrible. (Score 3, Interesting) 82

by Lemming Mark (#47526889) Attached to: A Warm-Feeling Wooden Keyboard (Video)

Maltron keyboards are kind of crazy - they're still made using very low-volume manufacturing techniques. The keyboard shells, AFAIK, are vacuum formed and (unless things have changed recently) I think they do manual point-to-point wiring on the switches. But if you look at the sculpted shape of a Maltron, it doesn't lend itself to conventional PCBs.

I'm typing on one now - I think it's quite an old one, but it looks as though the design changes since have mostly been smallish refinements and updates to the controller / electronics. I got mine from an office clearer on eBay; otherwise they're very expensive and I probably wouldn't have got it.

I've also got a Kinesis, an ergo board which came later (and with a strikingly similar design). It feels a bit more like a slick, mass-manufactured product, but I've known people to insist that the Maltron is ergonomically better overall. I'm not so fussy; I'm just glad I got two cool keyboards for prices I felt I could afford!

Comment: Re:Run a completely new OS? (Score 1) 257

by Lemming Mark (#47217903) Attached to: HP Unveils 'The Machine,' a New Computer Architecture

There was work done on single address space operating systems that retained multiple protection domains - the Nemesis research OS did this. It sounds mad at first, but every process can still have separate pagetables; they just all agree on the virtual addresses of shared libraries, shared memory areas, etc. This means you can still make the OS secure (though admittedly it would not be compatible with modern address space randomisation strategies).

Honestly, I can't quite remember what the main benefits actually were!

L1 caches are often indexed using virtual addresses, so I suppose it may improve the extent to which shared library code remains cached across process switches. I can't see that it would avoid TLB flushes as such, because you'd still want to clear out mappings that the process you're switching to shouldn't have access to... It does mean that data structures in shared memory can contain pointers that actually work, but that doesn't sound *that* important.
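You can fake that last property on an ordinary OS if the cooperating processes simply agree on a mapping address in advance - which is roughly what a single address space gives you globally, for everything. A minimal sketch (the fixed address, the /sas_demo name and the run-twice convention are just assumptions for illustration; compile with -lrt on older glibc):

    /* Hedged sketch: if two cooperating processes agree to map a shared
     * segment at the same virtual address, raw pointers stored inside it
     * are valid in both - which is what a single-address-space OS gives
     * you for free, everywhere.  The fixed address below is an arbitrary
     * assumption and may need changing on a given system. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define AGREED_ADDR ((void *)0x7f0000000000UL)   /* assumed unused in both processes */
    #define SEG_SIZE    4096

    struct node { struct node *next; char msg[32]; };

    int main(int argc, char **argv)
    {
        (void)argv;
        int creator = (argc > 1);   /* run once with an argument (creator), once without (reader) */
        int fd = shm_open("/sas_demo", O_RDWR | (creator ? O_CREAT : 0), 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (creator && ftruncate(fd, SEG_SIZE) < 0) { perror("ftruncate"); return 1; }

        /* MAP_FIXED forces the agreed address, so both processes see the segment identically */
        struct node *n = mmap(AGREED_ADDR, SEG_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_FIXED, fd, 0);
        if (n == MAP_FAILED) { perror("mmap"); return 1; }

        if (creator) {
            strcpy(n[0].msg, "head");
            strcpy(n[1].msg, "tail");
            n[0].next = &n[1];   /* a raw pointer into the shared segment */
        } else {
            printf("%s -> %s\n", n[0].msg, n[0].next->msg);   /* the pointer still works here */
            shm_unlink("/sas_demo");
        }
        return 0;
    }

Run it once with an argument to create and fill the segment, then again without to follow the stored pointer from the second process.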

I'm sure there was some other, more compelling reason but on commodity hardware I can't remember what it would be. Hurm.

Comment: Re:Mostly pointless (Score 5, Informative) 51

by Lemming Mark (#46686619) Attached to: Raspberry Pi Compute Module Release

I do remember a talk where Eben Upton said that the routing was relatively complex under the main chip. Pinning it out onto an edge connector presumably gives you the luxury of building a much simpler board to plug it into - design-wise and possibly cost-wise since you might get away with fewer layers.

Seems like small-to-mid volume manufacturers might find it handy, even though high volume manufacturers would presumably just plonk the chips directly on.

Not that I'm an electronic engineer, so obviously take this with a pinch of salt.

Comment: Re:Good idea (Score 3, Insightful) 175

by Lemming Mark (#46675123) Attached to: Linux Developers Consider On-Screen QR Codes For Kernel Panics

As AmiMoJo also noted, when you have a kernel panic all bets are off regarding which parts of the kernel are OK. If the behaviour of the disk driver or filesystem has been affected, trying to write a kernel dump into a normal disk partition could damage your filesystem. It might work, but it does seem a good idea to be properly paranoid. I didn't know that Windows uses a special reserved area of the boot drive - that does make sense as a solution!

There have been various systems for crash dumping under Linux, though. I think the de facto solution (the one that was accepted by the kernel devs) ended up being kdump, which is based on kexec (kexec is "boot directly into a new kernel from an old kernel, without a reboot"). This allows full crash dumps with (hopefully) decent safety, so it is possible to get them if you've configured it.

In kdump, you have a "spare" kernel loaded in some reserved memory and waiting to execute. When the primary kernel panics it will (if possible) begin executing the dump kernel, which is (hopefully) able to reinitialise the hardware and filesystem drivers, then write out the rest of memory to disk. I'm not sure how protected kdump's kernel is from whatever trashed the "main" kernel but there are things that would help - for instance, if they map its memory read only (or even keep it unmapped) so that somebody's buffer overflow can't just scribble on it during the crash.

Obviously, having a full kernel available to do the crashdump makes it easier to do other clever tricks, in principle - such as writing the dump out to a server on the network. That's not new, in that there used to be a kernel patch allowing a panicked kernel to write a dump directly to the network; it just seems easier to do it the kdump way, with a whole fresh kernel. Having a fully-working kernel, rather than one which is trying to restrict its behaviour, means you can rely on more kernel services - and probably just write your dumper code as a userspace program! Having just installed system-config-kdump on Fedora 20, I see that there's an option to dump to NFS, or to an SSH-able server - the latter would never be sanely doable from within the kernel but is pretty easy from userspace.
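The userspace angle is less abstract than it sounds: once the capture kernel is up, the crashed kernel's memory appears as /proc/vmcore, and a dumper is just a program that reads it. A minimal sketch (the output path is an assumption; real tools like makedumpfile also filter and compress):

    /* Hedged sketch of a kdump-style userspace dumper.  Inside the capture
     * kernel, the crashed kernel's memory image is exposed as /proc/vmcore;
     * this just streams it verbatim to a file.  The destination path is an
     * assumption - it could as easily be a socket to a remote server. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int in = open("/proc/vmcore", O_RDONLY);
        int out = open("/var/crash/vmcore.raw", O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        static char buf[1 << 20];   /* copy 1 MiB at a time */
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0) {
            if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }
        }
        if (n < 0) { perror("read"); return 1; }

        close(in);
        close(out);
        return 0;
    }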

Various distros do support kdump. I think it's often not enabled by default and it does require a (comparatively small) amount of reserved RAM. So that's some motivation for basic QR code tracebacks. I suppose another reason is that they expect they can mostly decipher what happened from a traceback, without the dump being necessary - plus, with a bug report you can easily C&P a traceback.

This discussion has just inspired me to install the tools, so maybe I'll find out what it's like...

Comment: Re:I find it interesting (Score 1) 223

by Lemming Mark (#46271149) Attached to: Plan 9 From Bell Labs Operating System Now Available Under GPLv2

I'll apologise in advance for rambling but I don't often get to talk about Plan 9 and it's nice to have the opportunity!

Exposing X11 itself as files would, I imagine, become a bit icky because it's a complicated protocol. But as I understand it, the Plan 9 windowing system was effectively exposed as files (i.e. the display server exported a filesystem interface to applications) and that did actually permit some pretty cool stuff...

Windows basically appeared as a set of files that let you draw to the surface within the window. The interface exposed to windowed apps by the windowing system could also be consumed by other instances of the windowing system, so that nesting instances of the windowing system into windows Just Works.

The fact that everything was in the filesystem meant that network FS shares could be used to, e.g., provide rootless graphical remoting of applications - you just had to arrange for the right filesystem mounts to be available and the display would automagically work.

Having said all that, obviously these days you'd also want to worry about direct hardware access to GPUs, etc, which I'm sure would make the whole enterprise rather more complicated! Maybe that would put paid to the idea of "everything is a file" being practical for display stuff, or maybe somebody cleverer than me could point out a better way of doing it!

Further tangent: Plan 9 made device files really behave like files, which meant you could also remote device access trivially using remote filesystem mounts. This doesn't work with Unix device files, and it always seems a shame that various ad-hoc remoting protocols (USB over IP, Network Block Devices, etc) get used instead. But I suspect a similar argument to the GPU one could easily apply - that it's either more efficient to have a specialist protocol or that some devices are just too complicated to meaningfully abstract like that. Who knows.
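The nearest everyday Unix analogue to "the display is just a file" is probably the framebuffer device: you can paint pixels by writing bytes to /dev/fb0. It's a pale shadow of Plan 9's per-window drawing interface, but it shows the flavour. A minimal sketch (assumes a 32bpp framebuffer and a bare console with nothing else driving the display):

    /* Hedged sketch: poke a red square onto the screen by writing pixel
     * data into the Linux framebuffer device.  Assumes 32 bits per pixel
     * and an XRGB-ish layout; this is the crudest possible stand-in for
     * Plan 9's file-based drawing interface, not a recreation of it. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_WRONLY);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo vi;
        struct fb_fix_screeninfo fi;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fi) < 0) { perror("ioctl"); return 1; }
        if (vi.bits_per_pixel != 32) { fprintf(stderr, "expected 32bpp\n"); return 1; }

        uint32_t row[100];
        for (int i = 0; i < 100; i++) row[i] = 0x00ff0000;   /* red, assuming XRGB ordering */

        /* paint a 100x100 square near the top-left corner, one scanline at a time */
        for (int y = 10; y < 110; y++) {
            off_t off = (off_t)y * fi.line_length + 10 * 4;
            if (pwrite(fd, row, sizeof row, off) < 0) { perror("pwrite"); return 1; }
        }
        close(fd);
        return 0;
    }

In Plan 9 the equivalent files could also sit in another machine's namespace via a remote mount, which is where the remoting trick comes in.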

In some ways, I think I "miss" the olden days when if you'd got a Plan 9 system you probably could feel justified in believing you were in the future!

Comment: Re:Why am I skeptical ? (Score 1) 110

by Lemming Mark (#45925999) Attached to: Dell Joins Steam Machine Initiative With Alienware System

My employer did buy a top-of-the-range Alienware desktop once because it was the fastest available machine for single-threaded performance (at least, out of off-the-shelf options), due to its being factory overclocked. I imagine if we'd gone for a more boutique vendor we might have got something faster, but I suppose it was still good to have the support.

FWIW we weren't just playing games; we actually had long-running single-threaded simulations that we wanted to get out of the way as fast as possible! It's now my desktop PC after my previous one died - so that worked out OK in the end!

Comment: Re:What's bzr? (Score 3, Informative) 252

by Lemming Mark (#45846323) Attached to: Emacs Needs To Move To GitHub, Says ESR

I thought there was a fairly complex history here, since the current bzr was (I thought) originally bzr-ng, an alternative to the original Bazaar tool. And I thought that *that* came from GNU Arch, which (speaking loosely) I gathered wasn't well understood or enjoyable to use. I don't know how much of the current behaviour dates back that far, though, so there may not be too much in common now!

Comment: For older games, consider Retrode (Score 1) 227

by Lemming Mark (#45643155) Attached to: The Quest To Build Xbox One and PS4 Emulators

The Retrode is a brilliant little gadget: http://www.retrode.com/

It's basically an old-school console cartridge -> USB adaptor. It also supports old Megadrive / SNES gamepads and doesn't require host software (which is actually rather neat - it appears as a USB mass storage device with a cartridge image on it, and presents the controllers as either gamepads or keyboards). With further adapters you can plug in Master System, Game Boy and N64 carts (plus two N64 controllers).

It's just a really nice piece of work. I use it to rip my cartridges, just like I rip CDs, then put them into whatever emulator I like. Avoids the legally dubious websites, etc. I can imagine there might be grey areas in some emulation stuff still (e.g. some emulators need a BIOS image, which someone has to have dumped from the console) but that's only for certain consoles - and at least you don't have to go on dodgy websites to download the games you already own.

Comment: Re:WWBD? (Score 2) 362

by Lemming Mark (#45262241) Attached to: Debian To Replace SysVinit, Switch To Systemd Or Upstart

I've heard tempting-sounding things about Debian kFreeBSD, actually - aside from anything else, FreeBSD has a port of ZFS. So if you want something with a familiar userland (GNU utilities, Debian package management, loads of packages available) it does look quite appealing. I'm not sure how common it is to use ZFS under FreeBSD so far, though.

Also, there are Solaris distros out there, which is potentially another way to get the same effect. Nexenta started as one, though I remember them shifting their focus more towards server stuff since then...

Comment: Re: 64-bit BS (Score 1) 512

by Lemming Mark (#44847975) Attached to: Why Apple Went 64-Bit With the iPhone 5s

I don't think I'm really adding much here, but the discussion of the 8051's quirks struck a chord with me! The 8051 is a bit weird in places, although in fairness with a C compiler you can just mash on through that and not worry too much. If you actually have to look at the architecture, you can definitely see its age, though. But for 8-bit stuff, the AVR architecture (Atmel's microcontrollers) is genuinely quite nice, despite being just 8-bit. AVRs are RISC CPUs, so they actually have a fair number of registers and comparatively few weird quirks (that I could see).

The other big advantage, in my particular line of experience, is that as long as a CPU has lots of registers, gcc often supports it. Otherwise you end up having to use slightly less mainstream compilers - which is basically OK; they're still nice software. But they're not as comfortable to me as the standard GNU toolchain. Of course, I'm sure plenty of commercial embedded programmers aren't familiar with the GNU toolchain and so don't care about that.

Comment: Re:Layering? (Score 3, Informative) 205

by Lemming Mark (#44791375) Attached to: Intel Rejects Supporting Ubuntu's XMir

I can speculate a bit with things that sound plausible to me given my knowledge of the system - but I might still be a bit off target... Still, maybe it helps a little.

Mir and Wayland both expect their clients to just render into a buffer, which clients might do with direct rendering, in which case the graphics hardware isn't really hidden from the client anyhow. AFAIK it's pretty normal practice that there's effectively in-application code (in the form of libraries that are linked to) that understands how to talk directly to the specific hardware (I think this already happens under Xorg). The protocol you use to talk to Wayland (and Mir, AFAIK) isn't really an abstraction over the hardware, just a way of providing buffers to be rendered (which might have just been filled by the hardware using direct rendering).
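For what it's worth, the mechanism underneath that "hand the compositor a buffer" model is usually just file descriptor passing over a Unix socket. Here's a hedged sketch of only that plumbing - a memfd full of pixels sent with SCM_RIGHTS, with a socketpair standing in for the compositor connection - not the actual Wayland or Mir protocol:

    /* Hedged sketch: the heart of the "client fills a buffer, compositor
     * displays it" model is passing a buffer handle over a Unix socket.
     * This shows just the fd-passing plumbing; it is not the Wayland/Mir
     * wire protocol itself. */
    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* Send the file descriptor of a pixel buffer to the "compositor". */
    static int send_buffer_fd(int sock, int buf_fd)
    {
        char dummy = 'B';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl, .msg_controllen = sizeof ctrl };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &buf_fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

    int main(void)
    {
        int pair[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, pair) < 0) return 1;

        /* "Render" a 640x480, 4-bytes-per-pixel frame into anonymous shared memory. */
        int buf_fd = memfd_create("frame", 0);
        if (buf_fd < 0 || ftruncate(buf_fd, 640 * 480 * 4) < 0) return 1;

        /* A real compositor would receive the fd, mmap it and display the contents. */
        return send_buffer_fd(pair[0], buf_fd);
    }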

In this case Xorg is a client of Mir, so it's a provider of buffers which it must render into. The X11 client application might use direct rendering to draw its window anyhow. But the X server might also want to access hardware operations directly to accelerate something it's drawing (I suppose)... So the X server needs some hardware-specific DDX, since Mir alone doesn't provide a mechanism to do all the things it wants.

As for why the Intel driver then needs to be modified... I also understand that Mir has all graphics buffers allocated by the graphics server (i.e. Mir) itself. Presumably Xorg would normally do this allocation itself (?), in which case the Intel DDX would need modifying to do the right thing under Mir. The only other reason for modifying the DDX that springs to mind is that perhaps the responsibilities of a "Mir client" divide between Xorg and *its* client, so this could be necessary to incorporate support for the "Mir protocol" properly. That's just hand-waving on my part, though...

Bonus feature - whilst trying to find out stuff, I found a scary diagram of the Linux graphics stack but my brain is not up to parsing it at this time of day:
http://en.wikipedia.org/wiki/File:Linux_Graphics_Stack_2013.svg

Comment: Re:Layering? (Score 3, Interesting) 205

by Lemming Mark (#44789007) Attached to: Intel Rejects Supporting Ubuntu's XMir

I'm honestly not super clear myself! But the DDX is, as I understand it, the in-Xorg portion of the graphics driver. So I guess it's not unreasonable that that component needs to know it hasn't got complete control of the hardware, as opposed to the Xorg-only case where it would have. Presumably it needs to proxy some operations through Mir (or Wayland, for XWayland) that it'd normally just perform directly.

A *bit* like running X under X using Xnest or Xephyr, though I'd imagine it's less extreme than that (since those, I'd guess, have to issue X-level drawing commands to their host X server, whereas to get graphics under Wayland/Mir they'd just render to a memory buffer like any Wayland/Mir client).

All slightly speculative since I'm not familiar with the in-depth technical details!
