
Comment Re:Actually, it does ! (Score 1) 375

We've actually paid more tax per head, and received less back per head, than England for every one of the last 110 years, which is as far back as the available data goes

A big citation needed there. The last time I looked at the data was in 1998, but back then English taxpayers were paying an average of around £100 each for the upkeep of Scotland, if you didn't include the North Sea gas revenues.

Comment Re:My opinion on the matter. (Score 4, Informative) 826

The problem is that X was designed for network transparency in a usage model that no longer exists. X is great for network transparency when the server is doing all of the drawing. Unfortunately, the server can't do simple things like antialiased line drawing, so people render on the client and then push (uncompressed) pixmaps to the server. A few issues with X11:

Some trivial things, like the fact that command IDs are 8 bits and over half of them are taken up by 'core protocol' things that no one uses anymore. This means that every extension (i.e. the stuff people actually do use) ends up providing a single 'do stuff' command and then a load of subcommands. This limits the number of extensions that you can have loaded and, because the assignment of extensions to command numbers is dynamic, makes intelligent proxies just that little bit harder to write.
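To make the dynamic-assignment point concrete: a proxy can't hard-code extension opcodes, it has to ask each server at runtime. A minimal sketch using XCB (real API; which extensions to query is arbitrary; compile with -lxcb):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>

/* Extension major opcodes are handed out dynamically by each server,
   so the only way to learn them is to query by name. */
static void show_opcode(xcb_connection_t *c, const char *name)
{
    xcb_query_extension_cookie_t ck =
        xcb_query_extension(c, strlen(name), name);
    xcb_query_extension_reply_t *r =
        xcb_query_extension_reply(c, ck, NULL);
    if (r && r->present)
        printf("%s: major opcode %u\n", name, (unsigned)r->major_opcode);
    free(r);
}

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(c))
        return 1;
    show_opcode(c, "RENDER");
    show_opcode(c, "MIT-SHM");
    xcb_disconnect(c);
    return 0;
}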

There's no easy way for an application to get all of its server-side state. This means that you can't, for example, have the X server crash (or even restart cleanly after an upgrade) and have all clients reconnect and recreate their windows. The Windows and BeOS display servers, for example, have this feature. You also can't tell an application to disconnect from one server and move its windows to another easily. This ought to be basic functionality for a client-server windowing system. There are proxies that try to do this, but they break in the presence of certain (commonly used) extensions.

There is no security model. Any app can get the entire input stream. Keyloggers for X are trivial to write as are programs that inject keystrokes into other applications. Neither requires any special privilege, nor do applications that subvert the display hierarchy (e.g. window managers).
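To show how little stands in the way: the stock XTest extension will happily synthesise input on behalf of any unprivileged client. A minimal sketch (real Xlib/XTest API; the key chosen is arbitrary; compile with -lX11 -lXtst):

#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>
#include <X11/keysym.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    KeyCode kc = XKeysymToKeycode(dpy, XK_a);
    XTestFakeKeyEvent(dpy, kc, True, 0);   /* press 'a'... */
    XTestFakeKeyEvent(dpy, kc, False, 0);  /* ...and release it */
    XFlush(dpy);  /* delivered to whichever window has focus, no checks */
    XCloseDisplay(dpy);
    return 0;
}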

The XRender extension is basically useless. It lets you do server-side compositing, which ought to make things fast. OS X gets a lot of speedup from doing this for text rendering: programs (well, system libraries that programs use) render glyphs in a font to server-side buffers and then the server composites them in the correct place. This doesn't work well with X, because most toolkits aren't set up to do text drawing on the server but everything else on the client (which is needed because the server doesn't provide a rich set of drawing primitives). Fixing this would mean adding something like the full set of PostScript or PDF drawing commands to the server.

XLib is an abomination. It starts with an asynchronous protocol designed for latency hiding and then wraps it up in a synchronous interface. It's basically impossible to use XLib to write an application that performs well over a high-latency (more than a few tens of ms) link. XCB is somewhat better, but it's fighting toolkits that were designed around the XLib model, so it ends up being used synchronously.
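The difference is easiest to see in XCB's cookie model: fire off a batch of requests, then collect the replies, paying one round-trip for the lot, which is exactly what XLib's synchronous wrappers throw away. A minimal sketch (real XCB API; the atom names are arbitrary; compile with -lxcb):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(c))
        return 1;

    const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
    xcb_intern_atom_cookie_t cookies[3];

    /* Issue every request up front; each returns a cookie immediately. */
    for (int i = 0; i < 3; i++)
        cookies[i] = xcb_intern_atom(c, 0, strlen(names[i]), names[i]);

    /* Block once for the whole batch. XLib's XInternAtom() would have
       stalled on a full network round-trip for each name in turn. */
    for (int i = 0; i < 3; i++) {
        xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(c, cookies[i], NULL);
        if (r) {
            printf("%s = atom %u\n", names[i], r->atom);
            free(r);
        }
    }
    xcb_disconnect(c);
    return 0;
}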

None of the network-transparent audio extensions caught on, so your remote apps can't even make notification beeps (worse - they can, but on the remote machine).

If you designed a modern protocol for a network-transparent windowing system, you'd end up with something a lot like a web browser. You'd want PostScript drawing contexts (canvas tags, in HTML5 parlance), server-side caching of images and sound samples (image and audio tags, in HTML5 parlance), and OpenGL contexts. The library would keep a list of all of the contexts that it held on behalf of the program and would be able to recreate them on demand and request that the program reinitialise them. You'd be able to run small snippets of interpreted code on the server (so that things like pressing buttons or opening menus didn't require a full network round-trip - something that DPS and NeWS got right in the '80s, but X11 got wrong). You'd ensure that input events only went to the current view or its immediate parent (if explicitly delegated), or to a program that the user had designated as privileged.

It's possible to do a lot better than X11. Unfortunately, most projects that try seem to focus on irrelevant issues and not the real ones.

Comment Re:My opinion on the matter. (Score 1) 826

There's nothing intrinsically good about the UNIX mindset. For example, UNIX originally put globbing in the shell as a workaround for not having shared libraries and claimed it was a feature. This led to all sorts of problems - for example, */*/* can overflow the command-line argument length limit, whereas a system that had put globbing in a shared library would have lazily expanded it in the called program. The problem with the systemd developers is not that they lack the UNIX mindset, it's that they produce utter crap and somehow are able to market it successfully.
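For comparison, the shared-library approach does exist today as POSIX glob(3): the pattern travels to the program as a single argv string and is expanded inside the process, so the expansion never has to squeeze through execve()'s ARG_MAX limit. (glob(3) is still eager rather than lazy, but the limit problem disappears.) A minimal sketch:

#include <glob.h>
#include <stdio.h>

int main(void)
{
    glob_t g;
    /* The shell passes the literal pattern; the program expands it. */
    if (glob("*/*/*", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; i++)
            printf("%s\n", g.gl_pathv[i]);
        globfree(&g);
    }
    return 0;
}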

Comment Re: My opinion on the matter. (Score 1) 826

That particular use is quite uncommon, but it's increasingly common to stick a recovery root partition in flash (or even in a kernel-embedded RAM disk on a recovery USB drive or similar) so that if you screw up some core configuration you can boot the core system and recover everything else. Keeping it small and self-contained has several advantages. If it's being loaded to RAM on recovery boot, you don't want it to be large and you do want to be able to write the recovery images quickly. If it's in flash (or even a separate FS on the main storage pool) then you don't want it to be too big.

It matters less for big users, who will fix a machine by simply reimaging it and have redundant everything, but it's very useful for a small company that only has a few servers. It's also useful if you're building an appliance and want to be able to have two root partitions that you switch between for atomic updates (boot one, update the other, reboot into the other, always have one bootable root).

Comment Re:What's the point? (Score 1) 511

Actually, C is used in these cases specifically because it is cross-platform, not for 'platform-specific optimisations'. The core of most of the popular apps on iOS and Android is the same, with a thin layer of platform-specific code, which is Java on Android or Objective-C on iOS. For games, the amount of Java code is typically tiny - create an OpenGL context and pass it to the native code, which is identical on both mobile platforms. This is a big part of the reason why there are so few apps for Windows Phone compared to the other platforms: by forcing WP apps to be entirely managed code, they make it hard to port apps.
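On the Android side, that thin layer often boils down to a couple of JNI entry points; everything below them is the same C/C++ the iOS build calls directly from its own GL callback. A sketch of the shape (the package, class, and function names here are hypothetical):

#include <jni.h>
#include <GLES2/gl2.h>

/* Called from a thin Java wrapper (e.g. a GLSurfaceView.Renderer's
   onDrawFrame()); the GL context is already current on this thread.
   The body is platform-independent engine code. */
JNIEXPORT void JNICALL
Java_com_example_game_GameLib_nativeRender(JNIEnv *env, jclass clazz)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... identical engine rendering code to the iOS build ... */
}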

Comment Re:What's the point? (Score 2) 511

Medical and military sound like mostly Windows shops, with maybe a bit of Linux thrown in. Qt apps on OS X tend to be garbage - you can spot them within a few seconds of launch, because they look vaguely like OS X but don't behave at all like it (e.g. modal dialog boxes, incorrect shortcut keys for text-field navigation, preferences that need a button pressed before they take effect, and so on).

Comment Re:Nope (Score 1) 511

The JVM is a clean bytecode virtual machine, which can be implemented in hardware and reasonably compiled to native machine code.

Only one of those is really true. You can implement a stack-based ISA in hardware, but there's a reason that most of the companies that tried it went out of business in the '80s: stack-based ISAs are really hard to get any ILP from and so once pipelining became common they started to be noticeably slower and were completely killed by superscalar register-based architectures.
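To make the dependence problem concrete, consider (a + b) * (c + d): on a register machine the two adds are independent and can issue together, while on a stack machine every operation is chained through the top of stack. The bytecode below is what javac emits for the equivalent static Java method; the register code is illustrative RISC-style assembly, not any particular ISA:

/* int f(int a, int b, int c, int d) { return (a + b) * (c + d); }
 *
 * JVM stack code (javac):      Register code (illustrative):
 *   iload_0   ; push a           add r1, a, b   ; these two adds are
 *   iload_1   ; push b           add r2, c, d   ; independent, so they
 *   iadd      ; a+b              mul r0, r1, r2 ; can dual-issue
 *   iload_2   ; push c
 *   iload_3   ; push d
 *   iadd      ; c+d
 *   imul      ; every op serialised through the top of the stack
 *   ireturn
 */
int f(int a, int b, int c, int d) { return (a + b) * (c + d); }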

Comment Re:Nope (Score 1) 511

Wow, that's a pretty old article. ARM9 was introduced in 1997 and has been pretty much dead for a good five or so years. The Jazelle extensions (no, they're not called 'Java extensions', they're called Jazelle DBX) added a decoder alongside the ARM decoder that would execute simple Java instructions natively and trap to the JVM for more complex ones. They were pretty nice for their target (i.e. machines with 2MB or so of RAM) but were surpassed quite a while ago by JIT compilers, for several reasons:
  • The JVM is stack-based, so it's hard to get any ILP out of a superscalar core and it's even hard to identify hazard-free pairs for a simple in-order pipelined core, so you don't end up packing the pipeline very well.
  • The javac output is not very well optimised, because it's intended to be consumed by something else that will optimise it, and doing the optimisation in the front end can hide opportunities later.
  • Run-time optimisation techniques (trace-based adaptive recompilation) improved a lot.

Once ARM devices wanting to run Java had 32-64MB of RAM, you could get better performance with an optimising JIT compiler than with Jazelle and it died. More recent chips have Jazelle RCT (also known as Thumb-2EE) which has some extra instructions for fast bounds checking and so on, but even that isn't used much.

Comment Re:I see 2 problems (Score 1) 83

"You're missing the point. You buy stuff like that occasionally and on specific occasions. "

Half the time is occaisonally. Got it.

Are you really that dense? You may be buying gifts for one of your friends half of the time, but you're not buying gifts for one specific friend half of the time. Recommending things that one friend likes when you're shopping for things for a different friend may coincidentally be useful, but probably isn't unless you have a very homogeneous set of friends.

Comment Re:I see 2 problems (Score 1) 83

You're missing the point. You buy stuff like that occasionally and on specific occasions. If I have, say, 10 friends for whom I buy birthday presents, and I buy 20 things for myself from Amazon each year, then there is absolutely no point in recommending things that one of my friends likes, because there's a very small chance that this will be the time when I'm buying something for that person: that's 30 purchases a year, so any given purchase has roughly a 1-in-30 chance of being for any one particular friend. The same applies to seasonal goods, but those patterns are easier to spot because they apply to everyone.

Comment Re:I see 2 problems (Score 1) 83

The problem is, if half of the stuff that you buy on Amazon is intended for gifts, then it's very difficult for the algorithm to tell the difference between a pattern where 50% of the inputs are false positives and a completely different pattern. It's quite easy to train a machine learning algorithm to discover that, given these 100 things that you've bought for either yourself or your friends, either you or one of your friends would like something from this other set of items. It's much harder for it to then determine that, at this instant, you're shopping for yourself or for a specific friend, and that it should narrow the search down to things that person will be interested in.

Comment Re:"Not eradicated" isn't needed (Score 1) 185

The point that the grandparent is trying to make is that you don't need to prevent cancer, you need to prevent cancerous cells from having a serious adverse effect on the organism. There are a number of benign growths that have cancer-like properties that people can live with and that don't spread over the body. Being able to differentiate the benign versions from the malignant and kill off the malignant cells would not require eradicating the cancer mechanism, but would (from the perspective of humans outside of the medical profession) count as curing cancer.

Comment Re:it's not the ads it's the surveillance. (Score 1) 611

I wonder if this will change, given all of the reports about web advertising being a bubble. Advertisers are starting to notice that, for most of them, the ROI is tiny and that's eventually going to trickle up the supply chain. If Microsoft were smart, they'd sell off their ad business while it's still at an overinflated price and then work to kill the market.
