
Submission Summary: 0 pending, 121 declined, 51 accepted (172 total, 29.65% accepted)

Games

Submission + - Realtime Worlds Goes Under (bbc.co.uk)

jd writes: "On June 29th, Realtime Worlds released APB in North America. Less than two months later, they have gone bankrupt with the loss of 200 jobs (an additional 60 being shed last week). Created by the author of Lemmings and Grand Theft Auto, the company was probably the largest games outfit in the UK (Scotland to be exact), so this is more than just a loss to gamers. According to the article, APB had a poor reception, so it is unclear if this failure was genuinely a result of poor market conditions (as claimed in the article) or a result of a failure to understand the gamers."
Hardware

Submission + - When mistakes improve performance (bbc.co.uk)

jd writes: "Professor Rakesh Kumar at the University of Illinois has produced research showing that allowing communication errors between microprocessor components and then making the software more robust will actually result in chips that are faster and yet require less power. His argument is that at the current scale errors in transmission occur anyway and that the efforts of chip manufacturers to hide these to create the illusion of perfect reliability simply introduces a lot of unnecessary expense, demands excessive power and deoptimises the design. He favors a new architecture, which he calls the "stochastic processor" which is designed to gracefully handle data corruption and error recovery. He believes he has shown such a design would work and that it will permit Moore's Law to continue to operate into the foreseeable future. However, this is not the first time someone has tried to fundamentally revolutionize the CPU. The Transputer, the AMULET, the FM8501, the iWARP and the Crusoe were all supposed to be game-changers but died a cold, lonely death instead — and those were far closer to design philosophies programmers are currently familiar with. Modern software simply isn't written with the level of reliability the Stochastic Processor requires in mind (and many software packages are too big and too complex to port), and the volume of available software frequently makes or breaks new designs. Will this be "interesting but dead-end" research, or will the Professor pull off a CPU architectural revolution really not seen since the microprocessor was designed?"

Submission + - Car computers dangerously insecure (bbc.co.uk)

" rel="nofollow">jd writes: "According to the BBC, it is possible to remotely control the brakes, dashboard, car locks, engine and seatbelts. Researchers have actually done so, with nothing more than a packet sniffer to analyze the messages between the onboard computer systems and a means of injecting packets. There is no packet authentication or channel authentication of any kind, no sanity-checking and no obvious data validation. It appears to need a hardware hack, at present — such as a wireless device plugged into the diagnostics port — but it's not at all clear that this will be a limiting factor. There's no shortage of wireless devices that must make use of the fact you can inject packets to turn on/off the engine, lock/unlock the doors, track the car, etc. If it's a simple one-wire link, all you need is a transmitter tuned to that wire."
Operating Systems

Submission + - Experimental OSes for Evil Geniuses (arizona.edu)

jd writes: "There are now a vast range of experimental Open Source OS' of one kind or another. This is a very quick run-through of a few such OS' that might offer ideas or insights that could lead to improvements in Linux and/or the *BSDs. One of the problems with software development is that research is time-consuming, people-consuming, money-consuming and nerve-consuming. The reason you mostly see tried-and-true ideas being added in gradually, rather than really revolutionary stuff, is that revolutionary programmers are thin on the ground, and that revolutionary implementations are often too specific to be put into a general-purpose kernel if you want to keep the bloat down. The solution? Examine the revolutionary implementations of ideas by others, learn, then steal. Ooops, that should be "paraphrase".

First off, Scout is a networked/distributed OS for embedded devices. The first part is probably done better in Plan 9/Inferno, and the latter is probably done better in specialized embedded OSes, especially as Scout is abandonware, but the sheer difficulty of combining these ideas in a space that could fit into a digital camera back when Scout was being developed means it contains some rather ingenious ideas that are worth revisiting. Besides, for all the good ideas in Plan 9/Inferno, it's not an OS design anyone has exactly picked up and run with, and if Inferno hasn't been formally abandoned, it is as good as abandoned, so it offers no advantage in that regard.

Barrelfish is perhaps the most recent, and seems to let you build a highly scalable, high-performance heterogeneous cluster, though I'm suspicious of Microsoft's involvement. Oh, I can believe they want to fund research they can use to make their own OSes run faster, after all the complaints over Vista, but they're not exactly known for supporting non-Intel architectures. On the other hand, ETH Zurich is very respectable and I could see them coming up with some neat code. Anyway, the idea of a cluster that can work across multiple architectures (i.e., not an SSI-based cluster) is potentially very interesting.

But they're not the only ones doing interesting work. K42 (with HHGTTG references in the docs) is supposed to be a highly scalable, highly fault-tolerant OS from IBM, who quite definitely have an interest in doing precisely that kind of work. Given that IBM currently sells Linux on its HPC machines, it would be reasonable to suppose the K42 research is somehow related, perhaps with interesting ideas working their way into the Linux group. And if that isn't happening, it damn well should be, as directly as licenses and architecture permit.

The L4 microkernel family has been around for a long time now. Although microkernels have their issues, running modules in userspace has advantages for security, and communicating via messages would (in principle) allow those kernel modules and kernel threads to migrate between cluster nodes — a major headache that Linux-based clusters (such as OpenMOSIX) have a very hard time solving. A toy sketch of why messages make migration tractable follows.
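
As a five-minute toy (mine, and nothing to do with L4's actual IPC): a component that is addressed only by messages doesn't care which machine answers, and that is exactly what makes migration tractable.

    import socket, threading, time

    def echo_service(host="127.0.0.1", port=9099):
        # A trivial "module" reachable only via messages.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"reply:" + conn.recv(1024))
        srv.close()

    threading.Thread(target=echo_service, daemon=True).start()
    time.sleep(0.2)  # let the service come up

    # The client knows an address, not a machine. Point the address at
    # another node and the "module" has migrated; the client is unchanged.
    c = socket.create_connection(("127.0.0.1", 9099))
    c.sendall(b"ping")
    print(c.recv(1024))  # b'reply:ping'
    c.close()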

One Open Source microkernel that does exactly that is Amoeba, though it has become abandonware. It's a truly amazing piece of engineering for distributed computing that is slowly becoming an amazing piece of dead code through bitrot. However, if you want to set out to compete with Kerrighed or MOSIX, this might be a good place to look for inspiration.

Then there's C5. Not, fortunately, the one invented by Sir Clive Sinclair, but a rather intriguing "high-availability carrier-grade" microkernel. Jaluna is the Open Source OS toolkit that includes the C5 microkernel. Now, many are the boasts of "carrier-grade", but few are the systems that merit the description. The term is usually taken to mean the OS offers five-nines reliability (i.e., it will be up 99.999% of the time). One of the problems here, though, is that if the microkernel requires additional layers to be useful, five-nines reliability in the microkernel alone doesn't mean anything useful. You could build an OS that only supported /dev/null as a device, and do so with far, far better reliability than five nines, but so what? Nonetheless, there may be things in C5 worth looking at.

Calypso is a "metacomputing OS", which seems to be the latest buzzword-compliant neologism for a pile-of-PCs cluster. On the other hand, abstract and efficient parallel systems mean better utilization of SMP and multicore machines, and therefore better servers and clients for MMORPGs.

I think most Slashdotters will be familiar with FreeRTOS — an Open Source version of a very popular real-time OS. It is being used by some members of Portland State University's rocketry group, as it is absolutely tiny and will actually fit on embedded computers small enough to shove into an amateur rocket. There's a commercial version with "extra features". I don't like — or trust — companies that do this, as altering the number of pathways in code alters the quality of the code that is left. Unless it's independently verified and QA'd (doubtful, given the approach), it is not safe to assume that, because the full version is good, a cut-down version won't be destabilized. On the other hand, if you want a simple embedded computational device (for your killer robot army or whatever), FreeRTOS looks sufficiently general-purpose and sufficiently hard real-time.

There are, of course, plenty of other OSes — some closed-source (such as ThreadX) and some open-source (such as MINIX 3) — which have some, indeed sometimes many, points of interest. However, there's not much point in listing every OS out there (Slashdot would run out of space, I'd get tired of typing, and I'd rapidly run out of put-downs).

Besides which, at the present time, the biggest problem people are trying to solve is multi-tasking on SMP and/or multi-core architectures and/or clusters, grids and clouds. Parallel systems are bloody difficult. The second problem is how to provide the option of fixed-size time-slices out of a fixed time interval — often for things like multimedia. This is not the same as "low latency". It's not even deterministic latency, except on average. (It's only deterministic if a program not only has a guaranteed amount of runtime over a given time interval, but ALSO has a guaranteed start-time within that interval. The toy numbers below make the difference concrete.) What RTOSes normally provide is deterministic runtime and a guarantee that latency cannot exceed some upper limit. From the number of times the Linux scheduler has been replaced, it should be obvious to all and sundry that providing any kind of guarantee is extremely hard and — as with the O(1) scheduler — even when the guarantee is actually met, you've no guarantee it'll turn out to be the guarantee you want.
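
A back-of-the-envelope sketch (my own toy figures, not any particular RTOS): a task guaranteed 20ms of runtime out of every 100ms interval, with no guaranteed start time within the interval.

    BUDGET, PERIOD = 20, 100  # ms: guaranteed runtime per fixed interval

    # Worst case: the task runs at the very start of one interval and at
    # the very end of the next.
    finish_early = BUDGET                    # done 20ms into interval k
    start_late = PERIOD + (PERIOD - BUDGET)  # starts 80ms into interval k+1
    worst_gap = start_late - finish_early    # 160ms between runs

    # Best case: end of one interval, start of the next, for a gap of 0ms.
    print(f"gap between runs: 0ms to {worst_gap}ms despite a fixed budget")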

A third problem people have tried to tackle is reliability. There's a version of LynxOS (an RTOS offering Linux ABI compatibility, rather than an actual Linux variant) which is FAA-approved for certain tasks (at the lowest certification level). There was, at one point at least, also a carrier-grade Linux distro, but I've not seen that mentioned in a while. If you include security as a facet of reliability, there are also Linux distros that have achieved EAL4+ ratings, possibly EAL5+. Some of the requirements in these projects are mutually exclusive, which is a problem, and clearly their implementations are too, or we'd be seeing projects evolving FROM these points rather than the projects being almost evolutionary dead-ends.

It would seem logical, then, to go back to the experimental kernels where the fringes of OS theory are being developed, dyed and permed. Study what people think might work, rather than the stuff that's already mainstream or already dead, see if there's a way to use what's being discovered to unify currently disparate projects, and see if that unification can become mainstream. Even if it can't, not having to re-invent everything is bound to speed up work on the areas that are least-understood and therefore in most need of work."

United States

Submission + - Steve Fossett Killed By Downdraft (NTSB) (yahoo.com)

jd writes: "The NTSB has now released the text of its examination into the crash of Steve Fossett's aircraft on Sept 3rd, 2007. It concludes that downdrafts were the likely cause of the crash, dragging the plane into the mountain with such force that, even at full power, it would have been impossible to escape the collision. Pilots experienced in the area report that those winds can rip the wings off aircraft and Mark Twain remarked that they could roll up a tin house "like sheet music". One must wonder why such a skilled aviator was taking a gamble with such hostile conditions, given that he was looking for a flat stretch of land to race cars on, but that is one mystery we shall probably never know the answer to."
Biotech

Submission + - Hadrosaur Proteins Sequenced (genomeweb.com)

jd writes: "In a follow-up study to the one on proteins found in a T. Rex bone, the team responsible for the T. Rex study sequenced proteins found in an 80-million year old Hadrosaur fossil. According to the article, the proteins found confirm the results of the T. Rex study, proving that what was found in T. Rex was not a result of modern contamination as had been claimed by skeptics, but was indeed the genuine thing, real dino protein. Furthermore, despite the new fossil being 12 million years older, they claim they got more out — eight collagen peptides and 149 amino acids from four different samples. This, they say, places the Hadrosaur in the same family as T. Rex and Ostriches, but that not enough was recovered to say just how close or distant the relationship was."
Microsoft

Submission + - Did Microsoft Supply Al-Qaida?

jd writes: "In startling revelations, convicted terrorist Ali Saleh Kahlah al-Marri accused Al-Qaida of using public telephones, pre-paid calling cards, Google and Microsoft's Hotmail. Now, whilst the vision of seeing Balmer do time in Gitmo probably appeals to most Slashdotters, the first real story behind all this is that the Evil Bastards aren't using sophisticated methods to avoid detection or monitoring — which tells us just how crappy SIGINT really is right now. If the NSA needs to wiretap the whole of the USA because they can't break into a Hotmail account, you know they've problems. The second real story is that even though e-mail is virtually ubiquitous (the Queen of England started using it in 1975), the media still thinks "technology" and "free" combined is every bit as hot as any sex scandal."
Books

Submission + - Ancient Books Go Online

jd writes: "The BBC is reporting that the United Nations' World Digital Library has gone online with an initial 1,200 offering of ancient manuscripts, parchments and documents. To no great surprise, Europe comes in first with 380 items. South America comes in second with 320, with a very distant third place being given to the Middle East at a paltry 157 texts. This is only the initial round, so the leader board can be expected to change. There are, for example, many Sumerian and Babylonian tablets, many of which are already online elsewhere. Astonishingly, the collection is covered by numerous copyright laws, according to the legal page. Use of material from a given country is subject to whatever restrictions that country places, in addition to any local and international copyright laws. With some of the contributions being over 8,000 years old, this has to be the longest copyright extension ever offered. There is nothing on whether the original artists get royalties, however."
Security

Submission + - GRSecurity "victim of economy" (grsecurity.net)

jd writes: "GRSecurity, one of the mandatory access control patches for the Linux kernel, has lost its financial sponsorship due to the lousy economy and now the team of developers is saying that if they don't get backing for their project soon, they'll be forced to disband. Although I've had a few somewhat heated arguments with the development team, I would be very sad to see GRSecurity vanish. It is a very nice security solution, arguably better than SELinux is in many ways, but hasn't really had much exposure compared to other Linux hardening projects and isn't provided in a ready-built form in the major distros. Is this Darwinism in action, the effects of the Unseen Hand on drugs, or a sobering reflection of how even an Open Source project is vulnerable to the current climate? Can this project be rescued? If the fittest projects get the developers, should it survive? And if it should, what can be done to improve its chances?"
Programming

Submission + - What Parallel Programming? (llnl.gov)

jd writes: "Lawrence-Livermore publishes a nice summary of many (but not all) of the different ins and outs of parallel programming. Different programming languages implement either different models of parallel programming or radically different interpretations. Examples would include Cilk++ (a modded version of Gnu C++), Erlang and Occam (Pascal derivative with more hacks than a newspaper). You could write a small book on the communications libraries that facilitate parallel programming — not in describing them, just listing them. Everything from the major families of message passing (PVM, MPI and BSP) to sharing memory (Distributed Shared Memory, ccNUMA, RDMA) to remote calls (RMI, Corba, RPC) to platform-specific aids (MOSIX, Keerighed and even TIPC), and beyond. There are way, way too many options and no real good guides on when one options should be used above another. I've done my share of parallel processing, but I want to hear other people's opinions on what solutions they've personally tried, what situations they've thought would work (but didn't) or what situations unexpectedly worked better than they'd hoped."
Encryption

Submission + - NIST Cryptographic Hash Contest, round 1 (nist.gov)

jd writes: "NIST has announced the round 1 candidates for the Cryptographic Hash Algorithm Challenge. Of the 64 who submitted entries, 51 were accepted. Of those, in mere days, one has been definitely broken and three others are believed to have been. At this rate, it won't take the couple of years NIST were reckoning to whittle down the field to just one or two.

(In comparison, the European Union's NESSIE project received just one cryptographic hash function for its contest. One has to wonder: if NIST and the crypto experts are so concerned about being overwhelmed with work in the current contest, why did they all but ignore the European effort? A self-inflicted wound might hurt, but it's still self-inflicted.)

Popular wisdom has it that no product will support any of these algorithms for years — if ever. Of course, popular wisdom ignores all the Open Source projects that support cryptography (including the Linux kernel), which could add support for any of them tomorrow. Does it really matter if an algorithm is later found to be flawed, when most of these packages already support algorithms known to be flawed today? Wouldn't it be just, oh, geekier to have passwords hashed with Blue Midnight Wish or SANDstorm rather than boring old MD5, even if it makes no practical difference whatsoever?"
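
To make the "could add support tomorrow" point concrete: in most Open Source code the hash is already a one-line choice. Blue Midnight Wish and SANDstorm obviously aren't in Python's hashlib, so SHA-256 stands in for "shiny new algorithm" below, but swapping in a round 1 candidate would be exactly this much work once a binding exists. (Real password storage also wants salting and stretching, but that's orthogonal.)

    import hashlib

    def hash_password(password: bytes, algorithm: str = "sha256") -> str:
        # The algorithm is just a parameter: trading MD5 for a new
        # candidate is a configuration change, not a rewrite.
        return hashlib.new(algorithm, password).hexdigest()

    print(hash_password(b"hunter2", "md5"))     # boring old MD5
    print(hash_password(b"hunter2", "sha256"))  # stand-in for a candidate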

Hardware Hacking

Submission + - New Attempt at World Land Speed Record (bbc.co.uk)

jd writes: "In the most eccentric of high-speed hardware hacks, Richard Noble's supersonic team is aiming to break their own World Land Speed Record, which currently stands at around 716 mph. This latest project straps a cockpit onto a Typhoon jet engine and a high-power rocket, with the aim of exceeding 1000 mph. The article gives most of the truly geeky stats on this project — nicknamed Bloodhound SSC — but omits that the driver will be experiencing an average of about 1.3g during the 40 seconds it takes to get up to full speed. The prior car to be developed by this team, Thrust SSC, had a jet engine on either side of a main body. At full speed, the sonic shockwave was etched into the sand. The damage caused by the vibrations came perilously close to shattering the entire vehicle. Had they gone much faster, neither the car nor the driver RAF pilot Andy Green would have survived. They also had enormous problems with downforce — the first practice run ripped into the ground badly. The new design is based on their experience with Thrust SSC, particularly on how not to design ground vehicles traveling faster than sound. The new vehicle is expected to debut at a desert near you in 2011."
Security

Submission + - San Francisco Police Network Kidnapped (yahoo.com)

jd writes: "Network Administrator Terry Childs has been arrested for concealing the password for a crucial network that (according to the article) "handles city payroll files, jail bookings, law enforcement documents and official e-mail for San Francisco". Administrators are unable to gain access of any kind and, although the network is operating, city officials are claiming Mr. Childs provided a tap by which information could be accessed by third parties. What is not said is whether third-party access would allow data to be injected into the network, whether the data is encrypted, or whether information exposed is likely to be mundane or be potentially highly delicate. Nor is it explained as to how a single administrator could gain sufficient access rights, given the highly segregated nature of any secure computing environment. It is also unexplained as to why the remaining administrators are having difficulty regaining control, given that all they need is a backup of a password file from the last day they knew the passwords worked. Since the human element is always going to be unpredictable, it would seem obvious that this should wake people up to the needs of good security policies. Throwing it open to Slashdotters, if someone like that were working in your company, would they be any better at recovering?"
Networking

Submission + - World's slowest e-mail, garlic butter is extra (bbc.co.uk)

jd writes: "Bournemouth University has a new e-mail server. Well, snail-mail. Literally. Three snails are fitted with electronic tags which can collect data packets off transmitters — when they go close to one — and likewise deposit the data packet when they go near a reader. Large attachments can take a while. Indeed, any message will take a while — there's an average transfer rate of one packet every 1.96 days. Just imagine what the French would do with a Beowulf cluster of these..."
It's funny.  Laugh.

Submission + - It's quite definitely an ex-parrot.

jd writes: "It turns out, Monty Python was right. The Norwegian Blue parrot is not only real, but is also very, very dead. Perhaps the most remarkable part of this discovery is that not only did parrots once live in Scandanavia, that is where they came from before migrating south. With or without the aid of swallows is not made clear."
