
Submission Summary: 0 pending, 121 declined, 51 accepted (172 total, 29.65% accepted)

Science

Submission + - You really are what you know (bbc.co.uk)

jd writes: "There has been research for some time that shows that London cab driver brains differ from other people's, with considerable enlargement of those areas dealing with spacial relationships and navigation, with follow-up work showing that it wasn't simply a product of driving a lot.

However, up until now it has been disputed whether that brain structure led people to become London cabbies, or whether the structure changed as a result of their intensive training (which requires rote memorization of essentially the entire street map of one of the largest and least-organized cities in the world). This latest study answers that: MRI scans taken before and after the training show that the relevant regions are quite normal beforehand and grow substantially as a result of the training.

The practical upshot of this research is that — even for adult brains, which aren't supposed to change much — what you learn structurally changes your brain. Significantly."

Submission + - When and How to deal with GPL violations? (nicta.com.au) 1

jd writes: "There are many pieces of software out there such as seL4 (kindly brought to my attention by another reader) where the vendor has indeed written something that they're entitled to Close Source but where their closed-source license includes the modifications to GPLed software such as the Linux kernel.

Then there's a second type of behaviour. Code Sourcery produced two versions of their VSIPL++ image processing library — one closed-source, one GPLed. It was extremely decent of them. When Mentor Graphics bought them, they continued developing the closed-source one and discontinued, then deleted, the GPL variant. It's unclear to me whether that's kosher, as the closed variant must contain code that had been GPLed at one point.

Here's the problem: complaining too much will get us code now that maybe 4 or 5 people, tops, will actually care about. It will also make corporations leery of doing any other such work in future, where that work would be of greater value to a greater number of people.

So the question I want to ask is this: When is it a good time to complain? By what rule-of-thumb might you decide that one violation is worth cracking down on and another should be let go to help encourage work we're never going to do ourselves?"

Science

Submission + - Open Source Cancer Research? (guardian.co.uk)

jd writes: "Dr Jay Bradner is claiming that it is possible to conduct cancer research using open source methodology. Certainly, his research lab has produced a few papers of interest, though the page describing the research is filled more with buzzwords (post-genomic?) and hype than with actual objectives and serious strategies. I'm certainly not seeing anything that fits either the "open source" and crowdsource models.

Certainly, there are some areas where open source really is exceedingly useful in science.

Then, there are plenty of projects that use volunteers to help solve complex problems.

So, I'm going to ask what is probably a dumb question — is there actually anything new that science can do with open source techniques? Has that path been mapped out, or are there actually new (as opposed to merely buzzword-compliant) approaches that could be followed which would get useful results?"

Science

Submission + - Brain uses Self-Modifying Code (bbc.co.uk)

jd writes: "Each and every brain cell alters its own DNA thousands of times over a person's lifetime, say researchers from the Roslyn Institute in Edinburgh, Scotland.

The paper, formally published in Nature (abstract visible, article behind paywall), describes the mechanism by which this happens.

I have written to the lead researcher to get confirmation that this is actually a change in the DNA sequence itself and not a change in the epigenome that alters what the DNA transcribes to. He has kindly written back to confirm the findings. It IS a change in the DNA. Every brain cell in you has a genome unique to itself.

In short, the brain is a cluster where each node is running self-modifying code — a practice no computer scientist or software engineer would dream of attempting, considering it far too fragile, too unpredictable and too difficult. At the university I went to, you'd have been murdered in the hallway for proposing even a single-threaded self-modifying algorithm, never mind a few trillion tightly-coupled threads.

The hope in genetics is that this will lead to a better understanding of genetic diseases, such as the various forms of dementia. My fear is that it will have the opposite effect — you can't exactly sequence every cell in the brain of a live patient to see what is going on, which may lead geneticists to write the problem off as too hard.

The other consequence of this find is that we are all chimeras. Human DNA can no longer be regarded as a single thing in a single person, with only a few exceptional cases. The terms "chimera" and "gestalt" apply to everyone on a fantastic scale, which makes them meaningless unless they get redefined to work around the problem.

Arguably, that's an academic point at the moment. Nomenclature is nothing too serious and there's no actual hard evidence that it would cause problems in DNA forensics, though the mere fact that there's a possibility might cause problems in the courtroom whatever the science itself says about the impact."

Technology

Submission + - Bloodhound SSC goes partially open source (bloodhoundssc.com)

jd writes: "I've been monitoring the progress of Bloodhound SSC (the car aiming for the 1,000 MPH record) and it looks like they're opting for some interesting tactics. In April, the car itself went partially open source, with a complete set of schematics and specifications and an invite for engineering bugfixes. According to them, it's the first time a racing team has done this. Sounds likely enough. The latest patches to be released were a tripling in fin size and a switch to steel brakes because carbon fibre would explode."
Politics

Submission + - Wisconsin tried to ban Internet2 (the-scientist.com)

jd writes: "The Wisconsin legislature attempted to pass a budget that would ban any school, college or university from being a member of Internet2 or WiscNet on the grounds that such networks "unfairly competed" against commercial offerings.

Of course, Internet2 is already partly supplied by those very same commercial vendors, and last I heard there weren't too many DSL providers offering 100 gigabit pipes running onto 9 terabit backbones. Nor, as we all know, do that many ISPs offer IPv6. So who, precisely, was Wisconsin concerned about?

For now, there has been a reprieve. But the legislature has made it clear that academic networks will not be tolerated in future and it does not seem far-fetched to expect other legislatures to prohibit such systems."

Science

Submission + - Just In: Yellowstone is big (livescience.com) 1

jd writes: "Really big. By using electrical conductivity tests rather than seismic waves, geologists have remapped the Yellowstone caldera. Whilst seismic waves indicate differences in the reflectivity of different materials, it doesn't show up everything and contrast isn't always great. By looking at the electrical conductivity instead, different charcacteristics of molten and semi-molten rock can be measured and observed.

The result — the caldera is far larger than had previously been suspected. This doesn't alter the chances of an eruption, and it's not even clear it would change the scale (prior eruptions are easy to study, as they're on the surface), but it certainly changes the dynamics and our understanding of this fierce supervolcano."

Games

Submission + - Realtime Worlds Goes Under (bbc.co.uk)

jd writes: "On June 29th, Realtime Worlds released APB in North America. Less than two months later, they have gone bankrupt with the loss of 200 jobs (an additional 60 being shed last week). Created by the author of Lemmings and Grand Theft Auto, the company was probably the largest games outfit in the UK (Scotland to be exact), so this is more than just a loss to gamers. According to the article, APB had a poor reception, so it is unclear if this failure was genuinely a result of poor market conditions (as claimed in the article) or a result of a failure to understand the gamers."
Hardware

Submission + - When mistakes improve performance (bbc.co.uk)

jd writes: "Professor Rakesh Kumar at the University of Illinois has produced research showing that allowing communication errors between microprocessor components and then making the software more robust will actually result in chips that are faster and yet require less power. His argument is that at the current scale errors in transmission occur anyway and that the efforts of chip manufacturers to hide these to create the illusion of perfect reliability simply introduces a lot of unnecessary expense, demands excessive power and deoptimises the design. He favors a new architecture, which he calls the "stochastic processor" which is designed to gracefully handle data corruption and error recovery. He believes he has shown such a design would work and that it will permit Moore's Law to continue to operate into the foreseeable future. However, this is not the first time someone has tried to fundamentally revolutionize the CPU. The Transputer, the AMULET, the FM8501, the iWARP and the Crusoe were all supposed to be game-changers but died a cold, lonely death instead — and those were far closer to design philosophies programmers are currently familiar with. Modern software simply isn't written with the level of reliability the Stochastic Processor requires in mind (and many software packages are too big and too complex to port), and the volume of available software frequently makes or breaks new designs. Will this be "interesting but dead-end" research, or will the Professor pull off a CPU architectural revolution really not seen since the microprocessor was designed?"

Submission + - Car computers dangerously insecure (bbc.co.uk) 7

" rel="nofollow">jd writes: "According to the BBC, it is possible to remotely control the brakes, dashboard, car locks, engine and seatbelts. Researchers have actually done so, with nothing more than a packet sniffer to analyze the messages between the onboard computer systems and a means of injecting packets. There is no packet authentication or channel authentication of any kind, no sanity-checking and no obvious data validation. It appears to need a hardware hack, at present — such as a wireless device plugged into the diagnostics port — but it's not at all clear that this will be a limiting factor. There's no shortage of wireless devices that must make use of the fact you can inject packets to turn on/off the engine, lock/unlock the doors, track the car, etc. If it's a simple one-wire link, all you need is a transmitter tuned to that wire."
Operating Systems

Submission + - Experimental OS' for Evil Geniuses (arizona.edu)

jd writes: "There are now a vast range of experimental Open Source OS' of one kind or another. This is a very quick run-through of a few such OS' that might offer ideas or insights that could lead to improvements in Linux and/or the *BSDs. One of the problems with software development is that research is time-consuming, people-consuming, money-consuming and nerve-consuming. The reason you mostly see tried-and-true ideas being added in gradually, rather than really revolutionary stuff, is that revolutionary programmers are thin on the ground, and that revolutionary implementations are often too specific to be put into a general-purpose kernel if you want to keep the bloat down. The solution? Examine the revolutionary implementations of ideas by others, learn, then steal. Ooops, that should be "paraphrase".

First off, Scout is a networked/distributed OS for embedded devices. The networking part is probably done better in Plan 9/Inferno, and the embedded part is probably done better in specialized embedded OS', but the sheer difficulty of combining these ideas in a space that could fit into a digital camera back when Scout was being developed means it has some rather ingenious ideas that are worth revisiting, even though Scout itself is abandonware. Besides, for all the good ideas in Plan 9/Inferno, it's not an OS design anyone has exactly picked up and run with, and if Inferno hasn't been formally abandoned it's as good as abandoned, so it offers no advantage in that regard.

Barrelfish is perhaps the most recent, and seems to let you build a highly-scalable, high-performance heterogeneous cluster, though I'm suspicious of Microsoft's involvement. Oh, I can believe they want to fund research that they can use to make their own OS' run faster, after all the complaints over Vista, but they're not exactly known for supporting non-Intel architectures. On the other hand, ETH Zurich are very respectable and I could see them coming up with some neat code. Anyway, the idea of having a cluster that can work over multiple architectures (ie: not an SSI-based cluster) is potentially very interesting.

But they're not the only guys doing interesting work. K42 (with HHGTTG references in the docs) is supposed to be a highly scalable, highly fault-tolerant OS from IBM, who quite definitely have an interest in doing precisely that kind of work. Given IBM is currently selling Linux on its HPC machines, it would be reasonable to suppose the K42 research is somehow related, perhaps with interesting ideas working their way into the Linux group. And if that isn't happening, it damn well should be, as directly as licenses and architecture permit.

The L4 microkernel group has been around for a long time now. Although microkernels have their issues, running modules in userspace has advantages for security and communicating via messages would (in principle) allow those kernel modules and kernel threads to migrate between cluster nodes — a major headache that Linux-based clusters (such as OpenMOSIX) have a very hard time solving.
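As a rough sketch of why that matters (my own illustration, not L4's actual API, and the message layout is invented): a request framed as a self-contained message addressed to a logical port can be delivered by a local queue today and shipped to a server on another node tomorrow, without the client changing.

    #include <stdint.h>
    #include <stdio.h>

    /* Invented message format: addressed to a logical service port and
       carrying everything needed to handle it, rather than being a call
       into shared kernel memory. */
    struct ipc_msg {
        uint32_t dest_port;   /* logical service, not a memory address */
        uint32_t op;          /* e.g. 1 = READ_BLOCK                   */
        uint64_t arg;         /* e.g. block number                     */
    };

    /* Stand-in transport: a local queue now, potentially a network hop
       to another cluster node later. */
    static void send_msg(const struct ipc_msg *m)
    {
        printf("deliver op=%u to port=%u (arg=%llu)\n",
               m->op, m->dest_port, (unsigned long long)m->arg);
    }

    int main(void)
    {
        struct ipc_msg read_req = { .dest_port = 7, .op = 1, .arg = 42 };
        send_msg(&read_req);  /* the client never needs to know where port 7 lives */
        return 0;
    }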

One Open Source microkernel that does exactly that is Amoeba, though it has become abandonware. It's a truly amazing piece of engineering for distributed computing that is slowly becoming an amazing piece of dead code through bitrot. However, if you want to set out to compete with Kerrighed or MOSIX, this might be a good place to look for inspiration.

Then there's C5. Fortunately not the one invented by Sir Clive Sinclair, but rather an intriguing "high-availability carrier-grade" microkernel. Jaluna is the Open Source OS toolkit that includes the C5 microkernel. Now, many are the boasts of "carrier-grade", but few are the systems that merit such a description. The term is usually taken to mean that the OS has five-nines reliability (ie: it will be up 99.999% of the time). One of the problems in this case, though, is that if the microkernel requires additional layers to be useful, five nines in the microkernel alone doesn't mean anything useful. You could build an OS that only supported /dev/null as a device, and do so with far, far greater reliability than five nines, but so what? Nonetheless, there may be things in C5 that are worth looking at.
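For scale, the arithmetic behind five nines is trivial (nothing C5-specific here, just the raw calculation):

    #include <stdio.h>

    int main(void)
    {
        double availability     = 0.99999;               /* "five nines" */
        double minutes_per_year = 365.25 * 24.0 * 60.0;  /* ~525960      */
        double allowed_downtime = (1.0 - availability) * minutes_per_year;
        printf("allowed downtime: %.2f minutes per year\n", allowed_downtime);
        /* prints roughly 5.26 minutes per year */
        return 0;
    }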

Calypso is a "metacomputing OS", which seems to be the latest buzzword-compliant neologism for a pile-of-PCs cluster. On the other hand, abstract and efficient parallel systems mean better utilization of SMP and multicore systems, and therefore better servers and clients for MMORPGs.

I think most Slashdotters will be familiar with FreeRTOS — an Open Source version of a very popular real-time OS. This OS is being used by some members of Portland State University's rocketry group, as it is absolutely tiny and will actually fit on embedded computers small enough to shove into an amateur rocket. There's a commercial version that has "extra features". I don't like — or trust — companies that do this, as altering the number of pathways in code will alter the quality of the code that is left. Unless it's independently verified and QA'd (doubtful, given the approach being followed), it is not safe to assume that, because the full source is good, a cut-down version won't be destabilized. On the other hand, if you want a simple embedded computational device (for your killer robot army or whatever), FreeRTOS looks sufficiently general-purpose and sufficiently hard real-time.
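For the curious, a minimal periodic task in the standard FreeRTOS task API looks roughly like the sketch below. This is my own illustrative fragment: the 10 ms period and the task itself are made up, and a real build still needs the port and configuration files for your particular microcontroller.

    #include "FreeRTOS.h"
    #include "task.h"

    /* Hypothetical periodic task: wakes every 10 ms, does its work, sleeps. */
    static void vSensorTask(void *pvParameters)
    {
        TickType_t xLastWake = xTaskGetTickCount();
        (void)pvParameters;
        for (;;)
        {
            /* read sensors, log telemetry, etc. */
            vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(10));
        }
    }

    int main(void)
    {
        xTaskCreate(vSensorTask, "sensor", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 2, NULL);
        vTaskStartScheduler();   /* never returns if the scheduler starts */
        for (;;) { }
    }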

There are, of course, plenty of other OS' — some closed-source (such as ThreadX) and some open-source (such as MINIX 3) — which have some, indeed sometimes many, points of interest. However, there's not much point in listing every OS out there (Slashdot would run out of space, I'd get tired of typing, and I'd rapidly run out of put-downs). Besides which, at the present time, the biggest problem people are trying to solve is multi-tasking on SMP and/or multi-core architectures and/or clusters, grids and clouds. Parallel systems are bloody difficult.

The second problem is how to provide the option of fixed-size time-slices out of a fixed time interval — often for things like multimedia. This is not the same as "low latency". It's not even deterministic latency, except on average. (It's only deterministic if a program not only has a guaranteed amount of runtime over a given time interval, but ALSO has a guaranteed start-time within that interval.) What RTOS' normally provide is deterministic runtime and a guarantee that the latency cannot exceed some upper limit. From the number of times the scheduler in Linux has been replaced, it should be obvious to all and sundry that providing any kind of guarantee is extremely hard and — as with the O(1) scheduler — even when the guarantee is actually met, you've no guarantee it'll turn out to be the guarantee you want.
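To make the runtime-versus-latency distinction concrete, here's a toy simulation (purely illustrative, not any real scheduler's API): the task always gets its 20 ms of CPU inside every 100 ms window, so its runtime is deterministic, but where inside the window it runs varies, so its start latency is not.

    #include <stdio.h>

    #define PERIOD_MS 100
    #define BUDGET_MS  20

    int main(void)
    {
        /* Offsets the "scheduler" happens to pick inside each window. */
        int start_offset[] = { 0, 35, 10, 70, 55 };
        for (int w = 0; w < 5; w++)
        {
            int start = w * PERIOD_MS + start_offset[w];
            printf("window %d: runs %3d..%3d ms (budget %d ms met, start latency %2d ms)\n",
                   w, start, start + BUDGET_MS, BUDGET_MS, start_offset[w]);
        }
        return 0;
    }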

A third problem people have tried to tackle is reliability. There's a version of LynxOS (a POSIX RTOS with Linux compatibility, rather than a Linux variant) which is FAA-approved for certain tasks (it has the lowest certification possible). There was, at one point at least, also a carrier-grade Linux distro, but I've not seen that mentioned for a while. If you include security as a facet of reliability, then there are also Linux distros that have achieved EAL4+ ratings, possibly EAL5+. Some of the requirements in these projects are mutually exclusive, which is a problem, and clearly the ways the requirements were implemented are too, or we'd be seeing projects evolving FROM these points rather than the projects being almost evolutionary dead-ends.

It would seem logical, then, to go back to the experimental kernels where the fringes of OS theory are being developed, dyed and permed. Study what people think might work, rather than the stuff that's already mainstream or already dead, see if there's a way to use what's being discovered to unify currently disparate projects, and see if that unification can become mainstream. Even if it can't, not having to re-invent everything is bound to speed up work on the areas that are least-understood and therefore in most need of work."

United States

Submission + - Steve Fossett Killed By Downdraft (NTSB) (yahoo.com)

jd writes: "The NTSB has now released the text of its examination into the crash of Steve Fossett's aircraft on Sept 3rd, 2007. It concludes that downdrafts were the likely cause of the crash, dragging the plane into the mountain with such force that, even at full power, it would have been impossible to escape the collision. Pilots experienced in the area report that those winds can rip the wings off aircraft and Mark Twain remarked that they could roll up a tin house "like sheet music". One must wonder why such a skilled aviator was taking a gamble with such hostile conditions, given that he was looking for a flat stretch of land to race cars on, but that is one mystery we shall probably never know the answer to."
Biotech

Submission + - Hadrosaur Proteins Sequenced (genomeweb.com)

jd writes: "In a follow-up study to the one on proteins found in a T. Rex bone, the team responsible for the T. Rex study sequenced proteins found in an 80-million year old Hadrosaur fossil. According to the article, the proteins found confirm the results of the T. Rex study, proving that what was found in T. Rex was not a result of modern contamination as had been claimed by skeptics, but was indeed the genuine thing, real dino protein. Furthermore, despite the new fossil being 12 million years older, they claim they got more out — eight collagen peptides and 149 amino acids from four different samples. This, they say, places the Hadrosaur in the same family as T. Rex and Ostriches, but that not enough was recovered to say just how close or distant the relationship was."
Microsoft

Submission + - Did Microsoft Supply Al-Qaida? 2

jd writes: "In startling revelations, convicted terrorist Ali Saleh Kahlah al-Marri accused Al-Qaida of using public telephones, pre-paid calling cards, Google and Microsoft's Hotmail. Now, whilst the vision of seeing Balmer do time in Gitmo probably appeals to most Slashdotters, the first real story behind all this is that the Evil Bastards aren't using sophisticated methods to avoid detection or monitoring — which tells us just how crappy SIGINT really is right now. If the NSA needs to wiretap the whole of the USA because they can't break into a Hotmail account, you know they've problems. The second real story is that even though e-mail is virtually ubiquitous (the Queen of England started using it in 1975), the media still thinks "technology" and "free" combined is every bit as hot as any sex scandal."
