
Submission Summary: 0 pending, 112 declined, 38 accepted (150 total, 25.33% accepted)

Submission + - Cancer researcher vanishes with tens of millions of dollars->

jd writes: Steven Curley, MD, who ran the Akesogenx corporation (and may indeed have been the sole employee after the dismissal of Robert Zavala) had been working on a radio-frequency cure for cancer with an engineer by the name of John Kanzius.

Kanzius died, Steven Curley set up the aforementioned parallel company that bought all the rights and patents to the technology before shuttering the John Kanzius Foundation. So far, so very uncool.

Last year, just as the company started approaching the FDA about clinical trials, Dr Curley got blasted with lawsuits accusing him of loading his shortly-to-be ex-wife's computer with spyware.

Two weeks ago, there was to be a major announcement "within two weeks". Shortly after, the company dropped off the Internet and Dr Curley dropped off the face of the planet.

Robert Zavala is the only name mentioned that could be a fit for the company's DNS record owner. The company does not appear to have any employees other than Dr Curley, making it very unlikely he could have ever run a complex engineering project well enough to get to trial stage. His wife doubtless has a few scores to settle. Donors, some providing several millions, were getting frustrated — and as we know from McAfee, not all in IT are terribly sane. There are many people who might want the money and have no confidence any results were forthcoming.

So, what precisely was the device? Simple enough. Every molecule has an absorption line; it cannot efficiently absorb energy at any other frequency. This is a technique widely exploited in physics, chemistry and astronomy, and people have looked into ways of using it in medicine for a long time.

The idea was to inject patients with nanoparticles on an absorption line well clear of anything the human body cares about. These particles would be preferentially picked up by cancer cells because they're greedy. Once that's done, you blast the body at the specified frequency. The cancer cells are charbroiled and healthy cells remain intact.
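The frequency selectivity that makes this work can be sketched with a toy model: treat absorption as a Lorentzian line shape and compare how strongly the particles absorb at the drive frequency versus tissue whose nearest resonance is far away. All numbers below are illustrative assumptions, not actual parameters of the Kanzius device.

```python
# Toy model of selective RF heating via an absorption line.
# Every number here is made up for illustration.

def lorentzian(f, f0, gamma):
    """Relative absorption at frequency f for a line centred at f0
    with half-width gamma (peak normalised to 1)."""
    return gamma**2 / ((f - f0)**2 + gamma**2)

f_drive = 13.56e6      # drive frequency (Hz), placed on the particle line
f_particle = 13.56e6   # nanoparticle absorption line (assumed)
f_tissue = 100e6       # nearest tissue resonance, assumed well clear
gamma = 0.5e6          # linewidth (Hz), assumed

particle_absorption = lorentzian(f_drive, f_particle, gamma)  # on resonance: 1.0
tissue_absorption = lorentzian(f_drive, f_tissue, gamma)      # far off resonance

selectivity = particle_absorption / tissue_absorption
print(f"particle/tissue absorption ratio: {selectivity:.0f}")
```

With these (invented) numbers the particles absorb tens of thousands of times more power than the surrounding tissue, which is the whole point: heat the labelled cells, spare the rest.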

It's an idea that's so obvious I was posting about it here and elsewhere in 1998. The difference is, they had a prototype that seemed to work.

But now there is nothing but the sound of Silence, a suspect list of thousands and a list of things they could be suspected of stretching off to infinity. Most likely, there's a doctor sipping champagne on some island with no extradition treaty. Or a future next-door neighbour to Hans Reiser. Regardless, this will set back cancer research. Money is limited and so is trust. The work was, in effect, crowdfunded, and that model, too, will take a blow if theft was involved.

Or it could just be the usual absent-minded scientist discovering he hasn't the skills or awesomeness needed, but has got too much pride to admit it, as has happened in so many science fraud cases.

Link to Original Source

Submission + - Ask Slashdot: Bitcoin over Tor is a bad idea?->

jd writes: Researchers studying Bitcoin have determined that the level of anonymity of the cryptocurrency is low and that using Bitcoin over Tor provides an opportunity for a Man-in-the-Middle attack against Bitcoin users. (I must confess, at this point, that I can certainly see anonymity limitations helping expose what machine is linked to what Bitcoin ID, putting users at risk of exposure, but I don't see how this is a function of Tor, as the paper implies.)

It would seem worthwhile to examine both the Tor and Bitcoin protocols to establish whether there is an actual threat. If there is, it must surely apply to any semi-anonymous protocol run over Tor, and Bitcoin has limited value as a cryptocurrency if all transactions have to be carried out in plain sight.
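One concrete reason a hostile relay can tamper with Bitcoin traffic: the P2P message header carries only a checksum (the first four bytes of a double SHA-256 of the payload), not any authentication. Anyone in the path can rewrite a payload and recompute a valid checksum without knowing any secret. A minimal sketch of that check (message framing only, not a real node; the payloads are invented):

```python
import hashlib

def checksum(payload: bytes) -> bytes:
    """Bitcoin P2P checksum: first 4 bytes of SHA256(SHA256(payload))."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]

def verify(payload: bytes, chk: bytes) -> bool:
    """What a receiving node checks: integrity only, no authentication."""
    return checksum(payload) == chk

# An honest peer sends this over a plaintext connection:
msg, chk = b"addr 203.0.113.5:8333", checksum(b"addr 203.0.113.5:8333")
assert verify(msg, chk)

# A man-in-the-middle (e.g. a hostile exit relay) rewrites the message
# and recomputes the checksum; no key is needed:
msg, chk = b"addr 198.51.100.9:8333", checksum(b"addr 198.51.100.9:8333")
assert verify(msg, chk)   # the victim's node cannot tell the difference
print("tampered message accepted")
```

Whether this is a flaw in Tor or just a property of running an unauthenticated protocol over any untrusted path is exactly the question the submission raises.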

What are the opinions of other Slashdottians on this announcement? Should we be working on an entirely new cryptocurrency system? Is this a problem with Tor? Is this a case of the Scarlet Fish (aka a red herring) or something to take seriously?

Link to Original Source

Submission + - New revokable identity-based encryption scheme proposed->

jd writes: Identity-based public key encryption works on the idea of using something well-known (like an e-mail address) as the public key and having a private key generator do some wibbly-wobbly timey-wimey stuff to generate a secure private key out of it. A private key I can understand; secure is another matter.
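The moving parts can be sketched in a few lines. This is a toy, not a real IBE scheme (real constructions such as Boneh-Franklin use pairing-based cryptography); HMAC here merely stands in for the key-derivation role of the private key generator, and all names are invented:

```python
import hmac
import hashlib

class PrivateKeyGenerator:
    """Toy stand-in for an IBE private key generator (PKG): it holds a
    master secret and derives a per-identity private key on request.
    HMAC only illustrates the trust structure, not the real maths."""

    def __init__(self, master_secret: bytes):
        self._master = master_secret

    def extract(self, identity: str) -> bytes:
        # Anyone can name the identity (that IS the public key);
        # only the PKG, holding the master secret, can derive the
        # matching private key, which it issues after verifying you
        # really own that identity.
        return hmac.new(self._master, identity.encode(), hashlib.sha256).digest()

pkg = PrivateKeyGenerator(b"master-secret-held-only-by-the-PKG")
alice_sk = pkg.extract("alice@example.org")
print(len(alice_sk))   # a 32-byte derived private key
```

The structural point: the "public key" is just a string, so the entire security of the system hangs on the PKG and its master secret, which is why revocation is so awkward, as the submission goes on to discuss.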

In fact, the paper notes that security has been a big hassle in IBE-type encryption, as has revocation of keys. The authors claim, however, that they have accomplished both. Which implies the public key can't be an arbitrary string like an e-mail address, since presumably you would still want messages going to said address; otherwise why bother revoking when you could just change address?

Anyway, this is not the only cool new crypto concept in town, but it is certainly one of the most intriguing, as it would be a very simple platform for building mostly-transparent encryption into typical consumer apps. If it works as advertised.

I present it to Slashdot readers, to engender discussion on the method, RIBE in general and whether (in light of what's known) default strong encryption for everything is something users should just get whether they like it or not.

Link to Original Source
Android

Submission + - Petition to make Patent Trolls PAY->

jd writes: "The makers of X-Plane, Laminar Research, are unhappy. Very unhappy. They are being sued by a patent troll (Uniloc) over using an industry-standard Android library for copy protection. Essentially, if the troll wins, it will shut down Android (and, by implication the Kindle) because existing app writers aren't able to pay the sorts of money being asked. Open Source may survive, but most Android apps are not Open Source.

Copy protection brings its own issues, but setting those aside, this is a serious effort to bring patent trolling (and software patents) under some sort of control. This is one of those times where the Slashdot Effect could really be useful. If enough people sign, given the increasing hatred in industry towards trolls, we might see something done about it for a change."

Link to Original Source
Programming

Submission + - What modern paradigms are worth pursuing?->

jd writes: "There seem to be a number of new paradigms emerging in the programming world. Templates and other similar features don't seem to have solved the problems of complexity in modern software, with the inevitable result of people inventing other forms of abstraction.

Of these, the two that seem the most significant are Aspect-Oriented Programming (an example of a compiler can be found here) and Feature-Oriented Programming (again, a sample compiler can be found at http://wwwiti.cs.uni-magdeburg.de/iti_db/forschung/fop/featurec/).
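For readers who haven't met AOP: the core idea is keeping cross-cutting concerns (logging, timing, access control) out of the business logic they wrap. Python decorators give a rough flavour of it. This is a loose analogy only, not AspectC++ or FeatureC++, where aspects are woven into join points across a whole codebase at compile time:

```python
import functools

calls = []   # where the "aspect" records what it saw

def traced(func):
    """A crude aspect: logging woven around a function without
    touching the function's own body."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        calls.append(f"enter {func.__name__}{args}")
        result = func(*args, **kwargs)
        calls.append(f"exit {func.__name__} -> {result}")
        return result
    return wrapper

@traced
def area(w, h):
    # pure business logic; it knows nothing about tracing
    return w * h

area(3, 4)
print(calls)
```

The decorator is the join point; in a real AOP system you would state *once* that every public function gets this treatment, rather than tagging each one.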

Intel, on the other hand, is disregarding the complexity of the problems and is focusing on the complexity of the solutions. They have bought and fully opened the Cilk++ frontend for G++, which adds instruction-level parallelism to C++ programs.

But are these actually any use? I don't expect to see businesses crying out for coders experienced in FOP, nor do I expect to see complex projects such as KDE exploit such features. Although Cilk and Cilk++ have now been out a while, I can name no program that uses them.

Are they underused because nobody's heard of them or because nobody who has heard of them has found anything they're a good solution for? Are additional layers on top of Object Oriented languages the equivalent of Fifth-Generation Languages — a warning flag that the entire approach has hit a brick wall, requiring a rethink rather than a new layer?"

Link to Original Source
AMD

Submission + - CPU competition heating up in 2012?->

jd writes: "2012 promises to be a fun year for hardware geeks, with three new "Aptiv-class" MIPS64 cores being circulated in soft form, a quad-core ARM A15, a Samsung ARM A9 variant, a seriously beefed-up 8-core Intel Itanium and AMD's mobile processors. There's a mix here of chips actually out, ready to be put on silicon, and in last stages of development. Obviously these are for different users (mobile CPUs don't generally fight for marketshare with Itanium dragsters) but it is still fascinating to see the differences in approach and the different visions of what is important in a modern CPU.

Combine this with the DDR4 news reported earlier, and this promises to be a fun year, with many new machines likely to appear that are radically different from the last generation.

Which leaves just one question — which Linux architecture will be fully updated first?"

Link to Original Source

Submission + - Handling large amounts of data with complex relationships?->

jd writes: "This is a problem I've mentioned in a couple of posts, but I really need the expert advice only Slashdot can offer. I have a lot of old photos (many hundreds) and old negatives (about 7,500 or so) covering 150 years and four different branches of the family.

The first challenge is to find a way to index every scan (date, geography, people) to be able to relate the images. Google+/Picasa doesn't even come close to what is needed — its capacity to relate information is very limited.

The second challenge is to identify major landmarks. Few of the pictures have any information and whilst I can identify some places I cannot identify everything. Not even close. Searching the web for similar images using the image as the "keyword" — that is an interesting challenge.

The third challenge is to store the images. Each of the scans is around 3.5 gigabytes in size using CCITT 4 compressed TIFF files. That gives me a storage requirement of 28 (SI) terabytes (27.3 real terabytes), which is more than I really want. Since I am producing a digital backup of the negatives, I don't want to lose resolution or detail where I can avoid it. Clearly, I can't avoid it completely — I can't afford a personal data silo — but keeping loss to a minimum is important.
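The arithmetic behind those figures, assuming roughly 8,000 scans in total (the 7,500 negatives plus a few hundred prints) at 3.5 gigabytes each; whether "gigabyte" is read as decimal or binary decides which of the two quoted totals you get:

```python
SCANS = 8_000     # assumed: ~7,500 negatives plus a few hundred prints
SIZE_GB = 3.5     # per CCITT-4 TIFF scan, as stated

# Read as SI gigabytes (10^9 bytes):
decimal_tb = SCANS * SIZE_GB * 1e9 / 1e12        # SI terabytes

# Read as binary gibibytes (2^30 bytes):
binary_tib = SCANS * SIZE_GB * 2**30 / 2**40     # tebibytes

print(f"{decimal_tb:.1f} TB (SI), {binary_tib:.2f} TiB")  # 28.0 TB (SI), 27.34 TiB
```

Either way it lands in the high-twenties-of-terabytes range the submission quotes, which is well past casual NAS territory.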

What would people suggest as the best solution to these various technical problems? Besides getting a brain transplant and a new hobby."

Link to Original Source
Science

Submission + - You really are what you know->

jd writes: "There has been research for some time that shows that London cab driver brains differ from other people's, with considerable enlargement of those areas dealing with spacial relationships and navigation, with follow-up work showing that it wasn't simply a product of driving a lot.

However, up until now it has been disputed whether the brain structure led people to become London cabbies or whether it changed as a result of their intensive training (which requires rote memorization of essentially the entire street map of one of the largest and least-organized cities in the world). This latest study answers that: MRI scans before and after the training show that the relevant regions of the brain grow substantially as a result of the training, and that they're quite normal beforehand.

The practical upshot of this research is that — even for adult brains, which aren't supposed to change much — what you learn structurally changes your brain. Significantly."

Link to Original Source

Submission + - When and How to deal with GPL violations?->

jd writes: "There are many pieces of software out there such as seL4 (kindly brought to my attention by another reader) where the vendor has indeed written something that they're entitled to Close Source but where their closed-source license includes the modifications to GPLed software such as the Linux kernel.

Then there's a second type of behaviour. Code Sourcery produced two versions of their VSIPL++ image processing library — one closed-source, one GPLed. It was extremely decent of them. When Mentor Graphics bought them, they continued developing the closed-source one and discontinued, then deleted, the GPL variant. It's unclear to me if that's kosher, as the closed variant must contain code that had been GPLed at one point.

Here's the problem: Complaining too much will mean we get code now that maybe 4 or 5 people tops will actually care about. It will also make corporations leery of any other such work in future, where that work will be of greater value to a greater number of people.

So the question I want to ask is this: When is it a good time to complain? By what rule-of-thumb might you decide that one violation is worth cracking down on and another should be let go to help encourage work we're never going to do ourselves?"

Link to Original Source
Science

Submission + - Open Source Cancer Research?->

jd writes: "Dr Jay Bradner is claiming that it is possible to conduct cancer research using open source methodology. Certainly, his research lab has produced a few papers of interest, though the page describing the research is filled more with buzzwords (post-genomic?) and hype than with actual objectives and serious strategies. I'm certainly not seeing anything that fits either the "open source" and crowdsource models.

Certainly, there are some areas where open source really is exceedingly useful in science.

Then, there are plenty of projects that use volunteers to help solve complex problems.

So, I'm going to ask what is probably a dumb question — is there actually anything new that science can do with open source techniques? Has that path been mapped out, or are there actually new (as opposed to merely buzzword-compliant) approaches that could be followed which would get useful results?"

Link to Original Source
Science

Submission + - Brain uses Self-Modifying Code->

jd writes: "Each and every brain cell alters its own DNA thousands of times over a person's lifetime, say researchers from the Roslyn Institute in Edinburgh, Scotland.

The paper, formally published in Nature (abstract visible, article behind paywall), describes the mechanism by which this happens.

I have written to the lead researcher to get confirmation that this is actually a change in the DNA sequence itself and not a change in the epigenome that alters what the DNA transcribes to. He has kindly written back to confirm the findings. It IS a change in the DNA. Every brain cell in you has a genome unique to itself.

In short, the brain is a cluster where each node runs self-modifying code, a practice no computer scientist or software engineer would dream of attempting, considering it too fragile, too unpredictable and too difficult. At the university I went to, you'd have been murdered in the hall for proposing even a single-threaded self-modifying algorithm, never mind a few trillion tightly-coupled threads.
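For contrast, here is about the tamest software analogue possible, a function that rewrites itself at runtime, and even this already shows why engineers avoid the pattern: behaviour now depends on call history, so the same call site gives different answers on different calls:

```python
def unstable():
    """Replaces its own implementation after the first call --
    a deliberately tame analogue of a self-modifying node."""
    global unstable

    def replacement():
        return "mutated"

    unstable = replacement   # the code rewrites the name it lives under
    return "original"

first, second = unstable(), unstable()
print(first, second)   # same call site, different behaviour
```

Now imagine debugging that when there are trillions of such nodes, each mutating independently, and you have the computer scientist's view of what the Roslin result describes.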

The hope in genetics is that this will lead to a better understanding of genetic diseases, such as the various forms of dementia. My fear is that it will have the opposite effect — you can't exactly sequence every cell in the brain of a live patient to see what is going on, which may lead geneticists to rule the problem too hard.

The other consequence of this find is that we are all chimeras. Human DNA can no longer be regarded as a single thing in a single person, with only a few exceptional cases. The terms "chimera" and "gestalt" apply to everyone on a fantastic scale. Which makes them meaningless, unless they get redefined to work around the problem.

Arguably, that's an academic point at the moment. Nomenclature is nothing too serious and there's no actual hard evidence that it would cause problems in DNA forensics, though the mere fact that there's a possibility might cause problems in the courtroom whatever the science itself says about the impact."

Link to Original Source
Technology

Submission + - Bloodhound SSC partially opens source->

jd writes: "I've been monitoring the progress of Bloodhound SSC (the car aiming for the 1,000 MPH record) and it looks like they're opting for some interesting tactics. In April, the car itself went partially open source, with a complete set of schematics and specifications and an invite for engineering bugfixes. According to them, it's the first time a racing team has done this. Sounds likely enough. The latest patches to be released were a tripling in fin size and a switch to steel brakes because carbon fibre would explode."
Link to Original Source
Politics

Submission + - Wisconsin tried to ban Internet2->

jd writes: "The Wisconsin legislature attempted to pass a budget that would ban any school, college or university from being a member of Internet2 or WiscNet on the grounds that such networks "unfairly competed" against commercial offerings.

Of course, Internet2 is already partly supplied by those very same commercial vendors and last I heard there weren't too many DSL providers offering 100 gigabit pipes running onto 9 terabit backbones. Nor, as we all know, do that many ISPs offer IPv6. So who, precisely, were Wisconsin concerned about?

For now, there has been a reprieve. But the legislature has made it clear that academic networks will not be tolerated in future and it does not seem far-fetched to expect other legislatures to prohibit such systems."

Link to Original Source
Science

Submission + - Just In: Yellowstone is big->

jd writes: "Really big. By using electrical conductivity tests rather than seismic waves, geologists have remapped the Yellowstone caldera. Whilst seismic waves indicate differences in the reflectivity of different materials, it doesn't show up everything and contrast isn't always great. By looking at the electrical conductivity instead, different charcacteristics of molten and semi-molten rock can be measured and observed.

The result — the caldera is far larger than had previously been suspected. This doesn't alter the chances of an eruption, it's not even clear it would change the scale (prior eruptions are very easy to study as they're on the surface) but it certainly changes the dynamics and our understanding of this fierce supervolcano."

Link to Original Source
