
Comment Re:That's Baby Crying Frequency (Score 1) 176

As the aforementioned 1986 article puts it, "we might just as plausibly conclude that the reason our hair is brownish is that it enabled our monkey ancestors to hide amongst the coconuts".
Personally, the first things that come to mind when I see someone scraping a chalkboard are to
- cover my ears
- throw something at the source
Certainly "feed the source of the noise" would not be high on my list of instinctive priorities.
You might also notice that _actual baby crying_ is nowhere near as repellent.

Comment Re:I'd be wary of Google services (Score 1) 368

Sure, I guess Analytics, Blogger, Calendar, Chrome, Code, Docs, Groups, Language Tools, Picasa, Maps, OS, Reader, Scholar, Talk, Webmaster Tools, to name a few, are going to join the stack of corpses any minute now. Sure bet. Or maybe you just have to fail sometimes when you push out new services by the dozen, if you want to ensure some quality. If I knew Google+ was a 100% sure bet, now _that_ would worry me.

Comment Re:So much for "unbreakable" (Score 1) 86

Which part of "The existence of one-way functions would imply P != NP" don't you understand? On the other hand, P != NP doesn't imply the existence of one-way functions (one-wayness is a _stronger_ assumption), so no, even P != NP doesn't give you easy encryption (as far as our knowledge goes today). And ElGamal's security rests on an _even stronger_ assumption - that modular exponentiation is a one-way function, i.e. that the discrete logarithm is hard - and there's no simple reason to believe that's true.
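To make the asymmetry concrete, here's a toy sketch of my own (parameters deliberately tiny - real ElGamal uses primes of ~2048 bits): the forward direction is a handful of multiplications, while the only generic classical inverse we can write down here is exhaustive search.

```python
# Toy illustration (my own, not from the thread) of the discrete-log
# asymmetry: computing g^x mod p is cheap, inverting it is not.
p, g = 104729, 2  # a small prime (the 10000th) and a small base

def forward(x: int) -> int:
    """Fast: square-and-multiply needs only O(log x) multiplications."""
    return pow(g, x, p)

def invert_bruteforce(y: int) -> int:
    """Slow: try every exponent until one matches."""
    acc = 1
    for x in range(p):
        if acc == y:
            return x
        acc = acc * g % p
    raise ValueError("y is not a power of g modulo p")

y = forward(5000)
x = invert_bruteforce(y)
assert pow(g, x, p) == y  # inverting took thousands of steps; forward took ~13
```

Note the hedge in the comment text: nothing here *proves* inversion is hard - there are better-than-brute-force classical algorithms (baby-step/giant-step, index calculus), just no polynomial-time one known. That unproven gap is exactly the assumption.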

What physical theory was disproven? The principles of Quantum Information Theory never changed. There are extensions taking relativity into account, but they're irrelevant, just as Newton's laws of motion are still valid for all intents and purposes of a mechanical lock. Even Quantum Mechanics, of which QI is a very simple subset, is settled and still agrees with modern theories under low gravity and the like. The fact that new theories are invented with "Quantum" in their names doesn't mean that QM changed. You could just as well say that ElGamal is shaky because maybe there's a law of physics that makes all ciphertexts magically appear in plaintext on my screen.

From Wikipedia: "BB84 is the first quantum cryptography protocol." It may not be encryption, since you don't get a classical ciphertext, but it definitely serves the purpose of exchanging information while assuring its unreadability to third parties. Exchanging secrets. Cryptography. Please.

Comment Re:So much for "unbreakable" (Score 1) 86

ElGamal's proof rests on the Diffie–Hellman assumptions, which are quite strong. Actually, the security of every modern asymmetric-key encryption algorithm would imply the existence of one-way functions, which in turn would imply P != NP - and as far as my (outdated) information goes, we don't have a proof of that yet. But even if I trusted P != NP, there are a lot of other ways the stronger assumptions could fail - e.g. maybe your particular key is one of the 10% that are easy to invert.

I'm not sure why you say "there's no crypto" and call it quantum signaling - BB84 is obviously an encryption protocol. Maybe you were thinking about communication via entangled states, which is also provably secure (assuming basic quantum information principles and some things about the physical detectors), but which as a protocol is simple and relies almost solely on sending entangled pairs.

A 1024-qubit quantum computer _will_ give you an exponential advantage in RSA-breaking (compared to the classical algorithms we know) even if the key is longer - the algorithm might get more complicated, but there obviously are things a QC can do in reasonable time that a classical computer can't. Regardless of that - if your encryption protocol assumes nobody will have a 5000-qubit quantum computer in fifty years, then it has a weakness. When Enigmas were in use, do you think anyone thought the Bombes - massive electromechanical devices capable of doing a massive analytical job - were possible? One more thing: a 5000-qubit computer is most probably easy to build once you know how to build a 1024-qubit one - the difficulty isn't exponential; the only problem is finding a solution to decoherence that scales, one without an inherent limitation.

The BB84 quantum encryption protocol (invented in 1984) is already provably secure, assuming basic quantum information principles and some detector reliability (we don't assume detectors are perfect - BB84 takes into account all kinds of noise on detectors and emitters; noise is always assumed to be caused by an eavesdropping attempt, and we account for the possibility of the enemy having parts more perfect than anything we could produce). The "basic QI principles" are essentially the no-cloning principle (a bit more precisely, the principle that measurements and evolutions are described by the standard quantum formalism) - which is something I'd trust a lot more than even P != NP. Saying it could be wishful thinking is like saying Newton's laws of motion are wishful thinking. A mechanical lock still works just as advertised; quantum mechanics or general relativity won't help you break it. Maybe particles do clone when you go outside QM - e.g. the universe's expansion may create particles - but making that an exploit would require you to control the universe's expansion :)
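The noise-as-attack accounting can be seen even in a classical toy simulation of BB84's sifting step (my own sketch, not the full protocol): Alice prepares qubits in random bases, Bob measures in random bases, and they publicly keep only the positions where the bases matched. With no eavesdropper the sifted bits agree perfectly; a naive intercept-resend attacker disturbs roughly 25% of them.

```python
import random

def bb84_sift(n: int, eavesdrop: bool = False, rng=random):
    """Simulate BB84 sifting; returns (sifted key length, error count)."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    bob_bits = []
    for bit, basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # Eve measures; a wrong basis randomizes the bit she reads,
            # and the qubit she resends is now prepared in *her* basis.
            bit = bit if e_basis == basis else rng.randint(0, 1)
            basis = e_basis
        # Bob: a matching basis yields the qubit's bit, otherwise a coin flip.
        bob_bits.append(bit if b_basis == basis else rng.randint(0, 1))

    # Publicly compare bases; keep only the matching positions.
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

kept, errs = bb84_sift(2000)
print(kept, errs)  # errs is 0 without an eavesdropper
```

Run it with `eavesdrop=True` and the error count jumps to ~25% of the sifted key - which is how Alice and Bob detect Eve by sacrificing a sample of their bits. (The unconditional security proof is, of course, about the quantum states themselves, not this classical cartoon.)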
The quantum exploits you see are caused by attempts to make the protocols more practical - of course there's a ton of problems in taking the theory into practice. But saying "quantum information theory is shaky" is crazier than saying "P = NP", and way crazier than saying "there might be a fast algorithm for the discrete logarithm for certain primes we don't know of yet".

Comment Re:So much for "unbreakable" (Score 1) 86

I wonder: what principle of quantum information has ever changed? Can you give any example of a decades-old principle of theoretical physics that looks silly today? Theories embrace new details, and the underlying interpretation and math can change totally, but under 'normal' conditions (low gravity, low speeds, or macroscopic scales, depending on the theory) they converge to the classical principles. So all you need to assume in a quantum cryptosystem is its pretty simple, old principles, plus "Eve doesn't have a super mega flexible neutron star constellation with her".

Comment Re:What, exactly, is 3-SAT? (Score 1) 700

3-SAT: for a given formula of the form
(x[1] or !x[5] or x[2]) and (!x[1] or x[3] or x[6]) and ...
tell me whether there is an assignment x[] that makes the formula true.

So it's SAT with the (or) clauses limited to three literals each. It turns out to be an equivalent problem.
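A brute-force checker makes the definition concrete (my own sketch; literal +i stands for x[i], -i for !x[i]). Note the loop is over all 2^n assignments - the whole P vs NP question is whether anything fundamentally better exists.

```python
from itertools import product

def satisfiable(formula, n_vars):
    """Brute-force 3-SAT: formula is a list of clauses of three literals,
    where literal +i means x[i] and -i means !x[i]."""
    for bits in product([False, True], repeat=n_vars):
        x = (None,) + bits  # 1-indexed to match the x[1], x[2], ... notation
        # The formula is an AND of clauses; each clause is an OR of literals.
        if all(any(x[lit] if lit > 0 else not x[-lit] for lit in clause)
               for clause in formula):
            return True
    return False

# (x[1] or !x[2] or x[3]) and (!x[1] or x[2] or x[3])
print(satisfiable([(1, -2, 3), (-1, 2, 3)], 3))  # True
```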

What's important is that if 3-SAT is solvable in polynomial time, all NP problems can be solved in polynomial time. A pretty big part of complexity theory would then collapse into trivial equalities - but that would only be sad for the scientists :)

Comment Re:Lower emissions? (Score 1) 317

OK, thanks - I hadn't thought about it. But honestly, I still don't think the main advantage of self-driving cars for humanity is lower emissions. The concept could totally change the way we commute, but the headline focuses on something about as important as the windshield's shape (almost a car analogy ;) ).

Another (maybe riskier) thought: wouldn't it be a thousand times "greener" if that effort were focused on fixing the problems with trains in my country, for example? Or even on an ad campaign. That could make a significant number of people switch from airplanes, which are, to my knowledge, far more problematic if we count emissions. Disproportionately more.

I'm asking seriously - I stopped denying GW a few years ago (it was more "weak a priori belief in views inherited from my parents" than denial); maybe I'll change my mind on this matter too ;)

Comment Re:Lower emissions? (Score 1) 317

OK, but the Slashdot submitter wasn't forced to duplicate that in the headline, for example. Ideally, research money wouldn't be granted for "the Only Problem The World Ever Had and Will Have", but I'm not going to change all those socio-political mechanisms. What I can do is remember what is important and real, and write articles accordingly. Leave the political mumbo-jumbo where it belongs :P

Comment AI researchers should be more modest (Score 5, Insightful) 271

It's like a 15th-century man trying to simulate a PC by putting a candle behind colored glass and calling it a display screen. People often think AI is getting really smart and that, e.g., human translators are becoming obsolete (a friend of mine was actually worried about her future as a linguist). But there is a fundamental barrier between that and the current state of automatic German->English translation (remember that article some time ago?), with error rates unacceptable for anything but personal usage.
Some researchers claim we can simulate the intelligent parts of the human brain - I claim we can't simulate an average mouse (i.e. one that would survive long enough under real-life conditions), probably not even its sight.
There's nothing interesting about this 'dreaming' as long as the algorithm can't really manipulate abstract concepts. Automatic translation is a surprisingly good test for that. Protip: automatically dismiss any article like this if it doesn't mention actual progress in practical applications, or at least modestly admit that it's more of an artistic endeavour than anything else.

Comment Re:Cycle my ass ... (Score 3, Insightful) 167

And even if you're in extreme denial, you can gather evidence of the cycle yourself, if you've got the patience to take a look at the sun (filtered/projected) and note the sunspot number for ~20 years. There's data going back to 1750. Sunspots are correlated with auroras, so checking the effects of solar activity is also within the reach of a human with no modern equipment.

Comment Re:Every baby I know of gets a prick on the heel (Score 1) 544

- many posts up there are getting modded "insightful" for predicting that we'll turn into Gattaca if a law allows storing 26 numbers from everybody's DNA
- actually (partly since 1960?), the law requires storing the full specimen for a period of time, and in some states allows keeping it indefinitely
- (which those insightful prophets, while being extremely cautious, seem to ignore)
- and yet it's not like we're turning into an abominable dystopia now
I'm sorry, I sincerely don't understand. Could someone explain this to me?

One Quarter of Germans Happy To Have Chip Implants 170

justice4all writes "If it means shorter lines at the supermarket, a quarter of Germans would be happy to have a chip implanted under their skin. The head of Germany's main IT trade body told the audience at the opening ceremony of the CeBIT technology exhibition that one in four of his countrymen are happy to have a microchip inserted for ID purposes."

Comment Re:Dr. Zen's answer (Score 1) 951

In my opinion more users would benefit from a message like
File operation failed - please check permissions and try closing other programs that use the file. [Retry] [Cancel]

1. If the user understands what file permissions are, he'll immediately know what to do, instead of reading about what the 'preferences file' is.
2. If he doesn't understand and wouldn't read the long message, now at least he might get an idea.
3. If he doesn't understand but would have read the long one, he'll have to call tech support now - but I don't believe he wouldn't have called anyway.
4. The number is of no use to the user - send it automatically (there are exceptions, of course). Placing it in the first sentence just distracts the user.
5. Usually the routine that fails on some file operation has no way of knowing it's the 'application preferences file'. You could throw/catch exceptions, but you'd have to predict all possible errors and write a message for each of them.

If you really have time to write the "more information", you'd be better off writing a function that checks which permissions are missing and which locks are held.
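The kind of helper I mean might look like this (a sketch only - the function name and messages are my own invention, and lock detection is platform-specific, e.g. fcntl on POSIX, so it's omitted here):

```python
import os

def diagnose_file_error(path: str) -> str:
    """Report which access rights are missing for a file, instead of
    showing the user a static 'more information' blurb."""
    if not os.path.exists(path):
        return f"{path}: file does not exist"
    missing = [name for name, mode in (("read", os.R_OK), ("write", os.W_OK))
               if not os.access(path, mode)]
    if missing:
        return f"{path}: missing {' and '.join(missing)} permission"
    return f"{path}: permissions look fine; another program may be holding a lock"
```

The dialog then shows one concrete sentence ("missing write permission") rather than asking the user to diagnose it themselves.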

Have you ever written messages like that in a real application?
