
Comment Re:This research should receive enormous funding. (Score 1) 202

IANAP, but I've hypothesized that you could also say that the slits influence the individual particle at the same time. That is, the particle isn't interfering with itself, but is rather being 'interfered with' (I know) by both slits.

If the particles that make up the edges of the slits (or the absence of those particles in the slits) influence the trajectory of the fired particle in a way that varies in a wave-like manner, the notion of 'interferes with itself' wouldn't be required to explain the resulting patterns. Again, IANAP; I'm just visualizing the elements of the slits as exerting some varying attraction/repulsion on the particle, and I'm looking for reasons (preferably experimental results) why that visualization should be dismissed.

Comment Re:The most traditional pass time is... (Score 2) 140

The only effective solution is to make [regulation] so simple, that dodging becomes unnecessary.

FTFY.

Regulation (and legalese in general) becomes complex because it has to deal with all the crazy ways that creative, highly motivated, self-interested entities will find to circumvent it.

Don't get me wrong. I'm not saying that more complex regulation is better. Regulation should be as simple as possible. The key to that sentence, and the problem with your understanding of this matter, lies in the last part: 'as possible'. Everybody can yell 'Well, just make every x below parameter y illegal! Problem solved!' until they are confronted with a case for which their simple rule does not solve the problem.

Relevant XKCD: http://xkcd.com/793/

Comment Re:I wonder... (Score 2) 566

There are quite a number of minor changes to the strings in the code (grammar fixes, additions of code comments).

Also, the specific changes you're talking about all concern changing 'English (U.S.) resources' to 'English (United States) resources'. That line is apparently auto-generated by VS: https://www.reddit.com/r/priva...

Or just Google search for it:
https://www.google.com/webhp?s...
https://www.google.com/webhp?s...

Comment Re:Stupid? (Score 1) 339

The human brain runs on about 20 +/- 10 W.

That means that the current collective processing-power requirement of the entire human race is about 7.2 * 10^9 * 20 W = 144 GW.
If you take into account how much of that processing power is wasted on idiotic things and things otherwise useless to society as a whole, how much redundancy there is in that processing and how inefficiently some processing tasks are performed, I'd say that an estimate of 10 GW for equaling the entire human race in processing performance, in a barely optimized manner, isn't far-fetched at all.

So, something like 5 coal-fired power plants or roughly 80 km^2 of solar panels would do the trick.
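For anyone who wants to sanity-check that arithmetic, here's a quick Python sketch (the per-plant output and the solar power density are my own rough assumptions, not established figures):

# Back-of-the-envelope check of the numbers above; all inputs are rough guesses.
WATTS_PER_BRAIN = 20        # ~20 +/- 10 W per human brain
POPULATION = 7.2e9          # world population
COAL_PLANT_W = 2e9          # assumed output of one large coal-fired plant (~2 GW)
SOLAR_W_PER_M2 = 125        # assumed delivered solar output per square meter

total_w = POPULATION * WATTS_PER_BRAIN
print(f"Whole human race: {total_w / 1e9:.0f} GW")                          # 144 GW

optimized_w = 10e9          # the 10 GW guesstimate from above
print(f"Coal plants needed: {optimized_w / COAL_PLANT_W:.0f}")              # ~5
print(f"Solar area needed: {optimized_w / SOLAR_W_PER_M2 / 1e6:.0f} km^2")  # ~80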

Let's face it. As a race, we really suck at putting our processing power to use to progress society as a whole. It's not surprising, though, considering that our processing is tuned to "Don't be eaten by a bear. Kill a bear and eat it. Use the bear skin to impress and get lots of sex."

Comment Re:From many points of data (Score 1) 772

Don't be an asshole. I know that humans are technically animals, but I also know that most people wouldn't regard them as such.

From Wikipedia: "The word "animal" comes from the Latin word animalis, meaning "having breath".[1] In everyday colloquial usage the word incorrectly excludes humans—that is, "animal" is often used to refer only to non-human members of the kingdom Animalia. Sometimes, only closer relatives of humans such as mammals and other vertebrates are meant in colloquial use.[2] The biological definition of the word refers to all members of the kingdom Animalia, encompassing creatures as diverse as sponges, jellyfish, insects, and humans.[3]"

The language of the other questions ('The universe started with a big explosion') implies that the colloquial interpretation of the word 'animal' fits best.

Comment Re:From many points of data (Score 2) 772

Looking at the actual data ( http://www.nsf.gov/statistics/... ), it seems that answering the question in TFS with 'true' correlates positively with 'verbal ability', 'family income', 'formal education', 'science mathematics education' and 'trend factual knowledge of science scale' (whatever that may be), and negatively with 'age'.
The same pattern is visible in the other questions, just more pronounced.

Considering the retarded way the 'uncorrelated' questions were posed, I can imagine that respondents just didn't want to answer them or gave the 'wrong' answer. 'The universe started with a big explosion' is a ridiculous (almost pejorative) mischaracterization of the Big Bang and I would feel very uncomfortable answering 'true' to it.

'Human beings, as we know them today, developed from earlier species of animals' is also questionable, especially due to the addition 'as we know them today' combined with 'of animals'. It implies that the question specifically addresses Homo sapiens. Technically, Homo sapiens evolved from species that most educated people wouldn't regard as 'animals', but as proto-humans. Under this interpretation, 'false' is the correct answer.

Comment Re:But that's not all Snowden did... (Score 1) 348

Do you have some inside information the rest of us don't that tells you how limited it was?

No, but we have the absence of such information in the Snowden leaks, and evidence of the limitless character of U.S. spying. I explicitly included '(or at least seems)' to make clear that I am not sure whether European countries (want to) spy on their allies. But I would be the first to condemn them if evidence of that came out.

Even most Europeans understand that for every major issue where we all (the US and the rest of the world) bitch about the US, the EU is generally at least on par with the US if not far worse.

Don't be ridiculous. You have no idea how much Western Europeans in general look down on the US as a society. The US political system is a terrifying joke, as are the US social security system, the US infrastructure, the US health care system, the US financial industry, and the US education system. Workers' rights are terrible, many children are spoiled brats, obesity is rampant, pretty much everybody is indoctrinated to be 'a patriot' and there is a strange and apparently impenetrable desire for everybody to be able to carry concealed firearms. The US military forces and command are seen as war-mongering imperialists. US protectionism and refusal to cooperate with international treaties such as Kyoto are regarded as appalling. Nobody understands why 'evolution' is even a subject of discussion in the US.

The comment most often heard from Europeans returning from a vacation in the US is 'They really are that fat!'

The things out of the US that are looked at in a positive light are US celebrities, the products of the US entertainment industry and the US technology industry.

What I'm saying is that Europeans in general do not think they have comparable or bigger issues than the US.

Take a look at the anti-immigrant movements there trying to expel various minorities from just about every country in Europe (whatever the flavor of the month is for blaming all societal ills: Roma Gypsies? Turks? West Africans? Pakistanis?)...

Yes, xenophobia is making a comeback in the EU. It started to flare up around September of 2001.
Of course, xenophobia never really goes away, but the heightened tension between Muslim nations and the West really created a boogeyman for both sides (one symbolized by the Qur'an, the other by the American flag). It would probably have happened anyway, but the CIA's meddling in the Middle East (Operation Cyclone etc.) didn't help either.

Besides the (almost unavoidable) prevalence of xenophobia, you've mentioned gay marriage: http://en.wikipedia.org/wiki/R... vs http://en.wikipedia.org/wiki/S...

Besides that, you've mentioned nothing. Honestly, I can't think of much (besides the technology industry) where the EU should look to the US as an example.

Comment Re:But that's not all Snowden did... (Score 1) 348

I think you misunderstand the US view, at least mine.

I commented on neither. I commented on a specific view uttered regularly by some US citizens on Slashdot (and elsewhere). If that is not your view, then my comment does not apply to you.

It would be nice if international folks could be a little more adult about US surveillance... How many americans do you think visit chinese message boards to whine about chinese govenment spying?

How many Europeans or Africans do that? Oh wait, that's right: pretty much none of the inhabitants of those continents speak Mandarin. As it happens, English is the de facto language of the (Western part of the) internet, which means that all message boards in English are essentially international message boards.
You can bet your bottom dollar that there is a lot of animosity towards China on message boards where Mandarin is the main language. It just doesn't originate from Westerners, but from Mandarin-speaking Asians.

You're just going to have to accept that if you speak in a language a lot of people understand, a lot of people will be able to criticize what you say.

Comment Re:But that's not all Snowden did... (Score 3, Informative) 348

Just curious, did any of those citizens of other countries say that it was wrong for THEIR country's intelligence agencies to spy on people from other countries?

The amount of spying on allies by those 'other countries' is (or at least seems) quite limited. Especially compared to the ridiculous dragnet the U.S. has deployed.

I really have to emphasize that the whole 'spying on Americans is wrong, but all other humans on this planet are fair game' is a sentiment that breeds deep, deep resentment. Being friends or allies centers around reciprocity. Guess what 'well, fuck the rest of the world' is reciprocated with?

Comment Re:Already known (Score 1) 230

Possible is quite different from easy. The surprising result is that they've been able to find at least one very similar but misclassified example for every neural network they've looked at. That they were able to find examples does not mean that most, or even many, such images exist.

To be more specific: for each of the trained networks, they used the information about that network to construct the misclassified examples. The 'fool public facial recognition' idea is obviously completely infeasible, unless you have access to what exactly all those public facial recognition neural networks look like (in terms of weights, neuron count and topology) and unless you can ensure that every image of you ends up in the misclassified bin for every neural network that analyses it.

If you look at the original paper, you can see that the misclassified examples were misclassified 5-98% of the time when presented to other networks trained on the same data. In other words, for some of the other networks the adversarial examples hardly posed a problem at all. See Table 4 in the paper.
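To make the 'you need access to the network' point concrete: the paper finds a minimal perturbation with box-constrained L-BFGS, but the core idea can be sketched with a crude gradient step against the trained model's own weights. This is an illustrative simplification, not the authors' exact procedure:

import torch

def adversarial_example(model, image, true_label, step=0.01, iters=10):
    # Crude sketch: nudge the pixels in the direction that increases the
    # classification loss, using gradients computed through the trained
    # network. This is exactly why white-box access (weights, topology)
    # is needed: without it you can't compute these gradients.
    x = image.clone().detach().requires_grad_(True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(iters):
        loss = loss_fn(model(x.unsqueeze(0)), true_label.unsqueeze(0))
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            x += step * x.grad.sign()   # small step that increases the loss
            x.clamp_(0, 1)              # keep a valid image
        x.grad.zero_()
    return x.detach()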

This is a "problem" with neural networks though. We can set up a topology and learning rules, but by the time they're trained, looking at neuron connection weights doesn't really provide any insight into how they make decisions. They're a black box, and that should be scary in any situation where safety is important.

That's not completely true anymore. The whole concept of deep learning is to use multiple layers, in which the first layers are mainly trained to pick up on salient features of (subsets of) the input. Basically: feature detectors. As you move up the layers, the feature detectors encompass a larger part of the input and represent a higher-level abstraction of it (parallels have been drawn with the six-layered structure of the neocortex). The paper has a number of examples of specific feature detectors: the patterns they fire on, plus a manually written description of the collection of patterns ('unit sensitive to round spiky flowers').
See this page for a very simplified illustration: http://theanalyticsstore.com/d...

Mapping the input to specific neurons in a meaningful way is still hard, but it has at least become more doable.
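For what it's worth, here's a toy sketch of what 'looking at a feature detector' can mean in practice: record one hidden layer's activations with a forward hook and see how strongly a given unit fires for a given input. The layer sizes and the hook approach are my own illustrative choices, not anything from the paper:

import torch
import torch.nn as nn

# Toy layer stack: each successive layer sees a wider, more abstract view of
# the input, i.e. acts as a higher-level feature detector.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # low-level features
    nn.Linear(256, 64), nn.ReLU(),    # mid-level features
    nn.Linear(64, 10),                # class scores
)

activations = {}
def save_activation(module, inputs, output):
    activations["hidden"] = output.detach()

model[2].register_forward_hook(save_activation)   # hook the 256->64 layer

x = torch.rand(1, 784)                            # stand-in for an image
model(x)
unit = 7                                          # arbitrary unit to inspect
print("unit", unit, "fires at", activations["hidden"][0, unit].item())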

Comment Re:Errors (Score 4, Interesting) 230

The neural network "problem" they're talking about was while identifying a single image frame

Yes, and even more importantly: they designed an algorithm to generate exactly the images that the network performs badly on. The nature of these images is explained in the paper:

Indeed, if the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case.

A network that generalizes well correctly classifies a large part of the test set. If you had a perfect dog classifier, trained with millions of dog images and tested at 100% accuracy on its test set, it would be really weird if these 'adversarial negatives' still existed. Considering that the networks did not generalize 100%, it isn't at all surprising that they made errors on seemingly easy images (humans would probably have very little trouble getting 100% accuracy on the test sets used). That is simply how artificial neural networks currently perform.

The slightly surprising part is that the misclassified images seem so close to those in the training set. If I'm interpreting the results correctly (IANANNE), what happens is that their algorithm modifies the images in such a way that the feature detectors in the 10-neuron-wide penultimate layer fire just under the threshold required for the final binary classifier to fire.

Maybe the greatest thing about this research is that it contains a new way to automatically increase the size of the training set with these meaningful adversarial examples:

We have successfully trained a two layer 100-100-10 non-convolutional neural network with a test error below 1.2% by keeping a pool of adversarial examples, a random subset of which is continuously replaced by newly generated adversarial examples, and which is mixed into the original training set all the time. For comparison, a network of this size gets to 1.6% errors when regularized by weight decay alone and can be improved to around 1.3% by using carefully applied dropout. A subtle, but essential detail is that adversarial examples are generated for each layer output and are used to train all the layers above. Adversarial examples for the higher layers seem to be more useful than those on the input or lower layers.

It might prove to be much more effective in terms of learning speed than just adding noise to the training samples, as it seems to grow the training set based on the features the network already uses in its classification, rather than on naive noise. In fact, the authors hint at exactly that:

Already, a variety of recent state of the art computer vision models employ input deformations during training for increasing the robustness and convergence speed of the models [9, 13]. These deformations are, however, statistically inefficient, for a given example: they are highly correlated and are drawn from the same distribution throughout the entire training of the model. We propose a scheme to make this process adaptive in a way that exploits the model and its deficiencies in modeling the local space around the training data.
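Read literally, that scheme boils down to a training loop roughly like the sketch below. Everything in it (pool size, replacement fraction, and the make_adversarial/train_one_epoch placeholders) is my own guess at an illustration, not the authors' code:

import random

def train_with_adversarial_pool(model, train_set, make_adversarial, train_one_epoch,
                                epochs=10, pool_size=1000, replace_fraction=0.2):
    # make_adversarial(model, x, y) should return a labelled (x_adv, y) pair and
    # train_one_epoch(model, data) runs one pass of ordinary training; both are
    # placeholders for whatever routines you already have (e.g. the
    # adversarial_example() sketch further up).
    # Seed the pool with adversarial versions of random training examples.
    pool = [make_adversarial(model, x, y)
            for x, y in random.sample(train_set, pool_size)]

    for _ in range(epochs):
        # Continuously replace a random subset of the pool with examples
        # generated against the *current* state of the model.
        for i in random.sample(range(pool_size), int(replace_fraction * pool_size)):
            x, y = random.choice(train_set)
            pool[i] = make_adversarial(model, x, y)

        # Mix the pool into the original training set for this epoch.
        mixed = list(train_set) + pool
        random.shuffle(mixed)
        train_one_epoch(model, mixed)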
