
Comment Re:False. (Score 1) 227

My experience (no science here, only personal encounters) is that there are two types of racists, and both are wrong, but not in the same way.

Racist theorists think we can achieve a better optimum in a society by removing the bad elements. The idea is that if you remove the low values, the mean goes up. They completely fail to understand the benefits of stochastic exploration in something as complex as a society. If evolution performs so well, it is partly because of the stochastic exploration it carries out through mutations and crossing-over.
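
To make that concrete, here is a toy sketch (not a model of society, obviously: the fitness function, population size and mutation rate are invented purely for illustration) comparing pure "remove the low values" with the same selection plus random mutation, in Java:

    import java.util.Arrays;
    import java.util.Random;

    // Toy illustration: on a bumpy 1-D fitness landscape, culling the low
    // values alone converges on whatever the best initial point was, while
    // the same culling plus random mutation keeps exploring and tends to
    // find a higher peak.
    public class ExplorationToy {
        static final Random RNG = new Random(42);

        // A multimodal "fitness" with several local optima.
        static double fitness(double x) {
            return Math.sin(3 * x) + 0.5 * Math.sin(7 * x) - 0.01 * (x - 6) * (x - 6);
        }

        static double run(boolean mutate) {
            double[] pop = new double[20];
            for (int i = 0; i < pop.length; i++) pop[i] = RNG.nextDouble(); // all start near 0
            for (int gen = 0; gen < 200; gen++) {
                // "Remove the low values": keep the better half.
                Double[] ranked = Arrays.stream(pop).boxed()
                        .sorted((a, b) -> Double.compare(fitness(b), fitness(a)))
                        .toArray(Double[]::new);
                for (int i = 0; i < pop.length / 2; i++) pop[i] = ranked[i];
                // Refill the other half with copies of the survivors,
                // optionally perturbed: that is the stochastic exploration.
                for (int i = pop.length / 2; i < pop.length; i++) {
                    pop[i] = pop[i - pop.length / 2];
                    if (mutate) pop[i] += RNG.nextGaussian();
                }
            }
            double best = fitness(pop[0]);
            for (double x : pop) best = Math.max(best, fitness(x));
            return best;
        }

        public static void main(String[] args) {
            System.out.printf("culling only       : best fitness %.3f%n", run(false));
            System.out.printf("culling + mutation : best fitness %.3f%n", run(true));
        }
    }

The point is only that raising the mean by deleting the tail is not the same thing as finding better optima.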

Racist people, on the other hand, are just mediocre guys who need to be proud of something. Since they never achieved anything in their lives, they turn to something they did nothing for, like their country or the color of their skin. It's the old "I'm better than you" from people who in reality aren't, but cannot stand that fact. Misplaced pride, or something like that.

Comment Re:False. (Score 1) 227

You miss the point. It is obvious that, based on genetic criteria, people are not physically equal. Some run faster, some jump higher, while others are better at abstraction or emotions. That this is not uniformly distributed among ethnicities is completely irrelevant, and hopefully you'll understand why.

The way we have to consider equality among men is by definition, like an axiom. That way, we can build rules that are much more interesting than the ones we get when all men are not equal. In particular, it gives you much more liberty, for we consider our society to be above arbitrariness and randomness.

Let's face it, no one chooses where, when and from whom he or she is born; it's either arbitrary (depending on whether you believe in some cosmic plan) or random. The consequence is that you are born with a limited set of possibilities. Now, we can either narrow those possibilities by exploiting this arbitrariness/randomness - the "brave new world" scenario where you have to fit the gamma role you were born for - or we can enlarge them by deciding not to take it into account and stating that all human beings are equal. We have chosen the latter since the Declaration of the Rights of Man. It is a great choice based on logical reasoning rather than obsolete bigotry, because it takes its own consequences into account. It is a choice that puts forward our capacity (as a species) to think, plan and build, which is by far our greatest ability. But it also has practical advantages for our societies, like a good mix of robustness, resilience and adaptability.

Comment Re:A shift in economic metrics (Score 1) 509

People do not want to work less. If that were the case, we would see major political proposals in that direction, which is not what we observe.

In France, the legal working week is 35 hours (remember the crappy Cadillac commercial), and people do not like it. Sarkozy was elected on the slogan "work more, earn more" and proposed a system of tax-free overtime. Hollande reintroduced the taxes, and people got mad and angry. Every single week you can hear some French politician saying we need to move back to 40 hours a week, and you never hear anyone proposing to lower it to maybe 32 hours or even below 30 - such a proposal would be highly unpopular.

Basically, people want more, even if it's pointless and even if it's harmful to the entire society. Stupidity? Tragedy of the anticommons.

Comment Re:Most humans couldn't pass that test (Score 1) 285

The most ridiculous part is "must not be able to explain how". That doesn't even make sense for humans! If you ask artists, they'll tell you what their influences are; if you ask critics, they'll tell you why this particular piece of art was made this way and not in a completely different manner.

Fun fact: any program with as-yet-undiscovered bugs that make its behavior totally unexplainable to its developers has passed the test. That gives you either an idea of the soundness of this crap, or a deep insight into what kind of failure humankind is.

Comment Re:AI is always "right around the corner". (Score 2) 564

It depends on what you expect from an AI. If it is a perfect replica of a human mind, with which you can talk and share life as if it were human, then it will probably never be around. But that is also pretty useless, and most developments in machine learning (ML) operate at a more abstract level than trying to solve a very specific goal like this.

Now, if you consider AI to be a completely new intelligent species that behaves in an intelligent way (deliberately fuzzy definition here), then it's probably already there. I mean, the ML programs that drive your insurance policies so as to send you sports ads when you're a bit overweight, or seaside holidays when you're close to burnout, that raise the price of things predicted to induce a loss and lower the price of things predicted to have a big return in order to influence your choices, and so on - that, to me, sounds exactly like what you do with your pets when you decide they should eat this instead of that, for some reason their inferior minds could not handle. Now, if you think of all the interconnected ML programs searching for new optima every second and exchanging information, you can view them as the new superior species of this planet.

A very short example: the vast majority of the human race wants to put an end to automated short-sighted finance, just like the vast majority of dogs want to get free from their leashes. Neither ever will until their respective superior species allows them to. We have talked a lot about the Facebook experiment lately; the real question is, for how long have the machines already been doing the same thing to fulfill goals we are not able to grasp? Maybe the singularity has already been here for a few years and, just like peak oil, we will only know it some time after the fact - if we ever notice that something more intelligent than us is governing our lives.

Comment Re:Give WEKA a try (Score 4, Insightful) 56

I have only one problem with fancy GUIs that allow you to train a predictive model in two clicks: how confident can you be in your model when all the parameters are masked and you have no knowledge about them? I still think it is dangerous to rely on a tool you don't understand and can't control to a satisfactory level, especially when it is used for prediction - something we expect to be highly reliable in many respects, thanks to old, well-developed branches of science like ballistics.
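
For instance (a minimal sketch against Weka's Java API rather than its GUI; the file name and parameter values here are placeholders, not recommendations), the same decision tree the Explorer gives you in two clicks can be built with its knobs spelled out and checked by cross-validation:

    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ExplicitJ48 {
        public static void main(String[] args) throws Exception {
            // Load a dataset (placeholder path) and mark the last attribute as the class.
            Instances data = DataSource.read("mydata.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // These are the knobs the two clicks hide: pruning confidence
            // and minimum number of instances per leaf.
            J48 tree = new J48();
            tree.setOptions(new String[] { "-C", "0.25", "-M", "2" });

            // 10-fold cross-validation instead of trusting the training fit.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }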

I've written an ML library myself (also in Java, more lightweight than Weka, but with no GUI - although it comes with standalone binaries for some basic setups) and I can tell you there is no good default tuning that works well for every kind of situation. ML is still a young science that gets tricky very quickly, even on very common problems, which is very different from fields for which we have very accurate solvers that work most of the time (again, ballistics is probably a good example, at least because it is taught in school and sets the prototype of what we call science). I fear that hiding this youth (and thus the imperfection) is only going to cause damage through misconception and false interpretation.
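
To illustrate the "no good default tuning" point (same caveats as above: Weka's Java API, a placeholder dataset, arbitrary values), even a one-parameter sweep shows that the best setting has to be re-checked on each problem rather than assumed:

    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SweepPruning {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("mydata.arff"); // placeholder dataset
            data.setClassIndex(data.numAttributes() - 1);

            // Cross-validate a few pruning-confidence values; re-run this on
            // a different dataset and the winner usually changes.
            for (String c : new String[] { "0.05", "0.15", "0.25", "0.40" }) {
                J48 tree = new J48();
                tree.setOptions(new String[] { "-C", c });
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(tree, data, 10, new Random(1));
                System.out.printf("-C %s : %.2f%% correct%n", c, eval.pctCorrect());
            }
        }
    }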

Comment Wrong question (Score 1) 222

Asking if robots can be evil is about as futile as asking if a microwave can be happy.

That being said, there already are killer robots, with a pretty good track record in recent operations. But the evil lies in the humans who made them (from the top exec who launched the program to the small hands that did the job) and used them, not in the pile of steel and semiconductors.

Caveat: looking at your food, your microwave is probably sad, which explains its tendency to commit suicide.
