
Comment Re:How about mandatory felony sentences instead? (Score 1) 420

Is there evidence against the efficacy of a mandatory interlock program? On the other hand, there is plenty of evidence that harsh sentencing for other drug-related crimes does not work.

Reserve the harsher punishments for anyone who violates one of these restrictions, or who facilitates such a violation (the weakest link I see in this proposal is the loaning of cars by relatives and friends).

Comment Re: Rossi (Score 1) 183

The first sentence in the Wikipedia article: "Andrea Rossi (born 3 June 1950) is an Italian convicted fraudster, inventor and entrepreneur." (Though the footnote to "fraudster" indicates he was ultimately acquitted, on what appears to be a technicality, of the major charges relating to an alleged oil-from-trash scam.) The best you can say about E-Cat is that Rossi seems to be doing everything possible to make it look like a scam (Starts With a Bang).

Rossi's E-Cat was the first thing I thought of when I read of Gates' trip to Italy, but he was apparently visiting the Frascati ENEA labs of the University of Verona, which are "recognized for excellence in [cold] nuclear fusion research", whatever that means. I do not know if they have any connection to Rossi.

Comment Technically Illiterate (Score 3, Informative) 183

The 'Tech Metals Insider' article contains a link to what it describes as another of its articles on Low Energy Nuclear Reactors, but it is actually about the hohlraums used in some inertial-confinement laser fusion research. The author is apparently unaware that this is a very different technology, and so cannot be regarded as a reliable guide on the subject.

Comment Depth Limit for Fish (Score 5, Informative) 33

A recent article in New Scientist (paywalled; I don't have an alternative source) suggests that 8 km is about the limit for fish. The problem, apparently, is that the pressure distorts protein shapes, eventually preventing them from working properly. The tissue (particularly muscle) of deep-sea fishes contains trimethylamine oxide, which may protect against this problem, and the deeper you go, the more of it the fish have; but by about 8 km they are saturated with it.

Invertebrates have been found deeper, so presumably they have a different mechanism.

Comment Re:Actually a Great Step Forward (Score 1) 130

Computer learns to pick out salient features to identify images. Then we are shocked that, when trained with no supervision, the salient features aren’t what we would have chosen.

There is a huge difference: humans pick relevant features guided by a deep understanding of the world, while machine learning, unguided by any understanding, only does so by chance.

Now that we know what computers are picking out as salient features, we can modify the algorithms to add constraints on which salient features must or must not be present in an identified object, so that the classification corresponds more closely to how humans would classify objects. Baseballs, for instance, must have curvature, not just zig-zag red lines on white.

Hand-coded fixes are not AI; that would be as if we had a higher-level intelligent agent in our heads to correct our mistakes (see the homunculus fallacy).
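To make concrete what such a hand-coded fix would look like (and why it does not scale), here is a toy Python sketch. Every name in it is hypothetical, and the curvature detector is assumed rather than implemented:

    # Toy sketch of a bolt-on, hand-coded constraint. Hypothetical names;
    # has_curvature() stands in for an unspecified real detector.
    def has_curvature(image_features):
        """Assumed curvature detector; here it just reads a precomputed feature."""
        return image_features.get("curved_edges", 0) > 0

    def constrained_label(raw_label, image_features):
        """Veto a label when a hand-written necessary condition fails."""
        # Rule: a baseball must show curvature, not just zig-zag
        # red lines on white.
        if raw_label == "baseball" and not has_curvature(image_features):
            return "not recognized"
        return raw_label

    # The zig-zag fooling image would now be vetoed:
    print(constrained_label("baseball", {"curved_edges": 0}))  # not recognized

Every rule of this kind is intelligence supplied by the programmer rather than learned by the network, which is the homunculus problem in miniature.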

Comment Re:Not smart or stupid (Score 1) 130

These are computer programs, not artificial intelligences as some have come to think of them. They are simply some charges flipping around in some chips.

And minds are just charges flipping around in some brain (at one level of abstraction it is chemical, but chemistry is explained by the movement of charges).

As John Searle said, brains make minds.

Everything else is just speculating.

If you look at John Searle's arguments in detail, they ultimately end up as nothing more than "I can't believe that this is just physics." Searle's view is actually rather more speculative than the one he rejects, as it implies an unknown extension to atomic physics.

Nevertheless, none of what I write here should be construed as a claim that artificial intelligence has been achieved.

Comment Re:Clickbait (Score 1) 130

So it needs to learn that these exact images are tricks being played on it, so it can safely ignore them.

No. Learning that the "exact images" presented here are tricks would not be a solution to the problem revealed by this study. The goal in any form of machine learning is software that can effectively extrapolate beyond the training set.
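A minimal sketch of the point, using scikit-learn's bundled digits dataset purely for illustration: the number that matters is accuracy on held-out images the model never trained on, and patching its behaviour on a fixed list of known trick images would leave that number untouched.

    # Minimal generalization check: train on one split, score on another.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Performance on unseen data is the measure of extrapolation.
    print("held-out accuracy:", model.score(X_test, y_test))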

What's the story?

Once you understand the problem, you will see what the story is.

Comment Re:Also... (Score 1) 130

If the network was trained to always return a "best match" then it's working correctly. To return "no image", it would need to be trained to be able to return that, just like humans are given feedback when there is no image.

It seems highly unlikely that such an elementary mistake was made: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year."

The fact that the net returns a confidence level implies that it does have a way to return a result of 'not recognized'.
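A minimal sketch of what I mean, assuming a standard softmax output layer (the threshold value is illustrative): "not recognized" is just a floor on the winning confidence.

    import numpy as np

    def softmax(logits):
        """Convert raw class scores to probabilities."""
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    def classify(logits, labels, threshold=0.9):
        """Return the best label, or refuse when confidence is too low."""
        probs = softmax(logits)
        best = int(np.argmax(probs))
        if probs[best] < threshold:
            return "not recognized"
        return labels[best]

    labels = ["baseball", "guitar", "penguin"]
    print(classify(np.array([4.0, 0.5, 0.2]), labels))  # confident: baseball
    print(classify(np.array([1.1, 1.0, 0.9]), labels))  # too close: not recognized

The catch, and apparently the point of the study, is that the fooling images clear any such threshold: the networks assign them confidence as high as they assign to real images.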

Comment Re:Training classifiers require "rejectable" sampl (Score 1) 130

The DNN examples were apparently trained to discriminate between members of a labeled set. This only works when you have already cleaned up the input stream (a priori) and can guarantee that the image must be an example of one of the classes.

These classifiers were not trained on samples from outside the target set.

This is not some network hastily trained by people who are ignorant of a very basic and long-known problem: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year." From a paper by the developers of AlexNet: "To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective."

It does not seem plausible that this result can be explained away as an elementary mistake.
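For what it's worth, the remedy the parent describes is simple enough to sketch, which is one more reason to doubt it was simply overlooked. A toy version with synthetic 2-D points (all numbers are illustrative; a real pipeline would use out-of-distribution images, not uniform noise):

    # Toy sketch: train with an explicit reject class (label 2) so that
    # junk inputs are not forced into the nearest target label.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    class_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(100, 2))
    class_b = rng.normal(loc=(3.0, 3.0), scale=0.3, size=(100, 2))
    junk = rng.uniform(low=-6.0, high=9.0, size=(200, 2))  # "rejectable" samples

    X = np.vstack([class_a, class_b, junk])
    y = np.array([0] * 100 + [1] * 100 + [2] * 200)

    clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

    # A point far from both target clusters maps to the reject class
    # instead of a spurious "best match" among the target labels.
    print(clf.predict([[-5.0, 8.0]]))  # expected: [2]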

Comment Re:Also... (Score 1) 130

Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.

Right, and it has created a great deal of misery throughout human history. Just because it is prevalent does not mean it is not a problem.

More specifically, the overconfidence displayed by the networks here should induce a corresponding skepticism, in a rational observer, toward the notion that they have cracked the image-recognition problem.

Comment Re:seems a lot like human vision to me (Score 1) 130

I think it was fairly clear what was going on: the neural networks latch on to conditions that are necessary but not sufficient, because they found common characteristics of real images but never got any negative feedback.

You seem to be suggesting that it is 'simply' a case of overfitting, but overfitting is a problem that has been recognized for some time. I don't doubt that the developers of these networks have thought long and hard about the issue, so this study suggests that it is a hard and as-yet unsolved problem in this domain.
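Since overfitting keeps coming up, here is what it looks like in miniature (illustrative numbers only, nothing specific to DNNs): a high-degree polynomial that nearly memorizes its noisy training points does far worse on fresh points from the same curve.

    # Overfitting in miniature: near-zero training error, large test error.
    import numpy as np

    rng = np.random.default_rng(1)
    x_train = rng.uniform(0, 1, 12)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 12)

    # Degree-10 polynomial on 12 points: fits the noise, not the curve.
    coeffs = np.polyfit(x_train, y_train, deg=10)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

    x_test = rng.uniform(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

    print(f"train MSE: {train_mse:.4f}   test MSE: {test_mse:.4f}")

The analogue here would be a network that latches onto necessary-but-insufficient features because nothing in training ever penalized them.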

One thing that humans use, but which these systems do not seem to have developed, is a fairly consistent theory of reality. Combine this with analytical reasoning, and we can take a difficult image and either work out what is being depicted, or realize that we are not succeeding in doing so.
