Comment Depth Limit for Fish (Score 5, Informative) 33

A recent article in New Scientist (paywalled, I don't have an alternative) suggests that 8 km is about the limit for fish. The problem, apparently, is that the pressure distorts protein shapes, eventually preventing them from working properly. The tissue (particularly muscle) of deep-sea fishes contains trimethylamine oxide, which may protect against this problem, and the deeper you go, the more of it the fish have, but by about 8 km they are saturated with it.

Invertebrates have been found deeper, so presumably they have a different mechanism.
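
As a back-of-envelope illustration of that saturation argument, a minimal Python sketch; every number in it is my own illustrative assumption (typical seawater osmolality, a guessed linear rise in tissue osmolality with depth), not a figure from the article:

    # Illustrative assumptions only, not values from the article.
    SEAWATER_OSMOLALITY = 1100.0  # mOsm/kg, roughly typical seawater
    SURFACE_OSMOLALITY = 350.0    # mOsm/kg, a shallow marine fish
    RISE_PER_KM = 90.0            # mOsm/kg of TMAO-driven rise per km of depth

    # Depth at which tissue osmolality would match seawater; beyond this,
    # accumulating more trimethylamine oxide buys no further protection.
    limit_km = (SEAWATER_OSMOLALITY - SURFACE_OSMOLALITY) / RISE_PER_KM
    print(f"extrapolated depth limit: {limit_km:.1f} km")  # about 8.3 km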

       

Comment Re:Actually a Great Step Forward (Score 1) 130

Computer learns to pick out salient features to identify images. Then we are shocked that when trained with no supervision the salient features aren’t what we would have chosen.

There is a huge difference: humans pick relevant features guided by a deep understanding of the world, while machine learning, unguided by any understanding, only does so by chance.

Now that we know what computers are picking out as salient features, we can modify the algorithms to add constraints on which salient features must or must not be present in an identified object, so that the classification corresponds more closely to how humans would classify objects. Baseballs, for instance, must have curvature, not just zig-zag red lines on white.

Hand-coded fixes are not AI: that would be as if we had a higher-level intelligent agent in our heads to correct our mistakes (see the homunculus fallacy).
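
For concreteness, the kind of hand-coded gate being discussed might look like this minimal sketch; everything in it (classify, has_curvature, the toy image format) is a hypothetical stand-in, not any real network's API:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Prediction:
        label: str
        confidence: float

    def classify(image) -> Prediction:
        # Hypothetical stand-in for a trained network's top-1 output.
        return Prediction("baseball", 0.99)

    def has_curvature(image) -> bool:
        # Hypothetical stand-in for a curved-contour test; here it only
        # checks that the image is not built from repeating stripes.
        return len({tuple(row) for row in image}) > 2

    # A label is accepted only if every listed predicate holds for the image.
    CONSTRAINTS: dict[str, list[Callable]] = {"baseball": [has_curvature]}

    def constrained_classify(image) -> Prediction:
        pred = classify(image)
        if all(check(image) for check in CONSTRAINTS.get(pred.label, [])):
            return pred
        return Prediction("rejected", 0.0)

    zigzag = [[255] * 8, [0] * 8] * 4    # stripes, no curvature
    print(constrained_classify(zigzag))  # rejected despite 0.99 confidence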

Comment Re:Not smart or stupid (Score 1) 130

These are computer programs, not artificial intelligences as some have come to think of them. They are simply some charges flipping around in some chips.

And minds are just charges flipping around in some brain (at one level of abstraction it is chemical, but chemistry is explained by the movement of charges).

As John Searle said, brains make minds.

Everything else is just speculating.

If you look at John Searle's arguments in detail, they ultimately end up as nothing more than "I can't believe that this is just physics." Searle's view is actually rather more speculative than the one he rejects, as it implies an unknown extension to atomic physics.

Nevertheless, none of what I write here should be construed as a claim that artificial intelligence has been achieved.

Comment Re:Clickbait (Score 1) 130

So it needs to learn that these exact images are tricks being played on it, so it can safely ignore them.

No. Learning that the "exact images" presented here are tricks would not be a solution to the problem revealed by this study. The goal in any form of machine learning is software that can effectively extrapolate beyond the training set.

What's the story?

Once you understand the problem, you will see what the story is.

Comment Re:Also... (Score 1) 130

If the network was trained to always return a "best match" then it's working correctly. To return "no image", it would need to be trained to be able to return that, just like humans are given feedback when there is no image.

It seems highly unlikely that such an elementary mistake was made: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year."

The fact that the net returns a confidence level implies that it does have a way to return a result of 'not recognized'.
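
For illustration, a minimal sketch of that 'not recognized' path via a confidence threshold; the labels, logits, and the 0.9 cutoff are made up for the example, not taken from the network in the study:

    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        z = logits - logits.max()  # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def predict(logits, labels, threshold=0.9):
        probs = softmax(np.asarray(logits, dtype=float))
        best = int(probs.argmax())
        if probs[best] < threshold:
            return "not recognized", float(probs[best])
        return labels[best], float(probs[best])

    labels = ["baseball", "guitar", "penguin"]
    print(predict([2.0, 1.9, 1.8], labels))  # low margin -> not recognized
    print(predict([9.0, 1.0, 0.5], labels))  # confident -> baseball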

Comment Re:Training classifiers require "rejectable" sampl (Score 1) 130

The DNN examples were apparently trained to discriminate between members of a labeled set. This only works when you have already cleaned up the input stream (a priori) and can guarantee that the image must be an example of one of the classes.

These classifiers were not trained on samples from outside the target set.

This is not some network hastily trained by people who are ignorant of a very basic and long-known problem: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year." From a paper by the developers of AlexNet: "To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective."

It does not seem plausible that this result can be explained away as an elementary mistake.
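
To make the parent's point concrete, here is a toy sketch of training with 'rejectable' samples: an explicit 'none of the above' class populated with out-of-set noise. The data and classifier are stand-ins of my own, not the study's setup:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy "images": two in-set classes with distinct mean pixel patterns.
    X_in = np.vstack([rng.normal(+1.0, 1.0, (200, 64)),
                      rng.normal(-1.0, 1.0, (200, 64))])
    y_in = np.array([0] * 200 + [1] * 200)

    # Rejectable samples: structureless noise labeled as class 2 ("none").
    X_none = rng.normal(0.0, 3.0, (200, 64))
    y_none = np.full(200, 2)

    clf = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_in, X_none]), np.concatenate([y_in, y_none]))

    # Fresh noise should now mostly land in the reject class.
    probe = rng.normal(0.0, 3.0, (50, 64))
    print("rejected fraction:", (clf.predict(probe) == 2).mean())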

Comment Re:Also... (Score 1) 130

Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.

Right, and it has created a great deal of misery throughout human history. Just because it is prevalent does not mean it is not a problem.

More specifically, the overconfidence displayed by the networks here should lead a rational observer to a corresponding skepticism toward the notion that they have cracked the image-recognition problem.

Comment Re:seems a lot like human vision to me (Score 1) 130

I think it was fairly clear what was going on: the neural networks latch on to conditions that are necessary but not sufficient, because they found common characteristics of real images but never got any negative feedback.

You seem to be suggesting that it is 'simply' a case of overfitting, but overfitting is a problem that has been recognized for some time. I don't doubt that the developers of these networks have thought long and hard about the issue, so this study suggests that it is a hard and as-yet unsolved problem in this domain.
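
As a reminder of what overfitting looks like, a toy sketch on synthetic data (my own example, unrelated to the networks in the study): the model memorizes its training set while generalizing noticeably worse.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)  # noisy rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier().fit(X_tr, y_tr)  # unconstrained depth

    print("train accuracy:", tree.score(X_tr, y_tr))  # 1.0: memorized noise
    print("test accuracy:", tree.score(X_te, y_te))   # noticeably lower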

One thing that humans use, but which these systems do not seem to have developed, is a fairly consistent theory of reality. Combine this with analytical reasoning, and we can take a difficult image and either work out what is being depicted, or realize that we are not succeeding in doing so.

Submission + - An Online Game of Skill, Played for Money

Capt.Albatross writes: Jason Rohrer, a game developer with an artistic flair (Passage, Sleep is Death), is developing a new game, Cordial Minuet (an anagram of 'demonic ritual'). It is a two-person game of skill, to be played online for money. Rohrer believes that, as a game of skill, it avoids falling foul of U.S. gambling legislation. Emanuel Maiberg's interview of Rohrer discusses the gameplay, Rohrer's steps to avoid legal problems while monetizing it, and whether games of skill avoid the ethical problems of gambling.

Comment Re:Then don't sign the contract (Score 1) 189

This isn't uncommon in industry (it's also not the normal way of things). If we want to be certain that a supplier builds something the right way, we might specify every detail of the tooling, and sometimes buy it and install it ourselves.

I think the fact that Apple did not do so indicates that it did not think there was much chance of success, and was not, by then, expecting (or even much hoping) to ship with a sapphire screen.

Comment Re:Then don't sign the contract (Score 1) 189

My guess is that at some point Apple decided the new manufacturing technology was unlikely to work in their timescale, and was not going to make its plans dependent on it (I imagine this was shortly before it backed out of acquiring the manufacturing equipment.) At this point, GT became the party desperately seeking a deal, and Apple effectively said 'show us, and we will consider it.'
