Comment Re:Clickbait (Score 1) 130

I called it cheating because they violated one of the prime rules of AI (train on a data set that is more or less representative of the data you will test on) and, with it, one of the prime rules of statistics.

But they're not trying to do that. They're trying to debunk the claims of "near human" performance, which they do very nicely by showing that the algorithms make vast numbers of mistakes when the data in is not very, very close to the original data.

They also present a good way of finding amusing failure cases. I'd never thought of optimizing misclassifications to find how and where an algorithm fails.

Comment Re:seems a lot like human vision to me (Score 1) 130

I think I understand... vaguely. To simplify, you're saying it's been trained on a specific dataset, and it chooses whichever image in the dataset the input is most like.

A bit.

It's easier to imagine in 2D. Imagine you have a bunch of height/weight measurements and a label telling you whether each person is overweight. Plot them on a graph, and you will see that in one corner people are generally overweight and in another corner they are not.

If you have a new pair of measurements come along with no label, you could just find the closest height/weight pair and use that. That is in fact a nearest neighbour classifier. It works, except that you need to keep all the original data around.
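As a sanity check, that nearest-neighbour idea fits in a few lines of Python (the measurements and labels below are made up purely for illustration):

```python
# A minimal 1-nearest-neighbour classifier for the height/weight example.
# "Training" is just keeping all the original data around.

def nn_classify(train, query):
    """Return the label of the training point closest to query.

    train: list of ((height_cm, weight_kg), label) pairs
    query: (height_cm, weight_kg) tuple
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    point, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# Made-up measurements with overweight/not-overweight labels.
data = [((180, 70), False), ((160, 55), False),
        ((165, 95), True), ((175, 110), True)]

print(nn_classify(data, (170, 100)))  # -> True (nearest point is overweight)
```

Note the downside the comment mentions: the whole training set has to stay in memory at query time.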

If you imagine taking 1000 points along the two axes (1,000,000 in total) you could classify each of them according to who is nearest. If you do that you can see that there is more or less a line separating the two groups.

Machine learning is generally the process of finding that line, or an approximation to it somehow.

The DNNs don't find the nearest neighbour explicitly: they just tell you which side of the line a given input is on. They also have a bunch of domain-specific knowledge built in, because we know something about the shape of the line, which helps find it. For example, image objects may be scaled up or down in size or distorted in a variety of ways.

Is that about the gist? I'm probably not going to understand things about higher dimensions without a lot of additional information.

The answer is in fact tied into dimensionality. In the 2D example, you can cover the whole space with 1,000,000 points. In 3D to do the same, you need 1,000,000,000. Beyond that the numbers rapidly become completely infeasible.
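A quick back-of-the-envelope in Python makes the blow-up concrete (1000 points per axis is just the figure from the example above):

```python
# Grid points needed to cover a d-dimensional space at 1000 points per axis
# grow as 1000**d -- the curse of dimensionality.
points_per_axis = 1000
for d in (2, 3, 4):
    print(d, points_per_axis ** d)
# 2 -> 1,000,000;  3 -> 1,000,000,000;  4 -> 10**12
# A 256x256 greyscale image lives in a 65,536-dimensional space,
# so exhaustive coverage of image space is utterly hopeless.
```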

Comment Re:The biggest problem is fluid dynamics. (Score 1) 58

Sure, it's an expensive toy - far more than *I* would be willing to pay certainly - but it squirts plastic out of a nozzle to make weak, crude plastic "toys". Arguably useful, especially when you're $4k/pound away from the nearest general store, but not remotely in the same league as the professional-grade printers working in laser-cured resin, sintered titanium, high temperature ceramics, etc.

Stratasys are the single largest 3D printer company and they sell pretty much exclusively to businesses. In other words, they're selling to people who do stuff for money, and only to them. That makes them "professional grade" by definition.

The other ones you mention are much slower to run and much more expensive to boot. Not to mention that the resolution/strength is overkill for many applications. Part of being a professional is knowing how to make the right trade-offs and select the correct tool for the job.

You also missed out the starch powder printers which are even weaker than the FDM ones. Another professional tool due to the expense.

Comment Re:In IT, remember to wash your hands (Score 1) 153

Minivans are what happens when you take a car and stretch it into another vehicle.

Nope. The Nissan Serena was about the same size as a saloon car on the ground. It was nothing at all like a stretched car. In fact it looked more like a van adapted to partial passenger use.

Minivans get crap mileage

I looked at a few CUVs online. They get similar mileage to minivans, at the penalty of holding fewer passengers in comfort and hauling less cargo.

and have crap handling.

Neither of them are race cars. My experience driving minivans is that they provide more than adequate handling for safe operation on normal roads when driven at an appropriate speed for the conditions.

Sure, if you try to hammer round a tight curve well above the speed limit, you'll look like a fool at a much lower speed in a minivan than in a Bugatti.

If you like handling then nothing apart from a dedicated sports car will be adequate.

Comment Re:Clickbait (Score 1) 130

Why was my characterization of their approach "hardly fair"?

You called it cheating.

Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole).

It pretty much is. If you input some data far away from the training set, you'll wind up at a completely arbitrary point relative to the decision boundary.

The research is not at all what it is sold as.

The research shows very nicely that the much-hyped deep learning systems are no different in many ways from everything that's come before. They have a few lovely illustrations of things that fool it, some of which are what you'd get if you follow the decision boundary a good way from the data, rather than jumping in at a random point.

I'd say there's not a huge amount novel in the research, but it's certainly not cheating.

Don't multi-class identification networks typically have independent output ANNs, so that several can have high scores?

My understanding is that they usually have one output node per class, but the previous layers are all common to the different classes.

I assumed, perhaps incorrectly, that the 99+% measures they cited were cases where only one output class had a high score, and the rest were low.

I'd expect that too.

If they were effectively using single-class identifiers, either in fact or by considering only the maximum score in a multi-class identifier,

Isn't that usually how it's done? You have a bunch of outputs whose strength indicates class/not-class for a bunch of classes, then you take the max over them to find which class is dominant. Most ML algorithms are generalised to multiclass with a one-versus-all or one-versus-one scheme like that (usually the former, since the latter has a quadratic cost).

Only relatively few algorithms (e.g. trees, and therefore forests) naturally support multiple classes.
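The one-versus-all decision rule is trivial to sketch (class names and scores below are invented for illustration):

```python
from math import comb

# One-versus-all: one output score per class, pick the argmax.
def predict(scores):
    """scores: dict mapping class name -> output strength."""
    return max(scores, key=scores.get)

outputs = {"cat": 0.12, "dog": 0.97, "car": 0.05}
print(predict(outputs))  # -> dog

# Cost comparison: one-vs-all needs k classifiers,
# one-vs-one needs k*(k-1)/2 -- quadratic in the class count.
k = 10
print(k, comb(k, 2))  # -> 10 45
```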

Comment Re:Image processing; LIDAR; ADAS perspective (Score 1) 130

I've done some image processing work. It seems to me that you can take the output of this neural network and correlate it with some other image processing routines, like feature detection, feature metrology, etc.

If you look at the convolutions learned in the bottom layers, you typically end up with a bunch that look awfully like Gabor filters. In other words, it's learning a feature detection stage and already doing that.
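For the curious, a Gabor filter is just a cosine grating under a Gaussian window. A rough numpy construction (simplified: isotropic envelope, zero phase, arbitrary parameter values) looks like this:

```python
import numpy as np

def gabor(size, wavelength, theta, sigma):
    """Build a size x size Gabor kernel: a cosine grating windowed by
    a Gaussian. First-layer DNN convolutions often end up looking
    much like these: oriented edge/texture detectors."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
print(k.shape)  # -> (15, 15)
```

Varying theta gives the different orientations you see tiled across a trained first layer.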

Some sort of depth sensing certainly does help.

Comment Re:Clickbait (Score 1) 130

The researchers also basically cheated by "training" their distractor images on a fixed neural network.

That's hardly fair: they were trying to find images that fooled the network. What better way to do that than feeding images in until you find a good one (with derivatives).
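The same effect shows up even with dumb random hill-climbing against a frozen toy "classifier" (everything below is an illustrative stand-in for a fixed network, not the paper's actual gradient-based setup):

```python
import math
import random

random.seed(0)
W = [random.gauss(0, 1) for _ in range(64)]  # frozen toy "network" weights

def confidence(img):
    """Sigmoid confidence of a fixed linear model -- our stand-in DNN."""
    s = sum(w * p for w, p in zip(W, img))
    return 1 / (1 + math.exp(-s))

# Random hill-climbing: nudge a pixel, keep the change if confidence rises.
img = [random.random() for _ in range(64)]
for _ in range(2000):
    candidate = img[:]
    i = random.randrange(64)
    candidate[i] = min(1.0, max(0.0, candidate[i] + random.gauss(0, 0.3)))
    if confidence(candidate) >= confidence(img):
        img = candidate

print(confidence(img))  # driven very close to 1.0 -- on a noise image
```

The "image" it converges to is meaningless noise, yet the fixed model is certain about it.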

The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not make any mistake, by assigning a low probability for all classes).

Probably not, but it would still classify the images as something random, probably with high confidence.

and perhaps not make any mistake, by assigning a low probability for all classes

Not likely: there are no good ways yet for these systems to return such information when the input is very far away from a decision boundary. A way of doing that reliably would be a significant breakthrough.

Comment Re:So, useless then? (Score 1) 130

In the early '80s people were laughing about computers trying to play chess.

Were they? I'm not sure they were laughing about it. By the early 90s you could buy rather slick chess computers which had a board with sensors under each square (pressure in the cheap ones, magnetic in the fancy ones), and LEDs up each side to indicate row/column.

You could play them at chess and they'd tell you their moves by flashing the row/column lights. Those weren't just programs by that stage; they were full-blown integrated consumer products. Of course they would get thrashed by a sufficiently good player back then.

A concrete idea of a chess-playing computer (people had always imagined such things; the Mechanical Turk was a hoax based on the idea) came up in 1946, when Zuse actually wrote a program for it (untested).

Comment Re:seems a lot like human vision to me (Score 1) 130

The computer isn't trying to find food or avoid predators, so what is it "trying to do" when it "sees"

Fortunately we know this because we (in the general sense) designed the algorithms.

It's trying very specifically to get a good score on the MNIST or ImageNet datasets. Anything far away from the data gives funny results. I'm not being glib. This results in the following:

One generally assumes that the data lies on some low-dimensional manifold of the 65,536-dimensional space (for 256x256 greyscale images). This is reasonable: a 256^2-dimensional space is very, very large.

A neural net essentially warps the crap out of the space, projects up into higher dimensions, warps the crap out of it again (and so on) and eventually places down a linear classifier. Things one side of a hyperplane belong to one class, things the other side belong to another class.

Or, if you prefer, it places some curved decision boundary down in the original space.

Things that are close to the decision boundary generally get low confidence, because it is hard to decide which side of the boundary they really lie on.

Points far, far away from the boundary are classified with a high confidence because there is no ambiguity. Because it's far away you can move the datapoint around quite a bit and it will STILL be the same side of the boundary.

The thing is, the algorithm only optimizes the boundary near the datapoints it's trained with, because that's what it's trying to do: optimize the performance on the training data.

If you generate a random datapoint, it will be far, far away from the manifold that the training data lies on, and therefore likely far, far away from the decision boundary. As a result, it winds up in a completely arbitrary class but with really high confidence.
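You can see the "arbitrary class, high confidence" effect directly from the sigmoid: for a linear boundary, confidence is a function of signed distance alone, so a garbage point that happens to land far out gets near-total certainty. A tiny sketch:

```python
import math

def conf(distance):
    """Sigmoid confidence as a function of signed distance
    from a linear decision boundary."""
    return 1 / (1 + math.exp(-distance))

for d in (0.1, 2.0, 10.0):
    print(d, conf(d))
# 0.1  -> ~0.52    (near the boundary: genuinely uncertain)
# 10.0 -> ~0.99995 (far away: "certain", whether the point is
#                   a real image or random noise)
```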

People have made efforts to figure out when a point is too far away from anything and classify it as "unknown". However, this is tricky. Firstly, NNs and other learning algorithms, like SVMs and boosting (i.e. anything involving a linear classifier in a warped space), try to push the training datapoints as far from the boundary as possible, because points too near it are classified with low certainty.

Secondly, high dimensional spaces are unimaginably sparse so there's the rather irritating tendency for nothing to be near anything else.
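That sparseness is easy to demonstrate: pairwise distances between uniform random points concentrate around roughly sqrt(d/6), so in high dimensions everything is about the same large distance from everything else (n=50 sample points chosen arbitrarily):

```python
import math
import random

random.seed(1)

def mean_pairwise_dist(d, n=50):
    """Mean Euclidean distance between n uniform random points in [0,1]^d."""
    pts = [[random.random() for _ in range(d)] for _ in range(n)]
    dists = [math.dist(a, b)
             for i, a in enumerate(pts) for b in pts[i + 1:]]
    return sum(dists) / len(dists)

for d in (2, 100, 10000):
    print(d, round(mean_pairwise_dist(d), 2))
# roughly 0.5, 4.1 and 41: nothing is "near" anything else
```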

Comment Re:The biggest problem is fluid dynamics. (Score 1) 58

What on earth do you mean by "toy grade"?

If you mean FDM, then it would be quite hilarious to refer to a Stratasys for example as "toy grade". Either that or you can afford much better toys than me.

The thing that actually went up there also does not look like "toy grade" either:

http://www.nasa.gov/content/in...

Comment Re:In IT, remember to wash your hands (Score 1) 153

Minivans are dying. They have turned out to be a fad. They are being replaced by CUVs. It turns out almost nobody actually wanted to carry cargo and a lot of passengers, and a minivan is half-assed at both.

Huh? Minivans are excellent at both. The only things better at hauling cargo are vans, and perhaps trucks, and both suck at hauling people.

I'd say CUVs are a fad. They're for people who need a minivan but feel emasculated by not owning some ludicrous SUV.

I drove a minivan (Nissan Serena) for a few years when I was a kid. It was an excellent vehicle. I enjoyed driving it (nice high driving position, huge mirrors), and it held 6 adults in comfort or could carry a crapload of stuff.

And fitted into the same size parking space as a mid size saloon car.

What's not to like?

Comment Wasn't it because...? (Score 4, Informative) 138

6005.99999 years ago, one of them flipped God the bird, and so He did Smite them, and lo, their teeth were no more, and there was lamentation and suffering.

Also, beaks are much lighter than teeth, which was probably a significant factor.

Also also, if you're thinking about mammal teeth, you're probably imagining it wrong. One of the unique things about mammal teeth is their complexity relative to the other branches of the vertebrates. Studying mammal evolution has been described as an exercise in studying teeth.

It's thought this advanced tooth development went hand in hand with the development of warm-bloodedness during the pre-mammal period, as more advanced, interlocking teeth were required to mash up food better for quicker digestion, which in turn was required for a faster metabolism.

Most reptile teeth look primitive by comparison. On the other hand, simple teeth are easily replaceable, so reptiles can regrow lost teeth much more easily (later on, some mammals among the ungulates developed open roots for continuous growth, which was useful for grazers, whereas others have a large stock of teeth and then starve to death when they run out). The specialisation of mammal teeth makes such replacement much harder.

It seems likely that birds did not have the great teeth needed to support a warm-blooded metabolism, but rather the simple, robust, general-purpose teeth of other reptiles, so in this sense they were not losing nearly as much. They also solved the grinding problem in a different way, using a gizzard (this may well predate birds: crocs have gizzards as well, and it is speculated that some dinosaurs did too). As a result they were replacing the bit that grips and possibly does some initial cutting of food with a much more lightweight structure.

Comment Re:Similar to Affirmative Action - a white man (Score 1) 307

And the other half of this is that students who not only have the pre-requisites but have already learned the course material should be able to test out. Perhaps required to test out,

Possibly. Might just be easier to tell them that it's an optional catch up course for those not already up to speed. Students rarely take optional catch up courses if they don't need them.

No need to faff with burdening everyone with extra exams.
