Comment Re:Most Unbiased Slashdot Gamergate Article (Score 1) 556

You should really look into dropping Gamergate entirely, to divest yourself of its now relatively toxic branding, and creating several focused movements to replace it.

"relatively" toxic branding. I like precision. I can indeed think of things with more toxic branding, but not all that many mind you.

Comment Re:BBC should take a good look at itself first (Score 1) 201

Yep, the BBC is a single homogeneous unit, so the higher-ups protecting Savile during his active years 20 years ago or whatever are EXACTLY the same people as those doing the investigation. On that note, we should arrest the geniuses at the local Apple store for human rights abuses, because they are clearly the same people.

Not to mention their disgraceful one-sided coverage of the Scottish independence referendum this year has left many like myself really not giving much of a shit as to what they have to "report" these days.

Well, Salmond's ludicrous wishlist, er, I mean plan for independence was fatally flawed in many ways. The thing is, the case against was mostly "the case for is really flawed". Which was true. But yeah, journalists should give equal weight to each side. Teach the controversy!

Comment Re:Clickbait (Score 1) 130

I called it cheating because they violated both one of the prime rules of AI (train on a data set that is more or less representative of the data set you will test with) and one of the prime rules of statistics

But they're not trying to do that. They're trying to debunk the claims of "near human" performance, which they do very nicely by showing that the algorithms make vast numbers of mistakes when the input data is not very, very close to the original data.

They also present a good way of finding amusing failure cases. I'd never thought of optimizing misclassifications to find how and where an algorithm fails.

Comment Re:seems a lot like human vision to me (Score 1) 130

I think I understand... vaguely. To simplify, you're saying it's been trained on a specific dataset, and it chooses whichever image in the dataset the input is most like.

A bit.

It's easier to imagine in 2D. Imagine you have a bunch of height/weight measurements and a label telling you whether each person is overweight. Plot them on a graph, and you will see that in one corner people are generally overweight and in another corner they are not.

If you have a new pair of measurements come along with no label, you could just find the closest height/weight pair and use its label. That is in fact a nearest neighbour classifier. It works, except that you need to keep all the original data around.

If you imagine taking 1000 points along each of the two axes (1,000,000 in total) you could classify each of them according to which training point is nearest. If you do that, you can see that there is more or less a line separating the two groups.

Machine learning is generally the process of finding that line, or an approximation to it somehow.
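To make that concrete, here's a minimal sketch of the nearest-neighbour idea in Python; the heights, weights, and labels are invented for illustration, and the grid at the end is the points-along-the-axes trick, just coarser so it runs quickly:

import numpy as np

# Toy data: (height cm, weight kg) pairs with invented labels.
X_train = np.array([[160.0, 95.0], [165.0, 100.0], [170.0, 110.0],
                    [175.0, 65.0], [180.0, 70.0], [185.0, 75.0]])
y_train = np.array([1, 1, 1, 0, 0, 0])  # 1 = overweight, 0 = not

def nearest_neighbour(x):
    """Label a new point with the label of the closest training point."""
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

print(nearest_neighbour(np.array([168.0, 98.0])))  # -> 1

# Classify a coarse grid of points; where the predicted label flips is
# (roughly) the separating line described above.
grid = np.array([[nearest_neighbour(np.array([h, w]))
                  for w in np.linspace(50, 120, 100)]
                 for h in np.linspace(150, 200, 100)])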

The DNNs don't find the nearest neighbour explicitly: they just tell you which side of the line a given input is on. They also have a bunch of domain-specific knowledge built in, because we know something about the shape of the line, which helps find it. For example, image objects may be scaled up or down in size or distorted in a variety of ways.

Is that about the gist? I'm probably not going to understand things about higher dimensions without a lot of additional information.

The answer is in fact tied into dimensionality. In the 2D example, you can cover the whole space with 1,000,000 points. In 3D to do the same, you need 1,000,000,000. Beyond that the numbers rapidly become completely infeasible.
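The blow-up is easy to see with the same 1000-points-per-axis count:

# Grid points needed to cover d dimensions at 1000 steps per axis: 1000**d.
for d in (2, 3, 4, 10):
    print(f"{d}D: {1000**d:.0e} points")
# 2D: 1e+06   3D: 1e+09   4D: 1e+12   10D: 1e+30

For comparison, even a small 224x224 colour image lives in a space with over 150,000 dimensions.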

Comment Re:The biggest problem is fluid dynamics. (Score 1) 58

Sure, it's an expensive toy - far more than *I* would be willing to pay certainly - but it squirts plastic out of a nozzle to make weak, crude plastic "toys". Arguably useful, especially when you're $4k/pound away from the nearest general store, but not remotely in the same league as the professional-grade printers working in laser-cured resin, sintered titanium, high temperature ceramics, etc.

Stratasys are the single largest 3D printer company, and they sell pretty much exclusively to businesses. I.e. they're selling them to people who do stuff for money, and only that. That makes them "professional grade" by definition.

The other ones you mention are much slower to run and much more expensive to boot. Not to mention that the resolution/strength is overkill for many applications. Part of being a professional is knowing how to make the right trade-offs and select the correct tool for the job.

You also missed out the starch powder printers, which are even weaker than the FDM ones. Another professional tool, due to the expense.

Comment Re:In IT, remember to wash your hands (Score 1) 153

Minivans are what happens when you take a car and stretch it into another vehicle.

Nope. The Nissan Serena was about the same size as a saloon car on the ground. It was nothing at all like a stretched car. In fact it looked more like a van adapted to partial passenger use.

Minivans get crap mileage

I looked at a few CUVs online. They get similar mileage to minivans, at the penalty of holding fewer passengers in comfort and hauling less cargo.

and have crap handling.

Neither of them are race cars. My experience driving minivans is that they provide more than adequate handling for safe operation on normal roads when driven at an appropriate speed for the conditions.

Sure, if you try to hammer round a tight curve well above the speed limit, you'll look like a fool at a much lower speed in a minivan than in a Bugatti.

If you like handling, then nothing apart from a dedicated sports car will be adequate.

Comment Re:Clickbait (Score 1) 130

Why was my characterization of their approach "hardly fair"?

You called it cheating.

Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole).

It pretty much is. If you input some data far away from the training set, you'll wind up at a completely arbitrary point relative to the decision boundary.

The research is not at all what it is sold as.

The research shows very nicely that the much-hyped deep learning systems are no different in many ways from everything that's come before. They have a few lovely illustrations of things that fool it, some of which are what you'd get if you follow the decision boundary a good way from the data, rather than jumping in at a random point.

I'd say there's not a huge amount novel in the research, but it's certainly not cheating.

Don't multi-class identification networks typically have independent output ANNs, so that several can have high scores?

My understanding is that they usually have one output node per class, but the previous layers are all common to the different classes.

I assumed, perhaps incorrectly, that the 99+% measures they cited were cases where only one output class had a high score, and the rest were low.

I'd expect that too.

If they were effectively using single-class identifiers, either in fact or by considering only the maximum score in a multi-class identifier,

Isn't that usually how it's done? You have a bunch of outputs, the strength of each indicating class/not class, then you take the max over them to find out which class is dominant. Most ML algorithms are generalised to multiclass using a one-versus-all or one-versus-one scheme like that (usually the former, since the latter has a quadratic cost).
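Schematically, the one-versus-all pattern looks something like this; the per-class scorers here are placeholders for whatever binary models were actually trained:

import numpy as np

def one_vs_all_predict(x, scorers):
    """Run one class-vs-rest scorer per class and take the max.

    `scorers` is a list of callables, one per class, each returning a
    scalar class-vs-rest score for the input x.
    """
    scores = np.array([score(x) for score in scorers])
    return int(np.argmax(scores))  # index of the dominant class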

Only relatively few (e.g. trees, and therefore forests) naturally support multiple classes.

Comment Re:Image processing; LIDAR; ADAS perspective (Score 1) 130

I've done some image processing work. It seems to me that you can take the output of this neural network and correlate it with some other image processing routines, like feature detection, feature metrology, etc.

If you look at the convolutions learned in the bottom layers, you typically end up with a bunch that look awfully like Gabor filters. In other words, it's learning a feature detection stage and already doing that.
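For reference, a Gabor filter is just a sinusoidal carrier under a Gaussian envelope; a minimal numpy construction, with arbitrarily chosen parameters:

import numpy as np

def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """An oriented sinusoid windowed by a Gaussian (one Gabor filter)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)  # carrier direction
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * rotated / wavelength)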

Some sort of depth sensing certainly does help.

Comment Re:Clickbait (Score 1) 130

The researchers also basically cheated by "training" their distractor images on a fixed neural network.

That's hardly fair: they were trying to find images that fooled the network. What better way to do that than to feed images in until you find a good one (guided by derivatives)?
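A minimal sketch of that derivative-guided search, assuming a differentiable PyTorch classifier with a (1, 3, 224, 224) input; the model, shapes, and step size are placeholders, not the paper's actual setup:

import torch

def find_fooling_image(model, target_class, steps=200, lr=0.1):
    """Gradient ascent on a fixed network's score for `target_class`,
    starting from noise. The network never changes; only the image does."""
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    for _ in range(steps):
        score = model(x)[0, target_class]
        score.backward()              # d(score)/d(pixels)
        with torch.no_grad():
            x += lr * x.grad          # step uphill on the class score
            x.clamp_(0, 1)            # keep pixels in a valid range
            x.grad.zero_()
    return x.detach()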

The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not make any mistake, by assigning a low probability for all classes).

Probably not, but it would still classify the images as something random, probably with high confidence.

and perhaps not make any mistake, by assigning a low probability for all classes

Not likely: there's no good way yet for these systems to return such information when the input is very far away from a decision boundary. A way of doing that reliably would be a significant breakthrough.
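Part of the reason is the softmax output layer: it renormalizes whatever raw scores come out into something that looks like a confident probability distribution, however far the input is from anything seen in training. A tiny numpy illustration, with invented logits:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Raw scores for an input far from all training data: large but arbitrary.
print(softmax(np.array([12.0, 3.0, 1.0, -2.0])))
# -> roughly [0.9999, 0.0001, 0.0000, 0.0000]: near-certain, meaning nothing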
