
Do Neural Nets Dream of Electric Sheep? (aiweirdness.com) 201
An anonymous reader shares a post: If you've been on the internet today, you've probably interacted with a neural network. They're a type of machine learning algorithm that's used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies -- including Google, Microsoft, IBM, and Facebook -- have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes. Microsoft Azure's computer vision API added the above caption and tags. But there are no sheep in the image. None. I zoomed all the way in and inspected every speck. It also tagged sheep in this image. I happen to know there were sheep nearby. But none actually present. Here's one more example. In fact, the neural network hallucinated sheep every time it saw a landscape of this type. What's going on here?
Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities. Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs.
No (Score:1)
Re: (Score:2)
There are about a dozen different approaches to machine learning, and the neural net approach is probably the oldest in terms of being useful for something. None of them are "smart": all they can do is optimize, mostly randomly, until they succeed.
Image recognition in particular is something that has proven hard for machine learning, perhaps because the categories are fuzzy, or perhaps because humans are so good at it and that's the bar for comparison.
The classic, textbook example is handwriting recognition.
Re: (Score:2)
This!! I cannot get over how people think AI is new. Deep Learning is really just a minor addition to neural nets, taking advantage of our modern fast chips to add convolution operations to the mix. But neural nets are old tech. Minsky's Perceptrons [wikipedia.org] came out in 1969, and Brooks' paper Intelligence without representation [fc.uaem.mx] in 1991.
Re: (Score:2)
And the Space Shuttle was just a minor addition to the Wright Brothers' Flyer (or if you prefer, to the Congreve rockets used against Fort McHenry in 1814).
Re: (Score:2)
Your sarcasm is noted, but it's still a pretty cool field. As much as everyone pointlessly frets about self-aware "AI" taking over, it doesn't seem far-fetched that we'll see a collection of machine learning bits achieve the intelligence of, say, a chicken in our lifetimes. Able to train and optimize from general sensory data, not carefully chosen examples with perfectly matched feedback.
Re: (Score:2)
Oh, I think it's more like 20 now. Neural networks were just too constrained by raw compute power until recently. That's why the earliest commercial applications were mainframe-sized voice recognition, a task which still mostly requires the mothership to be any good at. Suddenly there's a cloud's worth of compute, and lots of commercial funding from companies that expect results, so there's been serious acceleration in the field. Plus the well-funded war between reCaptcha and spammers has made massive
The breakthrough (Score:2)
GPUs
Re: (Score:2)
Well, voice recognition has moved from "takes a mainframe" to "takes a server", and can at least recognize activation words on a small device. That's real progress thanks to the difference in computing power. As someone else pointed out, the massive parallelism of GPUs turns out to be useful for AI models.
But all of this is just self-optimizing systems. I don't think there's any real danger of "strong AI" happening any time soon, regardless of computing power. Our neurology evolved from a combination of a
Re: (Score:2)
So one anecdote I heard about neural nets was they were being trained to distinguish US tanks from Soviet tanks via pictures. It worked fine in training and split-set verification. Then it was tested in earnest and failed miserably. After more testing they went back to the training set. Someone noticed that the Soviet tank photographs were all taken on cloudy days while the US tank photographs were all taken on sunny days (or vice versa). So the neural net had been trained to distinguish between cloudy and sunny days.
There was no
Re: (Score:2)
You should know that humans have been known to identify their spouses as a hat.
They've also denied that their left arm is their left arm.
These are people who are otherwise sane and normal.
They just have one tiny part of the brain which isn't functioning correctly.
And they can't self-check to correct their error.
Re: (Score:2)
Your two-year old could read the whole alphabet? Get that kid into a gifted program!
Why do you need a special NN for each one of those tasks?
Or one big one given different training sets. Or a collection of NNs managed by, and called upon by, an overlord NN. Because your two-year-old ALSO had to learn shapes, animals, and objects before learning the alphabet. And he ALSO has different areas of his brain that are dedicated to certain tasks. Getting them all to work together is one hell of a trick.
But you ALWAYS come to any AI thread and ALWAYS claim there is
Re: (Score:2)
tell the difference between a sheep and a letter, yes. So you are saying that all you need is a big neural net with a different training set, or multiple sub-NNs?
Yeah, if you want an NN to know about something you have to train it. If you want it to know about two different things, you have to give it a training set that spans both. Or you could have some sort of tiered affair. The letter recognition wouldn't see anything while the animal recognition would see a sheep, with something managing both. Generally the broader the training set the longer it takes to figure anything out.
You know this, stop feigning ignorance.
If that is the case, why don't we have one?
We do. Like the article mentions, Microsoft
Re: (Score:2)
What do you mean by consciousness?
Comment removed (Score:3)
Re: (Score:2)
Computers Are As Lazy as We Are (Score:2)
Neural network technology scales with processor advancements, so I understand why AI researchers stay so excited about throwing neural networks at everything - it just keeps getting better and better on its own. The thing is, as great as modern processors are, they aren't even in the same league as a biological brain. It is unrealistic to expect a computer-based neural network to approach the capabilities of even a biological brain in the near future.
AI researchers will only make progress if they pu
This is Known... (Score:4, Funny)
They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities
This is known as "profiling". The sheep will protest, especially the black ones.
Re: (Score:2)
I'm an old white guy that grew up in Detroit, and I agree with you. But, I hoped people got a smile out of my original comment.
Re: (Score:2)
Re: (Score:2)
Given that most neural net imaging these days will split off the color and brightness channels from the image to 'recognize' something, I can see where these blurry pictures get some weird tags.
I've lost count of the times I was made fun of saying that HSV was useful for image processing, doubly so before 2010. It was just one of those mantras CS people tended to repeat without really thinking it through. It may be 17 years late, but I think a strongly worded email to my undergrad TA is in order.
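For anyone who hasn't done it, "splitting off the colour and brightness channels" is a one-liner; here is a minimal sketch with OpenCV. The filename is a made-up placeholder, and this only illustrates the HSV split itself, not whatever Azure actually does internally.

    # Minimal sketch: separate hue/saturation (colour) from value (brightness).
    # "sheep.jpg" is a placeholder filename, not an image from the article.
    import cv2

    img = cv2.imread("sheep.jpg")               # OpenCV loads images as BGR
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # convert to hue/saturation/value
    h, s, v = cv2.split(hsv)                    # three single-channel images

    # A recognizer can now treat colour (h, s) and brightness (v) separately,
    # e.g. thresholding on v alone ignores hue entirely.
    print(h.shape, s.shape, v.shape)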
They don't form proper models (Score:5, Funny)
Now what is that story where an AI is trained to turn the air on in an alien(?) train station when the train enters the platform? I can't find it on Google.
The way I remember it the AI is trained, and then left alone and does a great job until one day when it kills all the passengers because it didn't turn the air on. The reason was that the station clock was broken. The AI didn't learn the train-at-platform correlation, but rather the wall clock schedule (I guess those trains were never early or late).
Re: (Score:2)
This is a constant real-world problem with most of the AI approaches - if you make them too big relative to the problem, they'll just "memorize" the training data. That is, they'll over-optimize on the specifics of the training data and not generalize well at all outside of it.
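The "memorize the training data" failure is easy to demonstrate on purpose: give a high-capacity model data with nothing to generalize from, then compare its score on the data it saw against held-out data. A toy sketch with scikit-learn (synthetic random data, purely illustrative):

    # Sketch: a model big enough to memorize random labels scores perfectly on
    # the data it saw and no better than chance on data it didn't.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))    # 200 samples, 20 meaningless features
    y = rng.integers(0, 2, size=200)  # random labels: nothing to generalize

    X_train, X_test = X[:100], X[100:]
    y_train, y_test = y[:100], y[100:]

    model = DecisionTreeClassifier()  # unconstrained depth = lots of capacity
    model.fit(X_train, y_train)

    print("train accuracy:", model.score(X_train, y_train))  # ~1.0 (memorized)
    print("test accuracy: ", model.score(X_test, y_test))    # ~0.5 (chance)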
Re: (Score:3)
Aren't we doing the exact same thing with school students, training them to pass the tests rather than to apply the things they learn in the real world?
Re: (Score:2)
Ha! Yup, pretty much the same problem. What always amazed me was how easy those tests generally are, but kids are so bad at learning/generalizing, because they're only taught the test, that the teacher has no time to do anything but teach the test. Nasty feedback loop, there.
Re: (Score:2)
Yeah, I guess that explains a few things, like the platypus. Australia is pretty big.
Re: (Score:2)
I'm fairly certain you're remembering a Peter Watts novel; IIRC that's from Starfish (or possibly the sequel Maelstrom?)
There was a story here last week about some researchers who'd managed to 3D print a turtle that would be reliably misidentified as a rifle, despite not actually looking anything like one. These remind me that AI don't really work the way Hollywood (or even sci-fi) would typically want them to.
Re: (Score:2)
Thanks! That's it! I found this PDF [rifters.com] of the Starfish book.
At page 198:
"There is no pilot. It's a smart gel."
"Really? You don't say." Jarvis frowns. "Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"
Yes, Joel's about to say, but Jarvis is back in spew mode. "No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."
Joel's heard this before. The punchline's got something to do with a broken clock, if he remembers it right.
"These things teach themselves from experience, right?," Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."
"Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."
Google still won't bring up the book even with "smart gel" instead of "AI" in the search terms...
I can see the sheep (Score:3)
You've got to remember the algorithms are still relatively primitive. My guess is that the pictures were geo-tagged in a region known for sheep. It saw the tubes coming out of the ground as legs. In the other photo it saw the white rocks in the creek bed as wool with shadows.
Re: (Score:2)
Re: (Score:2)
You've got to remember the algorithms are still relatively primitive. My guess is that the pictures were geo-tagged in a region known for sheep. It saw the tubes coming out of the ground as legs. In the other photo it saw the white rocks in the creek bed as wool with shadows.
The training overall matters; if location is part of it, that can lead to false positives. Also, if the neural net does not try to separate unique objects and then identify them, it might identify the grass as "part" of the sheep. Machine learning is still only as good as the data it is trained on; if it is trained on data with a false correlation, it cannot filter that correlation out without additional training on data that lacks it.
Re: (Score:2)
So do you need to train an AI to recognize every object separately? There are many billions of different objects. How long is this going to take? They seem to have a hard time training it to recognize sheep. When is someone going to work on that?
You don't need to train the AI to recognize every object, but it does need to identify what is an object even if the machine learning can't recognize what that object is. If it can determine the sheep is something, the grass is something and the mountain is something, the machine learning can then identify that one of those somethings is a sheep, and it doesn't care about the others. Now it no longer correlates grass and sheep as a variant on sheep and mountain; instead it sees a sheep among a bunch of unknown objects.
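That "find the somethings first, then name them" idea is roughly what off-the-shelf object detectors do, as opposed to whole-image taggers. A rough sketch with a pretrained torchvision detector; the filename is a placeholder, a reasonably recent torchvision is assumed, and this is not a claim about how Azure's tagger works.

    # Sketch: a detector proposes regions ("somethings") and labels each one
    # separately, instead of emitting tags for the whole image at once.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    img = to_tensor(Image.open("pasture.jpg").convert("RGB"))  # placeholder file
    with torch.no_grad():
        pred = model([img])[0]  # dict of boxes, labels, scores per detection

    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score > 0.5:
            print(int(label), float(score), box.tolist())
    # Each detection is judged on its own pixels, so a grassy hillside with no
    # sheep-shaped region simply produces no sheep box.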
Re: (Score:2)
You don't need to train the AI to recognize every object, but it does need to identify what is an object even if the machine learning can't recognize what that object is
How can an ML system recognize "object" as an abstract concept?
Abstraction is a higher-order brain function that is far above mechanical pattern recognition, and that is all a neural network is.
Re: (Score:2)
You can use unexpected color changes to identify object borders to get a general idea of what an object is. I did one project that enhanced the contrast, making the edges of objects stronger, so the number of bacteria could be counted. It isn't 100%, but the human eye doesn't always properly recognize the border of an object either, which can result in mentally blurring objects together. This is where video would make it easier to identify separate objects, since separate objects tend to have different movement
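The counting trick described above needs surprisingly little code; here is a rough sketch of the same idea with OpenCV. The filename is a placeholder, and Otsu thresholding stands in for whatever contrast enhancement the original project used.

    # Sketch: boost contrast, threshold, then count connected blobs. This is a
    # classic non-learning way to split an image into separate "somethings".
    import cv2

    gray = cv2.imread("bacteria.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    gray = cv2.equalizeHist(gray)                            # enhance contrast
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Each connected white region counts as one object; label 0 is background.
    n_labels, labels = cv2.connectedComponents(mask)
    print("objects found:", n_labels - 1)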
Re: (Score:2)
So once the machine breaks down an image as a number of objects, it can then magically know which objects are sheep and which objects are grass? How does it know if an object is a sheep versus a fence post? Why didn't it work in this case? Did the researchers not do it right?
In most of the machine learning work I've done (minimal, but I got a good grade in a graduate course, so I think I'm minimally qualified to speak on the topic), training is done by giving the machine learning algorithm an image and an indication of whether any sheep are in the image (how many sheep would be better, but more complicated). The algorithm then looks at thousands or millions of images with and without sheep and finds what is common to the images with sheep.
I don't know enough about their training to say why they
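For concreteness, that kind of weakly supervised training looks roughly like the sketch below (PyTorch here, purely illustrative, not Azure's actual pipeline). The only supervision is a per-image sheep/no-sheep label, so anything that reliably differs between the two sets of images is fair game for the network to latch onto.

    # Sketch: training from whole-image labels only.
    import torch
    import torch.nn as nn
    import torchvision

    model = torchvision.models.resnet18(num_classes=1)  # one logit: sheep or not
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    def train_step(images, has_sheep):
        """images: (N,3,H,W) float tensor; has_sheep: (N,) float tensor of 0/1."""
        opt.zero_grad()
        logits = model(images).squeeze(1)
        loss = loss_fn(logits, has_sheep)
        loss.backward()
        opt.step()
        return loss.item()

    # Nothing here says *where* the sheep is, so a grassy hillside that co-occurs
    # with sheep in the training images is a perfectly good cue to the model.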
Re: (Score:2)
Well it takes years to train a human brain to make that kind of recognition. Machine learning is doing this kind of training on orders of magnitude fewer neurons, and the training is done (sometimes) in hours, not years. So getting an accuracy even remotely approaching a 3-year-old's is pretty good. Besides, cows are boring, we only want to learn to recognize sheep, then maybe the sheep can all be found in the voting pool.
Re: (Score:2)
I think it's inaccurate to say that it takes years to train a human brain to recognize sheep, or school buses, or whatever. Children take several years to learn to recognize thousands of objects (say, one type of object for every non-abstract noun we have in our language); but it only takes a few labeled exposures to each kind of object--maybe only one. And I think that holds for things one has only seen in a picture (you could probably identify a camel, or a duck-billed platypus, or a python, long before
Re: (Score:2)
It would be good if someone were to take a sheep and smother you with it until you died. Then, someone could take some photos of your corpse and train a NN to recognize piles of shit.
Re: (Score:2)
You won't hear me saying AI is just around the corner. Machine learning is really cool in what it can do, but it has some severe limitations when not used just right.
Re: (Score:2)
Far more accurate than other 2 week old babies.
Re: (Score:2)
Doesn't even need geo-tagging. That's sheep grazing land... the close-cropped grass is indicative of sheep.
Re: (Score:2)
You've got to remember the algorithms are still relatively primitive. My guess is that the pictures were geo-tagged in a region known for sheep. It saw the tubes coming out of the ground as legs. In the other photo it saw the white rocks in the creek bed as wool with shadows.
More likely it's working as a scene-type detection algorithm. In many cases it's an easier task to classify a scene, so it was probably learning that and using it as a strong prior. The learning algorithm will pick up on correlations, whet
Re: (Score:2)
Why are the algorithms so primitive?
Because the whole model used is trash.
Are neural networks new? The concept of neural networks was invented in the 1940s. Why can't they recognize sheep yet?
Again, the whole process is trash. Rather than instructing the algorithm at all about what a sheep is, it is provided two folders of images. One is labelled "has sheep" and the other is labelled "no sheep" and, with no prior perspective on what a sheep is, the algorithm finds some sort of pattern that is present in the "has sheep" folder but not in the "no sheep" folder.
Because the scenes are not controlled to have identical situations other than the existence or non-existence of sheep, there will be other correlations that line up with the folders. Like the old airplane identification training that, instead of analyzing the objects the researchers wanted it to analyze, ended up simply evaluating the brightness level of the picture, because all the "has plane" images were taken on bright days and the "no plane" images were grey and overcast. Humans try to make more perfectly random sets, but with sufficiently complicated documents (and that's all a computer sees in a picture, yet another kind of document), there will be unexpected correlations regardless of how well a human tries to filter them out.
That is a bit simplistic, if not inaccurate. One of the things I learned with machine learning is that false correlations are really bad. So accurate training means (preferably) nothing in the image can be correlated except for what the machine learning program is supposed to be trained on. This fails miserably if there are too many other correlations (fields and mountainsides) in the training images. Humans are somewhat the same way; give us a picture of a fish swimming through long fields of grass and
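The plane/brightness shortcut is easy to reproduce synthetically: if one set of photos happens to be brighter than the other, a classifier that looks only at mean brightness scores well without ever "seeing" the object. A toy sketch (fabricated numbers, purely illustrative):

    # Sketch: a spurious correlation (brightness) is enough to "detect planes".
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 500
    # Single fake feature: mean pixel brightness per photo.
    bright_plane = rng.normal(0.7, 0.1, n)  # "has plane" photos, sunny days
    bright_none = rng.normal(0.4, 0.1, n)   # "no plane" photos, overcast days

    X = np.concatenate([bright_plane, bright_none]).reshape(-1, 1)
    y = np.concatenate([np.ones(n), np.zeros(n)])

    clf = LogisticRegression().fit(X, y)
    print("accuracy:", clf.score(X, y))  # high, with zero information about planes
    # Hand it a plane photographed on an overcast day and it fails completely.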
Re: (Score:2)
Nice copy paste skills, it actually has different words from the other reply!
Re: (Score:2)
I wouldn't bother arguing with this guy. He's angry about deep learning for some reason (can't figure out how to run PyTorch?) so he's determined to spam every thread with ludicrous straw-man arguments, like the claim that if it's not 100% strong AI it's not useful, or the pretense that the algorithms haven't improved in the last 60 years.
Re: (Score:2)
Yep, just like a brain works, like, a brain that makes a gnat look like Einstein. That they work as well as they do is rather impressive.
This is where prejudices come from (Score:2)
A prejudice is simply when you apply a usually-correct general rule to an individual, without first verifying that it's actually true in that individual
Re:This is where prejudices come from (Score:4, Insightful)
I'm with you except for the part about the general rules underlying prejudices being usually correct. I don't believe that is a requirement for human beings to accept the rule. So I would say the "pre" in "prejudice" really means the rule doesn't get tested for accuracy or revised.
Fundamentally, thinking of deep learning as machine-generated prejudice changes one's enthusiasm for the technology.
Re: (Score:2)
I'm with you except for the part about the general rules underlying prejudices being usually correct.
Depends what you mean by "generally correct". All sorts of correlations exist, and humans are bad at determining which are causative and which are not (machines are worse). Recording a correlation is nearly worthless for predictive power.
Fundamentally, thinking of deep learning as machine-generated prejudice changes one's enthusiasm for the technology.
Deep learning (all machine learning) is particularly bad. I
Humans make the same mistakes (Score:2)
... especially under any of the conditions below:
# under time constraints, given only a fraction of a second to examine a sample
# having to process a large number of samples
# an excessive amount of detail
# tasked with subjects they don't deal with often: recognizing different types of plants, different types of cells, etc.
In fact human beings likely make more silly mistakes than neural nets under those conditions.
Re: (Score:2)
That would only be relevant if the system ran for a limited time rather than until it produces an answer.
It doesn't.
Re: (Score:2)
Re: (Score:2)
I assume that testing is based on using the same distribution as training. (The easiest way to do that is to take a big set of examples and randomly split it into train and
Well, if they do ... (Score:2)
Only if they help the tortoise who is upside down (Score:2)
Re: (Score:2)
Eh, even if they don't flip over the turtle, we can make sure they stay INTERLINKED. You are a collection of cells. cells. do you want mod points? interlinked. is microsoft evil? interlinked. is wayland the way? interlinked. within cells interlinked.
oh come on you lazy slashdot filter. Grow some AI and pick up when capslock is funny... ok, for full effect, assume I'm yelling at you in the last half. You know the scene.
I saw sheep also (Score:2)
Re: (Score:2)
Obl. Pratchett quote (Score:2)
"Real stupidity beats artificial intelligence every time." TERRY PRATCHETT
The lesson is that AI will have biases (Score:2)
The lesson is that AI will have biases. They will have the exact same sort of problems and issues that people have when it comes to presumptions built up from prior experience. Stereotypes, prejudices, and bias. Sounds bad, right? But it's the basis of CONTEXT. It's how language works. Things like pronouns and "it" can refer to anything, and you have to rely on context to link it to something. And we do so based on what makes sense given experience. Our eyeballs do the same thing. They fill in a lot
Re: (Score:2)
So the AI was biased because it preferred black and white fence posts in the ground to sheep?
It'll have biases due to its training set. In this case, I think it was shown a bunch of pictures of fields of sheep and told those are sheep, so it assumes fields with white blobs are sheep. To MS's Azure, "sheep" isn't an animal with wool and split hooves, it's a field with white blobs. At least in part. The two are tied to each other. If you blur this picture enough and ask people about it, they might make the same sort of assumption. But they'll know that it's a field with sheep in it. I'm not sure Azu
Re: (Score:2)
See, this is why it's actually sometimes useful to argue with trolls. Nothing quite motivates me like delivering a brow-beating to an ignorant dumbfuck.
Remember when I said I was running with assumptions? Yeah, turns out I was right about that and things have come a long way since 2012. Just like I mentioned in the other post that you likely didn't really read, this article [medium.com] points out that there have been significant advances. And they've got tools to help object recognition figure out object localization (drawin
Re: (Score:2)
If you were standing a long ways off, and didn't have binoculars, you might make the same mistake, at least until you waited an hour and none of the sheep moved. But of course in a picture, sheep never move.
NN only as good as the training data (Score:2)
Neural nets can be only as good as the data used to train them. Outside of the training data, they are pretty much a wild guess. Which points to the real problem with Neural networks. If your training data doesn't cover the actual real world data very well, your network will not be good at all those unique edge cases. Over training (using too much specific training data) is as much of an issue as bad training data too. Over trained networks jump to conclusions based on the wrong things and are just as b
Dijkstra's wisdom (Score:2)
Does your air conditioning filter understand the difference between air and dust?
Does your cell phone's finger print reader or facial unlocker recognize you? Does your mirror?
Do your head
Re: (Score:3)
Do you?
Re: (Score:3)
"Does your calculator know or understand mathematics? What about an abacus?" Oooh, for a moment I thought you were going to insult my slide rule.
Re: (Score:2)
what exactly do you mean?
I think this is a good but tricky question. I think mankind has been struggling for thousands of years to really nail down what it means to be conscious, to think, to understand, to know. The best response I have is a wimpy one: "People are intelligent, people know and understand, people are conscious. Machines are not."
If you don't know anything about machines or people then this answer won't help.
If one wishes to be intentionally obtuse or if one really wishes to equivocate then again this a
Re: (Score:2)
Can submarines swim ?
"People are intelligent, people know and understand, people are conscious. Machines are not."
The problem with it is not even that it is wimpy. It makes "Artificial Intelligence" almost impossible by definition. This definition of intelligence excludes non-humans, so only when we artificially manufacture what you call "people" will we have any hope for artificial intelligence.
Why is it a problem? It is a perfectly serviceable definition of intelligence. It is a problem because it is useless. We have real work here for machines to do. Some of which humans historically
Define "sheep" for a Neural Net (Score:2)
Here's one more example [tumblr.com]. In fact, the neural network hallucinated sheep every time it saw a landscape of this type. What's going on here?
Computers don't recognize organic life forms. A "sheep" is nothing more than a pattern of pixels. In this case, a black snout, white body, and black legs below -- like this [wikimedia.org]. Do we see anything similar to that in the picture?
Still of some value (Score:2)
Algorithm Method (Score:3)
"Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs."
Run after a sheep with your kilt hoiked up around your chest and they're labeled as Scottish girlfriends.
Re: (Score:2)
Neural networks are nothing more than an approximation of a math function. Mathematically they are analogous to spline interpolation or Taylor series expansion. The only difference is that splines and Taylor series have a well-known method for figuring out the unknown parameters, while neural nets are just trained by finding a minimum in parameter space. Like splines and Taylor series, they don't work outside of their bounds.
There is literally nothing intelligent about them.
While all of that is true, and worse (they tend to optimize to the first local minimum they stumble upon, which might be a poor choice), don't exaggerate the difference between that and how the brains of simple animals work. If we can model a space as a set of objects, that's more than half the battle.
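The "don't work outside their bounds" point is the same extrapolation failure you get from any fitted approximator; a quick numpy illustration, using a polynomial fit as a stand-in for the network:

    # Sketch: a fit that is excellent inside the training range and nonsense
    # outside it, which is the same extrapolation problem trained nets have.
    import numpy as np

    x_train = np.linspace(0, 2 * np.pi, 50)
    y_train = np.sin(x_train)

    coeffs = np.polyfit(x_train, y_train, deg=7)  # fit only inside [0, 2*pi]

    inside, outside = 3.0, 12.0
    print("inside bounds :", np.polyval(coeffs, inside), "vs", np.sin(inside))
    print("outside bounds:", np.polyval(coeffs, outside), "vs", np.sin(outside))
    # The outside prediction is off by a huge margin; nothing in the fit "knows"
    # that the function keeps oscillating beyond the data it saw.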
Re: (Score:2)
Computers that can cheaply, quickly do matrix math with thousands of rows are new.
Re: (Score:2)
Edge cases are infinite, at some point the only thing which can improve performance further is abstract reasoning.
Re: (Score:2)
So? (Score:3)
Re: (Score:2)
Just keep walking up that evolutionary tree until it's close enough.
We're all just advanced small furry mammals.
Re: (Score:3)
Neural nets do actually work in the way the neurons work, at least abstractly. Sure, the implementation is a bit different, as it's all just a bunch of matrix math and normalization, rather than an analog "wire logic" network, but the computational result is similar. It's more a matter of scale (AI neural nets are quite small) and refinement (who knows how many layers of optimizing-how-to-optimize even a simple animal brain has).
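"A bunch of matrix math and normalization" is literally all a forward pass is; a bare-bones numpy sketch of a two-layer net, with random weights just to show the mechanics:

    # Sketch: a two-layer net's forward pass is matrix multiplies plus pointwise
    # nonlinearities and a normalization at the end, nothing more exotic.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)  # layer 1 weights and bias
    W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # layer 2 weights and bias

    def forward(x):
        h = np.maximum(x @ W1 + b1, 0.0)   # ReLU "neurons"
        logits = h @ W2 + b2
        e = np.exp(logits - logits.max())  # softmax normalization
        return e / e.sum()

    print(forward(rng.normal(size=16)))    # two class probabilities summing to 1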
Re: (Score:2)
Neural nets do actually work in the way the neurons work
I don't see how that conclusion is justified, given that we are just scratching the surface in understanding how living neurons work.
Re: (Score:2)
The basics of how individual neurons work is fairly well understood, and there's been remarkable progress in the past couple decades in understanding how simple neural systems work. Researchers are now doing stuff like modeling the simplest brains down to each neuron, and testing the model against the source (with reasonable success). It's enough to confirm we're not totally off-base.
Re: (Score:2)
How long have brains been around for? 40 years is nothing.
Re: (Score:2)
They do work in the same way? Then why can't it recognize sheep?
It is recognizing sheep. The problem is that it's recognizing sheep when it shouldn't.
This is a problem that is very familiar to humans. You see it every time numerologists find "codes" in their holy book that supposedly prophesy real events. Every time somebody finds a shape in a cloud or Jesus in burnt toast.
The neural network went, "I expect to see sheep flocks in pastures, this is a pasture, and there are whitish things here. I see a pattern, so I'm now classifying the whitish things as sheep."
Then why can't it recognize sheep? A two year old can.
We come
Re: (Score:2)
Here are some things artificial neural networks will kick your ass in.
Yeah about that...
The first one is that neural networks are "better" than humans at image and object recognition. Let me assure you, they are very much not. You can go download a state-of-the-art pre-trained net and run it on data if you don't believe me. They can do better at classifying ImageNet in some circumstances, but ImageNet is a remarkably restricted dataset, and it's also got a fair amount of label noise.
If you construct an art
Re: (Score:2)
Sheep and humans share a fair bit of ancestry, and DNA. Human ancestors have spent a lot of time with animals even closer to sheep genetically than humans themselves are. Humans have an obvious advantage over computers in recognizing sheep.
If you construct an artificial enough task in this area you can make nets look better than humans, but they really are not close.
Actually recognizing sheep is an "artificial" task to make humans look better than nets. In the general case, nets can easily be better than humans per joule of energy consumed for training + task.
Re: (Score:2)
Actually recognizing sheep is an "artificial" task to make humans look better than nets.
It's a known failure mode of nets. The net has no understanding of the object, so it's unable to learn the difference between an object and the object's context. Humans can, nets can't, and until they can they'll be much worse at a very wide range of tasks.
In the general case, nets can easily be better than humans per joule of energy consumed
What general case are you talking about where nets routinely outperform humans?
Re: I don't understand (Score:2)
I didn't say they do. I said they can. Humans are the bosses, humans decide what these nets do, and humans generally make them do things that make sense to us, and laugh at them when they fail.
General case could be as simple as detecting the color in a particular spot in the picture. Humans get too biased by the surrounding color.
More complicated general cases cannot be written easily in a /. post. "Sheep" is a complicated pattern, which humans convey in a single word. A relatively simple attempt to descr
Re: (Score:2)
I believe (but don't quote me) that the neural nets in our brains (and for that matter, the brains of wolves) that are involved in sheep recognition are much larger than those in computers.
---------
Little Red Riding Hood, you sure are looking good!
You're everything a big bad wolf could want.
-- Sam the Sham & The Pharaohs
Re: (Score:2)
Human brains are the result of at least 100 million years of evolution. So 99999960 more years to go.
Re: (Score:2)
"Black Sheep" https://www.youtube.com/watch?... [youtube.com]
a great movie
Re: (Score:2)
neural networks are not practically auditable.
I'd say that goes for any sort of machine learning or AI. Same thing is certainly true for genetic algorithms as well.
However if you present them with an example that is significantly different, it may not work as well.
Right, but that's what interpolation and extrapolation do: they see trends and apply that knowledge to new situations. If the new thing doesn't follow the trend of your past experience, you're screwed. Just like people. The trick with AI at this point is giving them broad experiences rather than niche ones. Expanding their horizon is almost certainly going to involve a collection
Re: (Score:2)
We have been training neural networks for over 40 years now. Why can't they recognize sheep yet? What progress has there been?
You've been spamming this thread with the same question for ages. what progress has been made?
Re: (Score:2)
I see you can copy-paste. Why can't you type new content? What progress have you made?
Re: (Score:2)
But in a more blurred picture you can jump to the same conclusions.
Yup, I took off my glasses so I couldn't see and exactly matched the "AI's" guess for the first photo.
Re: (Score:2)
How is that fundamentally different from what happens in your head when you see a picture of a sheep?
Re: (Score:2)
Atoms have a nucleus which attracts electrons. Electrons repel other electrons and dance around the nucleus at various shells. Their position isn't like the orbit of the planets around the sun or.... "rubber balls rotating around each other". Which... would presumably need... string and a diorama, or a big rubber sheet.... or something... They are vaguely similar in the sense that balls could represent particles. Different sized or colored balls could be similar to how different particles have differe
Re: (Score:2)
We conceptualize,
I dunno, that's kind of an empty statement when talking about how we learn things. If I told you training an NN is effectively "conceptualizing" how would you refute that?
generalize
I think NN does this. It takes a whole bunch of pictures of sheep and figures out that they're the same and generally called "sheep".
and abstract concepts we understand and have meaning for, we make associations and replace concepts with one another.
NN definitely makes associations. But yeah, I don't think they handle abstract concepts at all so far. Neither do ants or rats, but I did ask about human brains.
We do not learn the real world directly; we in fact learn the representations of it that we continuously make for ourselves.
Did you mean we not only learn from the real
Re: (Score:2)
Well, that gets into a "no true Scotsman" fallacy. Think about it like "life". Humans are alive and are really complex. But so are nematodes, bacteria, and arguably viruses and prions. The issue that comes up is that we don't have a solid definition of what "intelligence" or "alive" really are. Personally, I think viruses are alive, even though my high school biology class told me otherwise. A virus self-replicates and makes copies which self-replicate. That's about it. And I think intelligence is defined by learning. If it
Re: (Score:2)
How about intelligence is the ability to provide answers when the input to output processing cannot be encoded in symbols? [vaguely things like 'intuition'];
I'm not sure what that looks like. Could you give me an example of this sort of thing to show that you or I are intelligent? (And maybe, like, rats and ants?)
And, uh.... 100% of your intuition is encoded in a 2-bit DNA sequence of 3 billion base pairs, or about 725 megabytes of data. It can be represented by the symbols GTAC. If it's not directly in the DNA, the thing that dictates your intuition is designed in and created by DNA. There's also some details like how it gets wound up. And physica
Re: (Score:2)
you need to go meta physical. outside matter.
I dunno man, I avoid mystical shit like the plague. The idea that there's keter-level stuff out there that we not only don't know about but fundamentally can't comprehend is DEFINITELY "just a belief". How would you prove it? Even for the Heisenberg uncertainty principle, we know what's unknowable, and can work with it enough that we've got qubit computers now.
The claim about intuition encoded in matter/DNA -- is there a scientific proof for it? isn't that just a belief?
After throwing that last one at me about how things are unknowable metaphysically, you then turn around and ask for proof about DNA and instinct? That's on
Re: (Score:2)
go metaphysical when you are bored
uuuuuuuuhhhhh....
or not satisfied with the physical/matter answers.
YES, there we go. People invoke the metaphysical when physics is insufficient to explain the phenomena. ....But... we can explain this one.
about rats mating etc; these can be explained by matter. But claiming thoughts/intuition also arises from dna/matter is just a belief
... "Matter" in this case is the DNA inside of rats. The real physical non-meta molecules that exist inside of cells. Rats know to mate through instinct. The thought "go stick my dick in that" is instinctual. You're agreeing that the mating instinct in rats is explained through DNA. ie, DNA dictates instinct. Which is kind of the crux of the argument