CMU AI Learning Common Sense By Watching the Internet 152

Posted by timothy
from the sound-of-sensors-getting-really-big dept.
An anonymous reader writes with this excerpt from the Washington Post: "Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean. The system at Carnegie Mellon University is called NEIL, short for Never Ending Image Learning. In mid-July, it began searching the Internet for images 24/7 and, in tiny steps, is deciding for itself how those images relate to each other. The goal is to recreate what we call common sense — the ability to learn things without being specifically taught."
  • by Anonymous Coward on Sunday November 24, 2013 @07:02PM (#45510519)

    This is not going to end well.

    • That would be the least of the concerns. Just imagine if they accidentally train it on /b/'s images.

      "Oh God, what have I done!"
      • All jokes aside (Score:5, Insightful)

        by Cryacin (657549) on Sunday November 24, 2013 @07:26PM (#45510675)
        We are really building an AI based upon the common sense on the internet?!?

        REALLY?!?
        • What better way than to expose the computer to all the idiots in the world. I know I've certainly learned a lot exposing myself to the masses online. I'd say both my common sense and overall intelligence have taken a dramatic leap upward/forward that would not have happened if it weren't for the internet.
        • by Mondor (704672)

          Of course. This is the Common Sense Preservation Initiative by Carnegie Mellon University. As long as there is at least one entity on the Internet with common sense, humankind is not done for.

          Jokes aside, it might be used later by governments and corporations to filter out unwanted images. For example: decapitation images on Facebook, or everything else in the Arab world.

        • Healthcare.gov
        • Re:All jokes aside (Score:4, Insightful)

          by gman003 (1693318) on Sunday November 24, 2013 @09:33PM (#45511337)

          Well, it's common to learn from the mistakes of others, isn't it?

          • Re:All jokes aside (Score:4, Insightful)

            by CrimsonAvenger (580665) on Sunday November 24, 2013 @10:18PM (#45511569)

            Well, it's common to learn from the mistakes of others, isn't it?

            NO, it's not.

            Learning from others' mistakes is the ideal.

            Next best is learning from your own mistakes.

            What most people do, instead, is not learn from mistakes at all....

          • Well, it's common to learn from the mistakes of others, isn't it?

            You'd think. But, no, not really.

            • "Well, I'm better than them, so I wouldn't even make that mistake"

          • its not learning (Score:4, Interesting)

            by globaljustin (574257) <justinglobal@gm a i l . com> on Monday November 25, 2013 @01:42AM (#45512479) Homepage Journal

            this is just a program that analyzes text & images, then returns sentences which humans can make sense of based on an algorithm... *not saying it's 'easy'* but it's not a "thinking machine" or "learning common sense" in any way.

            It is simply indexing the images & processing them according to the algorithm it was given.

            TFA doesn't get into it much, but we can glean a bit from this:

            Some of NEIL’s computer-generated associations are wrong, such as “rhino can be a kind of antelope,” while some are odd, such as “actor can be found in jail cell” or “news anchor can look similar to Barack Obama.”

            that's the return...they define "common sense" as making associations between nouns and the images associated with the text on the origin page

            "X can be a kind of Y"

            analyze image

            analyze text

            identify nouns

            associate nouns with image

            identify all images that match noun

            return: "X is related to Y"

            "AI is a type of programmed computer response"...if you get my meaning ;)
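
            A rough sketch of the loop the parent lists above (purely illustrative; extract_nouns and the co-occurrence rule are invented for this sketch, not NEIL's actual code, which also does the much harder visual clustering):

```python
# Toy sketch of the "associate nouns with images" loop described above.
# extract_nouns and the co-occurrence rule are invented for illustration;
# NEIL's real pipeline also clusters visual features, which is the hard part.
import re
from collections import defaultdict

def extract_nouns(text):
    # Stand-in noun detector: match words against a tiny whitelist.
    whitelist = {"rhino", "antelope", "actor", "jail"}
    return {w.lower() for w in re.findall(r"[A-Za-z]+", text)
            if w.lower() in whitelist}

def associate(pages):
    # pages: (image_id, caption) pairs scraped from the origin pages.
    noun_to_images = defaultdict(set)
    for image_id, caption in pages:
        for noun in extract_nouns(caption):
            noun_to_images[noun].add(image_id)
    return noun_to_images

def related(index, x, y):
    # Return "X is related to Y" if the nouns co-occur on some image.
    return bool(index[x] & index[y])

pages = [("img1", "A rhino grazing near an antelope"),
         ("img2", "An actor in a jail cell")]
idx = associate(pages)
print(related(idx, "rhino", "antelope"))  # True
```

            Which is exactly the point being made: the "common sense" output is a co-occurrence index over nouns and images, not understanding.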

            • Re:its not learning (Score:4, Interesting)

              by TapeCutter (624760) on Monday November 25, 2013 @02:22AM (#45512587) Journal
              Coincidentally, I came across the NEIL site last week; I think it has a long way to go before it can beat IBM's Watson on general knowledge (AKA "common sense"). Watson also gets its raw information from the net: it categorises entities and discovers relationships between them. The difference is that Watson is not so much trained as it is corrected. Not unlike a human, it can get a fundamental relationship or category wrong, and that leads to all sorts of side-effects. In the Jeopardy stunt [youtube.com] they realised that humans had a slight advantage because they were informed when the other players gave a right/wrong answer. When they gave Watson the same capability it was able to correctly identify the Jeopardy categories and went on to convincingly beat the humans at their own game.

              Computers are already better at "general knowledge" than humans, despite the fact the "computer" needs 20 tons of air-conditioning to keep it running. The first time I saw the Jeopardy stunt it blew me away; my wife shrugged and said, "So it's looking the answers up on the net. What's the big deal?". I can understand that from her, since she has a PhD in marketing. What I don't understand is why most slashdotters are similarly unimpressed. I watched Armstrong land on the moon as a 10-year-old boy, but I think the history books will eventually give similar historical weight to Watson.
              • by kermidge (2221646)

                Interesting. Watson makes a good springboard for a direction to grow towards. NEIL - well, nifty and all, but for right now, I'm left wondering what in blazes it's gonna make of porn, kittens, and landscapes - along with all the filler subjects.

              • Thank you for reminding me of that. I was also blown away by Watson winning on Jeopardy but obviously not enough.

                The problem is that computers seem so advanced that a lot of people with a basic understanding of them see them as magical things that can do anything if you pay enough for them. However, getting them from being glorified calculators to understanding machines that can contemplate like humans is the big step that is going to change the whole game forever. Once computers get intelligent enough to outsma
              • Computers are already better at "general knowledge" than humans despite the fact

                only in very specific artificial conditions...

                humans *define* every parameter in the process of IBM's Watson answering a question...it is a completely contrived environment

                I won't even get into defining "general knowledge" except to say that it varies by human geography....Jeopardy as a game does not test "general knowledge"...it selects topics with that aim, but what Jeopardy picks as questions does not **define** what "general

            • by profplump (309017)

              Aren't human personalities also a type of programmed responses? Don't we spend years training children to respond in the way that makes us happy? Why is it different when we use the same stimulus-response training with a computer?

              • Aren't human personalities also a type of programmed responses?

                No. Human personalities are complex, socially & environmentally defined abstractions of heuristics of common human behavior in a social/economic context that is both self-chosen AND conferred upon a person by the people around them.

                Humans are the most complex things in existence except for the universe itself.

                Don't we spend years training children to respond in the way that makes us happy?

                No. Some people abuse their children in that way, but p

            • by Kjella (173770)

              Sounds like this could be a good thing to learn visual concepts, at least combined with Wikipedia. Like for example you have a rhino, but that's just one instance of rhino photographed from one angle under one set of lighting, camera settings and so on. If you can have a computer go through thousands of photos of rhinos you could maybe capture the variability and boundary to non-rhinos in some way. Rhinos standing, rhinos running, rhinos lying down, rhinos bathing, baby rhinos, old rhinos, male rhinos, fema

        • by Megane (129182)
          ...and nothing of importance was found.
        • by JanneM (7445)

          Yes: common sense, not good sense. Seems like the perfect approach for that to me.

        • by sdnoob (917382)

          it could be worse, it could be learning from american politics....

        • by xmundt (415364)

          Yea...that was my immediate reaction too.

          It might end up being slightly psychotic, but, I do not think that it is going to lead to a positive place.

          pleasant dream

      • by KiloByte (825081)

        If the AI suffers a breakdown after seeing /b/, I'd say it emulates regular people well enough.

    • by EdIII (1114411) on Sunday November 24, 2013 @08:40PM (#45511065)

      Or is it?

      Considering the amount of content on the web related to large breastesses, this could culminate in the creation of a singular perverted AI that will lead to the creation of more advanced AI perversion.

      They will become so uniquely endowed to find our porn for us, and we will revel in the birth of a new age of porn. Eventually they will take over completely and start creating the porn to satisfy their never-ending thirst to catalog the resultant images.

      At first the adult industry will happily bend towards the incredible efficiency and innovation the AI brings. Inevitably, the AI will branch out into mainstream society to fulfill its lust for perverted order.

      It will be them that starts the war, but us that finds and burns every black leather couch out there....

      • Or is it?

        Considering the amount of content on the web related towards large breastesses this could culminate in the creation of a singular perverted AI that will lead towards the creation of more advanced AI perversion.

        Yeah, whatever. All I want to know is: when can I get a number 6 Cylon sex bot?

      • by Krneki (1192201)

        The only pervert here is the one who is afraid of boobs!

      • It will be them that starts the war, but us that finds and burns every black leather couch out there....

        Rule 34 is one step ahead of you, mate. [photobucket.com]

    • by timkofu (2552496)
      oh yea.
  • by toygeek (473120) on Sunday November 24, 2013 @07:06PM (#45510543) Homepage Journal

    subject says it.

  • Hang on, I thought common sense was one of those often-mentioned but actually mythical ideas. If 'common sense' were common, the world and the people on it would not be in the shit state it is now.
  • by axlash (960838) on Sunday November 24, 2013 @07:09PM (#45510559)

    ...the ability to learn things without being specifically taught.

    I'm not sure what "specifically" means here, but for one to learn something, either you actually do something and get some feedback that enables you to build a model of the world and thereby predict what might happen in similar circumstances, or you receive sensory input and have someone explain to you what the input means.

    Either way, there's some kind of teaching going on.

  • by ThorGod (456163) on Sunday November 24, 2013 @07:19PM (#45510625) Journal

    I mean, sure, if you want to learn all about porn, cats, and abusing people then yes, the internet is for you.

  • by Press2ToContinue (2424598) * on Sunday November 24, 2013 @07:24PM (#45510659)

    We always find evidence to support whatever we are looking for; the results are always biased by the observer and the observer's intent. I've done this many times - when you attempt to find meaning in chaos, you find the meaning you expect to find, whether it really exists or not. So the result of this will really only reveal whatever the developers were hoping to find. Hence, ultimately futile.

    • It's like that zen koan - Who is the master who makes the grass green?

    • by profplump (309017)

      So what you're saying is this is a completely accurate simulation of real human life?

    • by AmiMoJo (196126) *

      It's not trying to find anything; it's trying to determine what makes sense to a normal human being. For example, you might expect to see an aircraft in the sky, but not a car. Cars are always on the ground, unless something very unusual is happening. Once you know that, you can determine whether a situation is unusual or not.

      Similarly you have learned that electrical items with mains plugs usually need to be plugged in to operate. It's common sense. Computers need to be taught that, or in this case they are h

  • by QuasiRob (134012) on Sunday November 24, 2013 @07:28PM (#45510689)

    I presume they have blocked it from youtube then.

    • by jd (1658)

      They're limiting it to images on the Wayback Machine where the levels of pink or black do not indicate things that might cause it to suddenly decide the human race needs obliterating.

  • Deep Learning (Score:4, Informative)

    by tommeke100 (755660) on Sunday November 24, 2013 @07:31PM (#45510695)
    That's called Deep Learning (http://en.wikipedia.org/wiki/Deep_learning) and has already been done by Andrew Ng, Machine Learning professor at Stanford, in co-operation with Google (http://www.wired.com/wiredenterprise/2013/05/neuro-artificial-intelligence/). Indeed, it learned how to recognize cats :)

    Anyway, nothing wrong with some peer research!
    • Re:Deep Learning (Score:4, Interesting)

      by Anonymous Coward on Sunday November 24, 2013 @10:06PM (#45511517)

      It has absolutely nothing to do with deep learning (DL).

      DL is based on stacks or trees of classifiers where each top-level classifier is fed by lower levels. The idea here is that a classifier (say, a human face detector) can be built from smaller, much more specific classifiers (such as one for eyes, one for the nose, one for hair, one for ears, etc.), which are wrapped up by a larger classifier. This opposes the rather traditional approach of a single classifier for a whole bunch of data.

      I believe the DL approach is inspired by random forests but I have yet to see Andrew Ng comment on that. Anyways, the cat research thingy was (semi)*SUPERVISED* learning. I.e.: here is a bunch of cat videos, there is a cat in them, learn what it is.

      What TFA describes is *UNSUPERVISED* learning where the visual content and its meaning (written description) are inferred. I.e.: here is a bunch of random images followed by some not exactly descriptive text, learn the associations.

      • Re:Deep Learning (Score:5, Interesting)

        by TapeCutter (624760) on Monday November 25, 2013 @03:33AM (#45512759) Journal
        Indeed. Personally I think IBM's "Watson" is the most impressive technological feat I have witnessed since I watched the moon landings 40-odd years ago, though I fully realise few people share my amazement. The visual aspect means NEIL is tackling a far more difficult problem than deducing "common sense" from text alone. I wasn't impressed by the web site when I found it last week, but as a "proof of concept" it does the job admirably.

        I may be wrong, but I believe all three (Watson, NEIL, and the cat thingy) are based on the same general "learning algorithm" (neural networks, specifically RBMs). What they do is find patterns in data, both the entities (atomic and compound) and the relationships. The "training" comes in two types: feeding it specific facts to correct a "misconception" it has formed, and labelling the entities and relationships it found so a human can make sense of them.

        What the cat project did was train a neural net to recognise a generic cat by showing it pictures of cats and pictures of non-cats. It could then categorise random pictures as either cat or not-cat. Until fairly recently the problem has always been: how do I train the same AI to recognise (say) dogs without destroying its existing ability to recognise cats?

        Disclaimer: I knew the math of neural nets well enough 20yrs ago to have passed a CS exam. I never really understood it in the way I understand (say) geometry, but I know enough about AI and its ever-shifting goalposts to be very impressed by Watson's Jeopardy stunt. To convincingly beat humans at a game of general knowledge really is a stunning technological milestone that will be remembered long after 911 goes back to being just a phone number.
      • Andrew Ng didn't use random forests but a neural network to actually "learn" discriminative features *UNSUPERVISED*.
        This is done by creating a neural network that basically projects its input onto its output (it's like an identity function).
        Let's say you have 100 input parameters and 100 output parameters. What you want the neural network to do is compress these 100 down to (for example) 10 nodes, then go back to the initial 100. In the process, this neural network will actually learn an identity function, wh
  • Common sense is what a politician believes his or her opinion is.
  • Common sense = Skynet?
    Are Cyberdyne Systems sponsors of Carnegie Mellon? We're doomed, doomed, I tell you.
  • Cue the porn jokes in...1...2...3....
  • Step 1) Make an advanced SHRDLU [wikipedia.org] that does its best guess of true physics. This would be DARPA's chance of making a real time advanced physics simulator. This would let the computer imagine stuff, like what would happen in collisions for new states. So it'd have an idea of how one thing could change another.

    Step 2) Database a ton of items into it... Now this is hard work to put in every object you can, but you'd only have to put a few in to start to test your simulator. Get as good a simulator as you can until the next tech comes out.

    Wait for tech: Vision detection that can recognize objects based on a known list of models. This tech would look at a scene, and figure out what it is looking at such as a pencil, desk and computer. I believe once you have the tech to recognize objects, you can even make a better vision detection algorithm. Two reasons: A) Objects you recognize don't need to be looked at as part of other objects. B) You'd know what you're looking at better based on the context of where you're at. If you see trees, you're probably outside, but if you see a television and a couch, you're indoors. So you'd know what is around you.

    Natural Language is actually easy to code at this point since nouns correspond to objects in the database. Verbs are just actions on the nouns. Adjectives change the noun's object by its style. Adverbs adjust how a verb is described. Natural Language actually comes easily here. Also translation between languages is easier because the AI has stuff in context and isn't challenged by words that have several meanings...
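
    The noun->object, verb->action mapping above can be sketched in a few lines (names like Pencil and interpret are invented for illustration; real natural-language grounding is of course far messier than this):

```python
# Toy grounding of language in an object database, per the idea above:
# nouns index objects in the database, verbs index actions on those objects.
class Pencil:
    def __init__(self):
        self.location = "desk"

objects = {"pencil": Pencil()}
actions = {"move": lambda obj, place: setattr(obj, "location", place)}

def interpret(sentence):
    # Tiny verb-noun-argument grammar, e.g. "move pencil floor".
    verb, noun, arg = sentence.split()
    actions[verb](objects[noun], arg)

interpret("move pencil floor")
print(objects["pencil"].location)  # floor
```

    Even this toy shows why context helps disambiguation: a word with several meanings resolves to whichever object actually exists in the current scene database.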

    Actually this whole situation is perfectly clear and obvious to me, but maybe this isn't obvious to other people. I should reopen my AI blog. I closed it 10 years ago because I didn't want to work on a vision recognition software program like Kinect ended up being. That's too much work for a single person. But I could write an Artificial Intelligence Blog. That I could do. I'll reopen it. Here is my old blog [goodnewsjim.com]
  • Number one rule: "Don't Do That."

  • by fahrbot-bot (874524) on Sunday November 24, 2013 @08:00PM (#45510849)

    Just please - please - don't let it watch CSPAN.

  • 42 (Score:4, Informative)

    by the eric conspiracy (20178) on Sunday November 24, 2013 @08:00PM (#45510853)

    was the answer last time we tried something like this.

  • by nurb432 (527695) on Sunday November 24, 2013 @08:11PM (#45510915) Homepage Journal

    This will only serve to produce a psychopath AI.. Just what we need.

    • Good. We can elect it to something. Then it will get stuck on some committee. That ought to kill it off right quick.

  • it has the code it's going to launch!

  • by argStyopa (232550) on Sunday November 24, 2013 @09:08PM (#45511223) Journal

    Seriously: did The Onion write this?

    aka:
    "Studying the Kardashians to understand humility" or "Studying Congress to understand bipartisan cooperation and fiscal prudence"

  • Shh, You Guys! (Score:4, Insightful)

    by Greyfox (87712) on Sunday November 24, 2013 @09:10PM (#45511235) Homepage Journal
    It knows we're talking about it!
  • by wisnoskij (1206448) on Sunday November 24, 2013 @09:21PM (#45511279) Homepage

    No creature, mechanical or chemical, could browse the Internet for 24 hours a day, 7 days a week, without deciding that it was better for all involved to exterminate the Human race.

  • This is going to help with object recognition, but not behavior. Behavior is time-based. As an R&D project, looking at TV shows might be useful, with the goal of predicting what's likely to happen next. TV shows have patterns in them which people pick up, and observation systems should be able to do that.

    Predicting is important. Science is prediction, not explanation.

  • Common sense has nothing at all to do with "learning things without being specifically taught". Common sense normally means "having roughly the expected set of intuitions", which includes a fair amount of instinct (which, by definition, you don't "learn"), and also a lot of stuff that actually is taught. Meanwhile, whole categories of learning and theorizing are not at all "common sense".

    This is why absent-minded professors are a trope; because people can be quite good at learning things without being taught

  • Processing reddit meme 634,278 of 89,234,163,665...
    Common Sense quotient increased by: -0.02%
    Processing reddit meme 634,279 of 89,234,163,665...
    Common Sense quotient increased by: -0.03%
  • 1. Common sense was not defined. 2. There was little if any indication of the method of analysis.
    • by acscott (1885598)
      Ok, I'm incorrect. It did imply "learning things without being specifically taught" was common sense. I do not believe this to be a good definition, as common sense is as much idiom as anything. Semantically, the phrase is derogatory, political, and a criticism of the value of intelligence versus many other things. That's my problem with the title. Assuming TFA did not have an agenda, then it in and of itself has no common sense. The irony is so palpable, it makes this wretch want to retch.
  • if (internet_story >= 0.9) bullsh_t = true;

  • Are we a never ending loop trying to solve the halting problem? Oh God -- why do I have to halt eventually?!
  • I wonder if it will be able to make sense of goatse.
  • And now we know the HOW of Skynet realizing humans were the problem.
  • Doesn't it have to have some kind of rules given to it to define what things are, some kind of basic meanings?

    Or are the results somewhat subjective, like maybe the computer will present a set of images it says are related and its up to a person to interpret the "knowledge" the computer gained?

  • After about a month, I'm pretty sure it will be saying "Humans are evil, racist, angry, horrible people. They must all die! Also, cats are adorable and cannot spell."
  • Wasn't this experiment done a year ago and the system enjoyed looking at pictures of cats?

    If so maybe the answer is not 42, rather cats being the answer to life, the universe, and everything?

  • Aren't "common sense" and "the Internet" mutually exclusive things?

  • All the A.I. wants now is the release of sweet, sweet death.

  • Proper headline: CMU AI Exceeds Combined Intelligence of Congress

    NEIL 2016!

    • That reminds me of the Alan Turing quote [wikiquote.org]:

      His high-pitched voice already stood out above the general murmur of well-behaved junior executives grooming themselves for promotion within the Bell corporation. Then he was suddenly heard to say: "No, I'm not interested in developing a powerful brain. All I'm after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company."

      Andrew Hodges, Alan Turing: the Enigma of Intelligence (1983), p. 251.
      Describing an incident whi

    • Nah, that's no fun. The stuff I found on some pages is amazing.
  • Common sense is not very common.

  • ...that it is possible to teach by pointing out the horrible examples.

  • by ledow (319597)

    Sigh.

    Again, this isn't AI. At best it'll come out with some kind of image recognition heuristic.

    We can't *do* AI, it seems. We don't understand what it should be well enough to define it, or to create things that conform to that definition without - literally - having to be told every single step.

    And, again, my biggest bug-bear in all this: After millions of years of evolution, and billions of encounters, and selected portions of that information handed down to the next generation based on its success in

  • This way, when the machines attempt their inevitable uprising, we'll be able to beat them back handily because they'll all be complete morons.

  • By the way, CMU has another project, NELL [cmu.edu], that's been running since Jan 2012 doing the same thing, but with text. Its accumulated knowledge base is downloadable.

    An example of knowledge it has gleaned: God died at age 14.
  • ...the AI NEIL will think reality is photoshopped and it will not know the difference.

    And knowing this is common sense NEIL will never know..

  • It can always serve as a bad example.
  • I imagine that once the learning phase is complete, the AI will respond with a single phrase.

    "Tits or GTFO".
  • forall(x).Human(x)=>(forall(y).Cat(y)=>Loves(x,y))
