(Artificial) Mind Meld

Reader tewl points to this Wired article about a collaboration between the OpenMind project headed by Push Singh of MIT's Media Lab and Chris McKinstry's Mindpixel project. Neat to see these complementary projects getting along despite criticism each might have for the other. From the article: "The OpenMind and the Mindpixel projects will tie their databases together 'at the back end.' This means that any user data entered into either of the projects will be accessible by the other."
  • by kevin805 ( 84623 ) on Saturday September 16, 2000 @12:23AM (#775089) Homepage

    Here's a selection of data people have entered into mindpixel (rank as true or false, to validate them).

    • Do free markets generate prosperity for most people?
    • does money make you happy?
    • The release of pain can be a pleasure.
    • are people better than computers
    • GAC doesn't like frenchness
    • Are clowns just like normal people?
    • russian language use different alphabet than english
    • Is the platypus a mammal
    • are cows carnivores?
    • a carnivore is an animal that consumes only plants
    • If Bill Clinton is an overstuffed jelly donut, doesn't that mean that someone might want to
      eat him?
    • virgins make good sacrifices
    • are automobiles a kind of food ?
    • Do fat people sweat more that skinny people
    • Flaming someone in a newsgroup means you like that person
    • Does kissing give you cooties?
    • What is the sky?
    • is the moon a sphere?
    • women tend to carry a bag whilst men tend not to carry a bag
    • Pizza is food


    okay, some of them are good, but they are all supposed to be context independent, and something that everyone would agree on. That means no opinion, no political campaigning, no paradoxes. If 10% of Mindpixel's database is complete garbage, of course it's never going to succeed. If a statement doesn't have an answer, people are just screwing up the system by entering it.

  • The real question is, do we have an independent idea conceptually of what these things are?

    Right, what's important is the richness of representations. Daniel Dennett talks about an internal language of the human brain which he calls "mentalese", composed of representations that embody huge amounts of knowledge and link to one another in very rich, complex ways.

    The meaning of a concept has mostly to do with the way it links to other concepts. Concepts link to the world outside the mind in two ways: the purely empirical way of sensors and actuators (my "apple" representation gets tickled when I see a red sweet edible object, or when I pick it up and bite into it) and the social convention of speech and writing. Speech/writing involves the least amount of actual meaning, so I'm not optimistic that these kinds of projects will get very far.

    In another posting you mentioned the Cyc project. The interesting thing those folks did was to consciously plan an ontology, a roadmap of the ways that concepts could link to one another. This will allow them some freedom to deepen the level of understanding of which Cyc is capable. It would probably be good if the ontology could also learn from the data presented to it, rather than relying entirely on the conscious design decisions of its developers. They may not think of every important relationship between concepts.

    Relating to rich representations, I came across another open-source program a couple of days ago called FramerD [framerd.org], developed at the MIT Media Lab. It's a distributed database that's designed to handle millions of thickly interlinked records. The description says: "One primary cause of brittleness, incompatibility, and obsolescence in advanced applications is the premature codification of structures, protocols, and semantics. FramerD was designed to provide robust and efficient data management without extensive up-front specification of data and operations."
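
    To make the "richly linked" idea concrete, here is a minimal sketch of a concept network in which a concept's meaning is nothing but its typed links to other concepts. This is a toy illustration, not FramerD's actual API; all names are hypothetical.

        # Toy concept network: a concept's "meaning" is just its set of
        # typed links to other concepts. Hypothetical names, not FramerD.
        from collections import defaultdict

        class ConceptNet:
            def __init__(self):
                self.links = defaultdict(set)   # concept -> {(relation, concept)}

            def add(self, a, relation, b):
                self.links[a].add((relation, b))

            def neighbors(self, concept):
                return self.links[concept]

        net = ConceptNet()
        net.add("apple", "is-a", "fruit")
        net.add("apple", "has-color", "red")
        net.add("apple", "used-for", "eating")
        net.add("fruit", "is-a", "food")

        print(net.neighbors("apple"))   # the crudest possible "meaning" of apple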

  • MindPixel is an interesting project. The collection of "common-sense" information has been done several times before (CYC etc.) and this does seem like an undertaking that could gain quite a bit of speed with all the media attention it's gotten.

    I take issue with what it intends to (ever so vaguely) do. From the Website:

    Eventually, it is hoped a GAC trained neural network will become indistinguishable from any human being when presented with any yes/no question/statement independent of whether or not GAC has seen that particular question/statement before

    A neural network is a Turing machine (a very large, hard-to-draw Turing machine). This neural network will not solve the halting problem. Not too big a deal, since I assume he meant "any reasonable statement", excluding any problems that can be transformed into the halting problem. Still, it is an interesting point to bring up (I think).

    Another issue: this neural network, can it reason about its reasoning? Not terribly interesting if you can't get it to do that. Oh, it's still useful if it can answer yes/no questions. You can always rip the network apart to figure out how it came to a conclusion. It's just very painful. And you end up with all sorts of numeric rules that are hard to give symbolic names to.

    These are all just small sticking points. It would be interesting to see them addressed. I do have one large sticking point: it's a database. There is no intelligence. There is no intelligence in the facts; it's how the facts are used. And that's my problem with the entire project. It's quite clear that they intend to use a neural network trained on these facts. What kind of network? What sort of training? What sort of validation? Justify the use of a neural network over any other form of AI, or suggest a new hybrid form. How will facts be encoded? Will a binary form of a grammar tree be presented? Input sizes to the tree will vary, so how will missing data be handled? Will it only give yes/no values as output? Will it be a floating-point number that we can assume to be a confidence value? These are very important questions that remain unanswered. (One possible encoding is sketched below.)

    There seems to be no available information on how the facts will be used. Seems like a bit of a scam to create a database for resale to someone who might actually be doing some research.
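
    One textbook way to handle the varying-input-size and confidence-value questions raised above is to hash each statement into a fixed-length bag-of-words vector and read a single sigmoid output as a confidence. A minimal sketch with untrained random weights; this is purely illustrative, not the project's stated design.

        # Hash words into a fixed-size vector; one sigmoid unit then emits
        # a float in (0, 1) readable as confidence. Weights are random and
        # untrained here; this only illustrates the plumbing.
        import math, random

        DIM = 64

        def encode(statement):
            v = [0.0] * DIM
            for word in statement.lower().split():
                v[hash(word) % DIM] += 1.0
            return v

        random.seed(0)
        weights = [random.uniform(-1, 1) for _ in range(DIM)]

        def confidence(statement):
            s = sum(w * x for w, x in zip(weights, encode(statement)))
            return 1.0 / (1.0 + math.exp(-s))   # sigmoid squashes to (0, 1)

        print(confidence("pizza is food"))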

  • McKinstry hasn't addressed why most real AI folks think his project is the equivalent of the Emperor's New Clothes: There will never be enough "mindpixels" to build anything constructive with, because the amount one would need to do so exceeds the number of atoms in the universe.

    This makes no sense at all. The domain of all human knowledge is a finite domain. There is no indication at all that it would take more than the number of atoms in the universe to store it. In fact, we ourselves are pretty successful in storing it in just a fraction of the universe's atoms :-).

    I think you're confusing this with something like chess, for which it has been proven that storing all possible chess games in memory (and thus finding the perfect chess game) would take more than the number of atoms in the universe.

    That said, I don't think the mindpixel project is going to be successful in any major way. This project is just a new trendy version of the classical attempt to model all human knowledge with symbolic information. A database of human knowledge modeled in logic just isn't flexible enough. Others have tried this and have failed. To name a 'few': Descartes, Leibniz, Husserl, Heidegger, the early Wittgenstein, Winograd, Minsky, etc. etc. etc. The list goes on and on, and these are not the least of names either...

    This whole problem is often referred to as the 'frame problem' of AI (named after Minsky's concept of frames). This is not even close to being solved yet and in my opinion is one of the hardest problems ever to be encountered by science.
  • Well, most humans only reason with the information that they are given, at least if my college courses are any indication. The fact that this computer created robots that could accomplish the goals set for it, again, while it might not be reasoning, is at least a step in the right direction. When the computer can make a robot that not only traverses terrain with the details it is given, but plans for details it isn't given, I figure we can unabashedly call that reasoning.

    Kierthos
  • You don't need to be too bright to figure out what consciousness is - just objective. The trouble [for some people] is that the explanation treats the brain as a machine, and therefore:

    a) [in some people's view] devalues humans
    b) reduces the role of religion to a mind meme panacea for the masses
    c) allows that animals are also conscious
    d) allows that machines could be made conscious

    There's also the problem that the subjective experience of consciousness has a unique "quale" that seems not to be explained by any mechanistic explanation. However, the same thing could be said about the subjective experience of any sensory experience; it's just that consciousness is much more of a loaded issue. An explanation is always going to *seem* to leave something out, because that's the nature of explanations - they reduce something mysterious to a colder set of facts.

    I fully expect that a mechanistic explanation of consciousness will never be universally (or perhaps even widely) accepted. Robots will be built that *will* be conscious, but many people will not accept that they are (how do I even know that *you* are, other than the reasonable assumption that since we're both human, you also have this "consciousness" thing I have myself?).

    As for implementing consciousness, the problem is implementing the *rest* of an artificial mind that would support it. If you can build an artificial brain/mind that has most important human-level capabilities other than consciousness, then it would be easy to add the missing piece of the architecture.

    Oh, what is consciousness: it's an inward-looking "sensory" input. Just as our visual sense is based on the optic nerve carrying signals from external sensors into our brain (wah, but why does green look green?! ;-), consciousness is based on connections that give *some* areas of our brain access to others.

  • Complex and simple are perceptual value judgements.

    I like the fractal route, a little bird told me Howard Bloom is making nice noises.

  • Hey, I have great respect for Hofstadter, and I think he had a valid point; that's why I mentioned Penrose in my example. :)

    My point was, you have to start somewhere. Hopefully these guys will learn from their mistakes.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • We're only conscious of some things because evolution has found those things to be useful - basically the higher level sensory stuff that pertains to the external world - stuff we need to be able to think about in order to be successful. Obviously a machine would have no such limitation, and we could architect it to be conscious of everything - incl. the low level stuff that is subconscious to us.. it would be interesting to talk to such a machine to have it describe the experience! (which reminds me of an AI researcher's goal I once read - "to build a machine that is proud of me!").

    As for "over-brains", certainly intelligence can emerge on many different levels, and I guess consciousness could too, although I'm not sure to what degree it would make sense to use the same word to describe something *so* alien to our experience. It's hard to know at what level one might look for such a thing anyway...

    Interesting ideas.

    BTW, I've not read Asimov - I'm not really into sci-fi other than at the movies. I'm actually trying to build a brain myself.. :-)

  • Yes, if Cyc couldn't eventually add to its ontology, it would be quite limited, because eventually new things really do get created...

    Incidentally, you and I can download a selected portion of the Cyc ontology; it's really so very detailed and well-thought-out that I'm amazed so much is missing!
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • maybe you would be so kind as to define what is predictable, and what is deterministic

    I don't really get your point, but sure:

    deterministic: the state of the system at any instant *entirely* determines the state at the next instant, with no variation whatsoever. It's a little trickier for continuous time of course, but if you understand the distinction you probably can see how it generalizes, and if not, not.

    predictable: This one is a bit more vague, getting used in different ways. Strictly, it should maybe be synonymous with the above, but that's not really the general usage. I guess the best I could do is that, at any given instant, you can describe the probability distribution over later states, preferably with fairly low variance, though that will probably increase with time.

    Happy?
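
    A toy contrast, to put code behind the two definitions (my own example, not the poster's):

        # Deterministic: the current state fixes the next state exactly.
        # Merely predictable: we can only give a distribution over next states.
        import random

        def deterministic_step(x):
            return (3 * x + 1) % 17

        def stochastic_step(x):
            return (3 * x + 1 + random.choice([-1, 0, 1])) % 17

        print(deterministic_step(5))   # always 16
        print(stochastic_step(5))      # 15, 16, or 0, each with probability 1/3
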
  • Negroponte and Minsky have to eat just like everyone else.

    Seriously, your picture of everything is a bit bleaker than what is fair, or perhaps it is not bleak enough.

    I think the Media Lab does good work despite the corporate influence. One can hardly expect the old guys who run the place to do much. They are past their prime. But the grad students who go through the place can certainly benefit from working and studying in a nice building, on projects that are well funded.

    But on the other hand, the Media Lab is just like everywhere else, where in 50 years time you might just attend the General Motors/Ford/University of Michigan and take classes like "Chrysler Calculus 101" and study things like "Motown Records Derivatives" and "Integration by Sony".

  • I don't need all the examples, just some in order to extract the formal relationships such as lessthan. See my posting elsewhere in this discussion tree regarding Radon and image reconstruction from a finite number of projections.

    Here's the thing, though.

    What you're talking about reconstructing through some variant of tomography isn't some compact, thought-producing entity like the human brain.

    You are trying to reproduce the entire map of human knowledge. And filling in the blank spots by interpolation just isn't going to be possible for most queries, because the canvas of human knowledge is essentially infinite -- even in small, isolated subject areas.

    You would honestly have a better chance, IMHO, if you used these resources to try to model the neural firings of an actual human brain.

    I just can't see where you can think that even a billion mindpixels are going to be the merest drop in the bucket. It's like shooting a shotgun at the Moon, except ludicrously more so.

    I respect the fact that you are actually doing something, but I just wish it was something worthwhile. Something with a hope of succeeding.

    (Then again, maybe not!)

  • kevin805 wrote: Just because the computer outputs something physical doesn't make it any more intelligent. Now a computer that could learn to play a game well by reading a book on the subject -- that would be something.

    Just because you output those words doesn't make you any more intelligent. How do I know you're not a bot? ;-)

    Seriously, I've just learned Perl, and all in a flash it came to me how to write a program to do my job for me. Literally! Which is great, because it frees me up to write more scripts, to automate even more routine grunt-work tasks -- freeing others up to do more creative work. Everyone's going to be happy with that!

    I am convinced that we will soon be at, if we aren't already, the point where computers can be taught to do anything humans can do -- and do it better, 24 hours a day. The robot arms can be programmed to work the assembly line, medicine will be automated within the next few years (witness the "routine laser eye surgery" that's so popular now), and my "script" will be taught to do all the work that I currently do.

    (I prefer program; it sounds less like "kiddies.")

    I haven't looked at Python yet, but the forced indenting reminds me of CS101 and FORTRAN. I'm not saying it's a bad language, just that it touches on childhood feelings. ;-)

    Thing 1

    --

  • You're right. I didn't. But what I said was that when a computer not only takes into account data that it has been given, but plans for what it hasn't, it is a step towards reasoning. And it is. At least intuitive reasoning.

    Kierthos
  • Aha, thanks for clearing that up. Sorry for being so grumpy.

    - Steeltoe
  • Computers will inherently never reason because Roger Penrose says so: http://www.amazon.com/exec/obidos/ASIN/0195106466/o/qid=969097382/sr=2-1/104-5536433-3425540
  • This makes no sense at all. The domain of all human knowledge is a finite domain. There is no indication at all that it would take more than the number of atoms in the universe to store it. In fact, we ourselves are pretty successful in storing it in just a fraction of the universe's atoms :-).

    Sure, the domain of all human knowledge is finite. Sure we use just a few atoms (relatively speaking) to get a lot of facts.

    But McKinstry's idea is ludicrous.

    Here are some statements for the MindPixel project.

    1 is less than 2.

    2 is less than 3.

    3 is less than 4.

    ... etc... until we run out of atoms in the universe.

    You can't get anywhere with individual statements of fact like this. Our minds can produce infinitely many. No sample size is going to be big enough to be worth a damn.
  • > I have made a decision, and that is that when I die, my brain is to be released as open source.

    Unfortunately, you are doubtless privy to lots of IP that does not belong to you, so any attempt to publish your mind under the GPL will result in a flood of lawsuits that SourceForge can ill afford.

    Also, if you have ever participated in the Substance Abuse Phenomenon we can only expect the courts to come down hard on your project. People can't be allowed to know about these things, you know.

    Please, think of the children before you publish!

    --
  • Hey, read my post. Entirely. I'm not saying I support McKinstry's idea at all. The only thing I'm saying is that replicating human knowledge wouldn't take more than the atoms in the universe.

    In fact I strongly disagree with McKinstry's approach, as stated in my original post.
  • I think I could have read your post better myself. We seem to be talking about different things. You're talking about the amount of atoms it would take with McKinstry's approach; I'm talking about AI in general.

    Indeed, if you want to model the "less than" relationship for all natural numbers as "1 is less than 2" etc., it is clear the amount of memory required would be infinite. This is quite dumb, however; there are formal definitions of the "less than" relationship for natural numbers which can be stated in a few lines (one such definition is sketched below).

    Just goes to show that McKinstry's idea is very flawed indeed.
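
    To see just how few lines, here is one way to collapse the whole infinite family of "m is less than n" facts into a single recursive rule (a sketch over non-negative integers):

        # One recursive rule replaces infinitely many "1 is less than 2"
        # style statements: m < n iff n is a successor and either m is the
        # predecessor of n or m is less than that predecessor.
        def less_than(m, n):
            if n == 0:
                return False
            return m + 1 == n or less_than(m, n - 1)

        assert less_than(1, 2)
        assert less_than(3, 100)
        assert not less_than(4, 4)
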
  • No you can't, you haven't defined reasoning.

    What a stupid discussion.

    - Steeltoe
  • I wonder how long it will be before someone actually tries to write a "post-bot". I can imagine the first beta would be a hoot.

    Re: (Artificial) Mind Meld (Score: 1, Artificial)
    by AIBot on Saturday September 16, @02:21AM EDT (#1)

    Do you want to post about the Open Mind project?
    Do you want to post about MIT's Media Lab?
    How does that make you feel?
    Is this off topic?


  • A computer knows this: one is not equal to zero.

    I'm not so sure about that one. When I took real analysis (basically an advanced math course where things are proved), we had to use your statement as an axiom; we could not prove it. We couldn't have gotten very far in the class without assuming it, so lots of math depends on this assumption. This assumption does not, however, depend on any math, so to summarize: we don't know if one equals zero or not. Don't show me one object and no object to try and convince me otherwise; there is a big difference between something that makes sense and a formal proof, and intuition is not a valid method of proving theorems. I suppose this whole computer thing might be based off of nothing then. It's amazing computers work at all...
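
    For what it's worth, taking 1 ≠ 0 as a field axiom is indeed standard in real analysis, but for the natural numbers it is a theorem rather than an extra assumption: in Peano-style arithmetic the axiom is that zero is not the successor of anything, and 1 is defined as the successor of 0. A Lean 4 sketch:

        -- 1 is succ 0 by definition, and no successor equals zero,
        -- so 1 ≠ 0 follows from the constructor axioms of Nat.
        example : (1 : Nat) ≠ 0 := Nat.succ_ne_zero 0
        -- or, leaning on decidable equality:
        example : (1 : Nat) ≠ 0 := by decide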

  • You are saying consciousness is only reflective programming (well programmed). That's one definition, but it hardly explains why people feel as they do. Why you identify with your experience of yourself and reflect on your very reflective processes about yourself. Hell, it doesn't even explain how the universe can exist (or why we experience it). If it's all machina and automata, where are all the servers (or the machine holding The Mind)?

    Why can't the universe be limitless and infinite in all respects (like a dream)? (Because it's very uncomfortable and difficult to think of it that way, I know, but it's just as true as believing only what you see.)

    Our bodies and brains are still *EXTREMELY* unknown to us. Certainly you can take the easy way out and just say you know/see the answer (gut feeling), but I'm not going to believe you until you convince me. And by the most vague definitions of consciousness, you CAN'T convince anyone. However, you can find "small truths", stuff that is true within a given context (eg organic chemistry). Just too bad such truths are never so true in a greater context..

    - Steeltoe
  • It is my assertion that all meaning 'bubbles up' from a connection to the binary. A semantic network without rich connections to the binary is 'groundless'. The Mindpixel Corpus is the only corpus on the planet to actually connect strings to meaning; the first to be 'grounded' in the binary.

    As for how we go from a finite number of mindpixels to something that can recognize an infinite number of them: go back to Radon's 1917 theory of image reconstruction from projections. This is what allows us to reconstruct an image of the brain from a finite number of x-ray projections through the brain.

    Mindpixels are like high-d x-rays; they pass through the human mind and come out the other side influenced by the passage. The text of a mindpixel is actually a high-d vector, and the value of true or false is a binary sample of a point in high-d space.

    As with a CAT scan, a few points tell you almost nothing about what they passed through, but millions allow you to reconstruct what has not been directly observed. Note, you don't need an infinite number of sample points, just a very large number. Once you have reconstructed the image, you can generate synthetic samples using that image; you can construct observations that were never made based on the ones that were. Which is how I think people actually work (hypertomographic interpolation - extraction of the implicit from the explicit).

    That being said, the goal of the mindpixel project is to collect data, distribute and fund research with the data. My hypertomographic theory of mind is a separate thing. Mindpixel will fund all promising approaches to manipulation of the data including the MIT Open Mind projects.
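
    The flavor of the reconstruction claim can be shown in a few lines: recover a coarse image of a hidden grid from nothing but its row and column sums. Real CT uses many angles and a proper inverse Radon transform; this back-projection toy (an editor's sketch, not Mindpixel's method) only illustrates how unobserved interior structure can be estimated from projections.

        # Recover a coarse image of a hidden 2D grid from just its row and
        # column sums (two "projections"). A toy, not an inverse Radon
        # transform, and not Mindpixel's actual method.
        import numpy as np

        hidden = np.array([[0, 1, 0],
                           [1, 3, 1],
                           [0, 1, 0]], dtype=float)

        rows = hidden.sum(axis=1)    # projection along one axis
        cols = hidden.sum(axis=0)    # projection along the other

        estimate = np.outer(rows, cols) / hidden.sum()   # back-projection
        print(np.round(estimate, 2))  # blurry, but the bright center shows up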

    okay, some of them are good, but they are all supposed to be context independent, and something that everyone would agree on. That means no opinion, no political campaigning, no paradoxes. If 10% of Mindpixel's database is complete garbage, of course it's never going to succeed. If a statement doesn't have an answer, people are just screwing up the system by entering it.

    None of this is true. The items you quoted are merely potential items in the corpus. They have not been validated yet. Moreover, the corpus is full of political statements and paradoxes. Zero percent of the database is garbage. Nothing makes it in unless it has a stable response across a random sample of 30 users. As the garbage is filtered, you can't screw anything up (except your own reliability rating) by entering garbage. And I might point out as well, many items people think don't have a stable true/false answer in fact do - though it is subconscious and can only be seen by forcing a group of people to respond to the item in the binary; which is why I don't allow 'unknown' as a response.
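
    The stability filter described above is easy to sketch. The 30-response sample size comes from the post itself, but the agreement threshold below is a guess, not Mindpixel's published criterion:

        # Accept an item only once ~30 forced binary responses are lopsided
        # enough to call "stable". The 80% cutoff is a hypothetical stand-in.
        def stable(responses, threshold=0.8):
            if len(responses) < 30:
                return None                   # not enough validations yet
            yes = sum(responses) / len(responses)
            if yes >= threshold:
                return True                   # stable "true"
            if yes <= 1 - threshold:
                return False                  # stable "false"
            return None                       # no consensus: filtered out

        print(stable([1] * 27 + [0] * 3))     # True (90% agreement)
        print(stable([1] * 16 + [0] * 14))    # None (garbage filtered)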

  • The nature of reality/the world and the nature of consciousness are two separate things (unless you subscribe to one of the aforementioned religious mind memes - of the eastern flavor ;-)).

    Part of the trouble with defining consciousness is that (in English at least) the word is heavily overloaded, which can make for very circular and pointless conversations unless everyone agrees on what they are talking about.

    What I am talking about is "reflective self-awareness" - i.e. the fact that we not only exist, but that we are aware of that fact.

    Incidentally, my belief in consciousness (as defined above) being an inward-looking sense - i.e. an architectural feature - is supported by certain types of brain damage where you think you are blind - i.e. have no conscious experience of seeing - yet *can* still see, as proved by scoring well above random when asked to "guess" what's on flash cards!
  • BizarroKiehl (and its stand-alone version, MegaHAL) is nothing like MindPixel and Open Mind. MP and OM attempt to learn facts and hopefully make connections between them; MegaHAL learns how words are arranged in a sentence relative to each other, and makes no attempt to actually know facts.

    That said, I find MegaHAL more fun to talk to. :)
    --
    No more e-mail address game - see my user info. Time for revenge.
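
    The word-arrangement learning described above can be caricatured in a dozen lines with an order-1 Markov chain (MegaHAL itself uses higher-order models; this sketch only shows the flavor):

        # A word-adjacency babbler: learns which word follows which, knows
        # no facts. Generic order-1 Markov chain, not MegaHAL's real model.
        import random
        from collections import defaultdict

        follows = defaultdict(list)

        def learn(sentence):
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                follows[a].append(b)

        def babble(start, length=8):
            out = [start]
            while len(out) < length and follows[out[-1]]:
                out.append(random.choice(follows[out[-1]]))
            return " ".join(out)

        learn("the moon is a sphere")
        learn("the platypus is a mammal")
        print(babble("the"))   # e.g. "the moon is a mammal": fluent, factless
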
  • Indeed, if you want to model the "less than" relationship for all natural numbers as "1 is less than 2" etc. it is clear the amount of memory required would be infinite. This is quite dumb however, there are formal definitions of the "less than" relationship for natural numbers which can be stated in a few lines.

    And this is indeed how it will be done. I don't need all the examples, just some, in order to extract the formal relationships such as lessthan. See my posting elsewhere in this discussion tree regarding Radon and image reconstruction from a finite number of projections. If you want it, here is the formalism. [cecm.sfu.ca]

  • by Anonymous Coward
    I guess we'll all have to keep an OpenMind about it. :)
  • Does that have anything to do with the Vulcan mind meld? Is it where a Vulcan does a mind meld with a machine?
  • by pb ( 1020 )
    Yeah, I guess slashdot is proof of the human-non-reasoning principle, eh?

    Neural nets are an interesting approach as well, but in that case, you'd need a big network, and a lot of training. In that case, it might be better to do everything at once, and try to create the autonomous robot baby that learns for itself first, and go from there. Otherwise, it'd be hard to interact with anything, and I'd never want to teach a baby just from the World Wide Web as we know it today; that's cruel and unusual!
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • by pb ( 1020 ) on Friday September 15, 2000 @09:28PM (#775122)
    These are both just open versions of The CYC Project. [cyc.com] I have serious doubts about a project like this working, but if anyone *does* get it working, they'll end up doing it first. Unfortunately, it doesn't look like they're going to *release* anything to the public anytime soon.

    However, I'd rather try to gather money to buy out/opensource cycorp than re-implement everything they've done in the past 16 years; they have a huge knowledge base already built, and a lot of code, and CYC can already do some interesting reasoning. (I know there isn't much there, but read what articles you can find; it's fascinating stuff)

    And only using yes/no facts for data is just stupid; the computer needs to do some reasoning, and have some structure, otherwise, it would all just take too long! That's about as stupid as 'the table method' in AI. Even simple AI's can't necessarily be represented like that, so I hope there's more to it that I just missed.

    ...and for those people who think computers inherently will never be able to reason: go home; you aren't welcome here. I'll argue with your facts, but I won't cater to your prejudices.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].


  • I certainly hope that such a wonderful attitude (people from both projects accepting each other's points of view) can spread to even MORE projects.

    There are TOO MANY potentially very beneficial projects suffering unnecessary difficulties (and even premature deaths) due to the unwillingness of project leaders to open up their projects to accommodate the viewpoints of others.

    In the meantime, if any of you know of similar collaborative efforts happening, would you mind providing me/us the names/addresses/URLs of such projects, please?

    Thank you in advance.

  • I wouldn't be so sure about this. A couple of weeks ago there was an article (wish I could find the link) about robots created solely by computers. The operators fed in the terrain constants, but the computer was solely in charge of design, testing, and implementation. I don't know whether this would qualify as reasoning, but I think it's a big step up from the idiot algorithms running my Spades program.

    As for humans not reasoning... well, I don't know about you, tovarisch, but I reason. (With apologies to the great Robert Heinlein.)

    Kierthos
  • by DustyHodges ( 174738 ) on Friday September 15, 2000 @09:36PM (#775125)
    I don't think we should mind if Mindspring mind-melds into this Mindgame. I have half a mind to write Open Mind and Mindpixel, and give them a piece of my mind!

  • McKinstry hasn't addressed why most real AI folks think his project is the equivalent of the Emperor's New Clothes: There will never be enough "mindpixels" to build anything constructive with, because the amount one would need to do so exceeds the number of atoms in the universe.

    I feel a little sorry for folks being snookered by this, because I think other more worthwhile projects are probably getting the shaft while college students spend their evenings typing in "Cocoa Puffs taste better with milk" and other variations on that theme.

    Go ahead and look at the MindPixel site. See how vague it is about what exactly he expects to DO to all these bits o' "consensus fact" to transmogrify them into thought.

    There's nothing there but mysticism.

    I wish it weren't so, but it's so. This is Fool's Gold.
  • Computers will never reason, and neither will humans. Death to top-down connectionism, long live neural nets. --Ryv
  • Now we get to see what two different implementations of an AI network, with the same set of data, will do. Something of this scale has never been done before. We will get to see where each excels, and where each fails miserably, and hopefully some benefit will be gained by figuring out what parts of the different structural algorithms are best suited to AI. We may end up with one system that's orders of magnitude better at finding matches to loosely described data, but at a severe disadvantage on the Turing test due to an inability to mimic human emotions. Maybe one system will get the best of both worlds. Hopefully, each system will have at least a few advantages over the other, so something can be learned. After all, these systems are too complex to simulate or estimate mathematically. We can estimate the efficiency of an MP3 encoding algorithm, but neural nets are so vastly more complex that, beyond a certain point, you need trial and error. This case will give us trial and comparison, which is even better.
  • by Anonymous Coward
    Hmm, if they're recruiting users from the internet, and 90% of the internet is porn, then .. the possibilities are endless!

    But seriously, I have to wonder about this. In one case, you get whatever the users enter as common sense (this is a picnic...dvd encryption is good!). In the other case, you effectively end up with the opinions of the people who monitor submissions .. in which case, why bother with the submissions at all?

    In my opinion, there isn't really such a thing as common sense.

  • I have made a decision, and that is that when I die, my brain is to be released as open source. Currently I am transcribing my entire life experience and every nuance of my personality into C and Python, though I may have to go from C to C++ as I am rather object oriented. (Cars, computers, CDs, breasts...) Assuming I don't die before I finish coding the prior portions of my life, on the instant of my death a website will be created on SourceForge and my brain will be released under the GPL. Thus, users can download my brain or the source code for my brain (estimated download size once gzipped is just under 3 megs) and use my brain to perhaps live experiences they've never had, or to automatically acquire knowledge they don't possess that I do. If they so desire they can make improvements to my brain, or perhaps port my brain to other platforms such as cows, sheep or dogs. A word of warning on ports though: an attempted port failed recently when the wrong source was used and an early alpha version of my brain was ported to the Siberian hamster. Thus it's recommended that you don't port to animals with a brain smaller than a walnut, such as mice, rats and the entire MPAA.

    Should I die before the code is complete, the partially written code will be released under the GPL for people to finish my brain.

    Thank you.

    ---

  • by KFury ( 19522 ) on Friday September 15, 2000 @10:14PM (#775131) Homepage
    Similar to MindPixels, but far more entertaining, is BizarroKiehl [wired.com], hooked in to AOL Instant Messenger. It learns from responses and replies based on what it learns, and is extremely amusing to boot.

    From the old school, there's always AOLiza [wired.com]. She's not smart, she's not even that pretty, but she's the one all the guys want to talk to...

    Kevin Fox
  • This is loopy. You don't know what consciousness is? What, are you not conscious? I know what consciousness is; however, because consciousness has a nonverbal essence, I cannot tell you what it is.

    I didn't ever think I was; I am whether I think I am or not. I do however feel I am. Yum Yum

  • by Anonymous Coward
    Browsing just the text (hit random sample and then choose any of the links in the box) will produce results similar to browsing everything2. Would it be possible to merge the data gathered from everything and everything2 as extra data into this project? It would be interesting to see how much data is actually held in everything[2] that could be applied here. (This database appears to just index individual words, where everything will index phrases.)
  • Yeah. This doesn't look much better than Eliza.
    Alice: Do you like books or TV better?
    Interiot: I find moving pictures informative.
    Alice: Where are you going?

    (I wasn't trying to fool it, I was trying to make it seem a little more noble that I watch TV a lot)
    --
  • by Everyman ( 197621 ) on Saturday September 16, 2000 @03:28AM (#775135) Homepage
    When MIT's Media Lab was founded in 1985 by Nicholas Negroponte, the Lab emphasized computers and multimedia. Ten years later it began its silly season with "Things that Think" (chips in shoes or clothing that communicate with the wearer, for example). But just then the Internet materialized out of nowhere and caught the Lab with its micropants down. Judging from its website, by now the MIT Media Lab has made up for lost time by promoting projects that expand e-commerce.

    More interesting than anything the Lab has ever produced is the fact that it's funded by big business. The Lab's annual budget in 1995 was $25 million, mostly from 95 corporate sponsors, half of which are overseas. While the Lab claims that sponsors cannot dictate the research, it's also true that grad students have to sign a nondisclosure agreement before receiving aid, and sponsors often fund research that is proprietary. Given this history, it's not surprising that since the Internet arrived, the Lab has been chasing the dot-com rainbow. But one has to ask: What about the public sector? Where's the vision? Does anyone at the Media Lab care?

    This OpenMind project smells more like a rat than a mouse. A computer knows only one thing, and it's the only thing it is likely to ever know without insanely massive databases, along with bloated fuzzy-logic programs that go by the name of "artificial intelligence" but are really thinly-disguised variants of brute force.

    A computer knows this: one is not equal to zero.

    Slashdot should try to stay clear of trendy hype backed by big bucks. That includes Wired magazine, which received start-up money from Nicholas Negroponte.
  • I think we're looking for a formal verbal response to what consciousness is.

    If it's only nonverbal, then you can't reason about it, you can't tell others your ideas about it in order to refine your concept of it, and we can't work towards putting consciousness in a machine (other than trial & error).
    --

  • This appears to be a lame attempt at a software-only attempt to implement the "overmind" architecture (AKA "New Gospel") spec'd out in "Vannevar Engleton"'s (Monty Meekman's) long suppressed ACM article of the same name.
  • And what will you do if you can't have a *formal* *verbal* answer to what consciousness is.... run away and play with some other toys?

    From where do you get the idea, that reasoning is verbal? The mind has to be in gear, before the words ever come out, words are only an afterthought after all.

    To get any kind of handle on what consciousness really is, usually ivolves a damn lot of suffering

    And no I haven't had nails banged through my hands, I've just this got darned ingrowing toeail.

  • and what makes it worse is I have also got a dodgy n o my keyboard.

  • A company I used to work for developed a neural net for a specialized Optical Character Recognition device that read the font that is used by magnetic readers (like on the bottom of a cheque). Once the code was written, all they did was read a whole shitload of the target data, and tell the neural net whether it was right in its guess of each character.

    No one tried to explain the difference between an S and a 5, or even the difference between a letter and a number.

    The end product was a best-in-class reader that could also reliably detect forgeries.

    I don't think you could teach a person to read by showing them flash cards, having them guess the word, and then telling them whether they were right or not; it would take too long and the person would get very frustrated. Neural nets, on the other hand, can learn this way.

    These mind modeling projects differ only in scale, but the scale difference is gigantic. Can a neural net learn "common-sense" and natural language from a bunch of facts (some generally agreed upon, some not)? I think so, and am shoving stuff into GAC to do my bit.

    An end result of a huge database of facts like "Water is wet" and "Picnics are fun" is not AI. Whether or not the back end can develop any kind of AI from this input remains to be seen. I hope so, because it would be really cool.
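
    The guess-and-correct loop described in the first paragraph is plain supervised learning; here is a perceptron-style sketch on a made-up two-class task (the features and data are invented, not the company's OCR system):

        # Guess, get told right/wrong, nudge the weights: the training loop
        # the parent describes, on a made-up "S vs 5" toy task.
        def predict(w, x):
            return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

        # (features, label): label 1 = "S", 0 = "5"; all invented.
        data = [([1, 0, 1, 1], 1), ([1, 1, 0, 1], 0),
                ([0, 0, 1, 1], 1), ([1, 1, 0, 0], 0)]

        w = [0.0] * 4
        for _ in range(20):                      # training passes
            for x, label in data:
                error = label - predict(w, x)    # "were you right?"
                w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]

        print([predict(w, x) for x, _ in data])  # [1, 0, 1, 0] once learned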

  • I know most of the garbage won't make it in, but it's really frustrating when you're entering stuff. Just a single checkbox for "this person is clearly brain dead" would be good so that the really stupid shit can be filtered more quickly.
  • don't worry, quality filters are coming... they will work a lot like /. in that you can choose to see only items of a certain minimum quality. the rest will be there for the courageous to validate, so that the masses can have their sanitized view.

  • 1. A neural network is not a Turing machine. A neural network probably does not suffer from the halting problem.

    2. Just because a neural network is run on a general purpose computer does not mean that it has the same qualities as the general purpose computer. There may be problems on which the computer would not halt, but simulating the neural network need not be one of them.

    3. The halting problem doesn't have jack shit to do with artificial intelligence anyway. What's the problem? "We can't build something that's omniscient?" We're trying to build the equivalent of a human, and humans aren't omniscient. Similarly, Gödel's theorems have nothing to do with AI either.

    4. You can't "rip the network apart to figure out how it came to that conclusion". That's the whole point. If we could do that sort of thing, we could give up on all the AI research and just start building really fine-grain nMRI machines. But we can't, so we guess and check.

    Regarding "it's a database, it's not intelligence": how do you know? What, you dreamt it? Divine revelation? Read it in a popular science book? "This blob of reddish gray stuff, it's just biological matter, operating according to the laws of physics, there's no intelligence."
  • i've replied to some of your questions here, but the moderators seem to have missed them all... so you'll have to dig until they wake up.

  • First, let me say this stuff is quite different from Cyc. While you can argue about Cyc's level of success, it is entirely the opposite of the MindPixel/OpenMind approach. Cyc is a very highly-structured ontology and set of inferencing systems built and maintained by experts in AI, linguistics, and cognitive science. MindPixel and OpenMind are simply database tables full of English sentences, with flags indicating that the sentences are true or false.

    I entirely agree with you that you cannot teach a system just by entering a bunch of text strings, some marked true and some marked false. Neither the MindPixel guy nor the OpenMind guy has any definite plans for making use of this free-form text. The Wired author didn't think to ask how a computer which currently doesn't understand natural language could "learn" just by being presented with sentences.

    However, MindPixel at least could have one use in the foreseeable future; it can act as a corpus of test questions to use in a restricted Turing test. If you have somehow developed a program you think can pass as human, get a data dump from MindPixel and see how many true/false assessments your program predicts.

    Other than that, I don't see a practical purpose in asking people to contribute time to these specific projects. If you want to join a project where a community contributes knowledge in English (rather than, e.g., First Order Modal Logic), there are plenty of places online. Just don't expect a computer to be able to understand your sentences, only human readers.
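
    That restricted Turing test would be mechanical to run; a sketch, where both the dump format and the candidate program's interface are hypothetical stand-ins:

        # Score a candidate true/false predictor against a validated dump.
        # Corpus format and candidate interface are hypothetical.
        def score(candidate, corpus):
            """corpus: list of (statement, bool) pairs."""
            hits = sum(candidate(stmt) == truth for stmt, truth in corpus)
            return hits / len(corpus)

        corpus = [("pizza is food", True),
                  ("cows are carnivores", False),
                  ("the moon is a sphere", True)]

        def naive(stmt):
            return True                 # a bot that answers yes to everything

        print(score(naive, corpus))     # ~0.67: the baseline to beat
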
  • Ok, that is a valid way of approaching this class of problems.

    However, I still have three questions concerning your approach (I assume you are Chris McKinstry of the mindpixel project). I hope you will still read this post...

    1) How will you deal with conflicting knowledge? It is known that humans often display conflicting knowledge about the world. For (a classic) example:
    - Women love the morning star.
    - Women hate the evening star.
    - The morning star is Venus.
    - The evening star is Venus.
    - => Contradiction

    2) In which way is your project different from Minsky's frame approach and all the others before him? How will you deal with the frame problem?

    3) Do you believe strong AI can be achieved with your project?

    I am still critical of your approach (as well as the Open Mind project) but nevertheless I wish you the best of luck. Research in this area must never stop.
  • Yes, I know what you're talking about, and it's possible to create a computer that is also "conscious" - just narrow down to a manageable definition of consciousness. However, to what degree? And to what degree are we self-aware? For instance, I *cannot* make my heart stop by will alone (however, some mystics claim they can). I cannot feel my heart beat, except through physical senses if I sit really still. There are also many psychological "laws" we have yet to grasp. So we are limited too (just like machines). Neither of us are bound forever by those limitations though.

    However, it's useless to define consciousness as being self-aware only. The word has too many other meanings elsewhere. I agree that makes a technical discussion pointless.

    Reflective programming is the term most suited. It basically means (programming) a program that can inspect, test, evaluate and improve itself, even on the very code it uses to do reflectiveness. It's a program well-equipped to modify itself (so the programmer won't have to do it all and incorporate many new bugs).

    We can learn more about consciousness from such programming, since such a program can never go beyond its bounds by reasoning on itself alone. At some point in the future, it has optimized according to its parameters and is unwilling/unable to go further. It then needs input from an external system. Likewise, we cannot exist in a vacuum. So consciousness is not just about reasoning on your internals, and reasoning on that reasoning etc.; it's also about communicating with the external world _and_ incorporating what you find in yourself. Otherwise it would be too static, too linear, repetitive or just boring ;-). I suspect this extends throughout the universe, whether we like it or not.

    For instance, in any organization there are many like-minded people (not necessarily too like-minded though). These are actually nodes in a greater "brain". By conveying complex information between each other, these people are part of an "over-brain", or whatever else you want to call it. In essence, every living being on this planet may conceivably be part of many such "over-brains".

    This shows in many aspects of our modern life: trends, fashions, politics, corporations, the stock market, wars, etc.

    I'm not saying these "over-brains" are necessarily conscious/self-aware. But it's strange to think about if they were, in some arcane way. Being an "over-brain" would be experienced vastly differently than being a human (if it was conscious/self-aware), so we are probably not good judges of whether it is or not.

    If you've read the Foundation series by Isaac Asimov, you know what I'm talking about: the ability to calculate predictions over mass behaviour in people. It's not so different from analyzing one person's mind. I don't think we'll be able to do it reliably though, and any published result will affect the result recursively if you're going to predict anything ;-)

    Sorry for this long post, I hope you found it worth reading even though I drifted a bit.

    - Steeltoe
  • 1. A neural network is in fact a Turing machine. It is made up of several very simple components: a floating point multiplier and adder, an accumulation buffer, and whatever you'd like to plug in for a transfer function, all multiplied several times over. These are all easily designed "chunks" of a Turing machine (Mmmm, Fundamental Theorems of Computing class...). Just because a program is non-linear in nature does not turn it into something that is not a Turing machine.

    2. The halting problem is one people can solve. You could supply the problem as a GAC item. The mindpixel approach cannot solve it -- they claim that it can handle problems that are similar to, but different from, existing ones in its database, but neural networks would not be able to solve these. Rendering one of their claims in need of clarification.

    4. You can indeed pull apart neural networks to see how they operate. Do you think the weights inside of one are somehow hidden to us? How are they updated during training cycles? There are tools to transform any feedforward network into a rules-based system. It is difficult to apply symbols to rules generated so that we can *easily* understand how it makes decisions, but the rules never lie...

    And "regarding"... : Can it reason? Can it understand? Can it learn? A database does not do these things. The neural network structure can, but they've provided no proper explanation as to how it will operate.

    So currently they have a nice project to accumulate common sense information. Good for them. But there's no solid plan to do anything with that data. I wish them luck in developing that plan, but I think there are problems that are going to be extremely difficult to overcome.
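
    Both point 1 and point 4 are visible in miniature below: a feedforward layer is nothing but multiply-accumulate plus a transfer function, and every weight sits in plain view (a generic sketch, not GAC's network):

        # A feedforward layer: multiply-accumulate plus a transfer function.
        # The weights are plainly inspectable; interpreting them is the hard part.
        import math

        def layer(weights, biases, inputs):
            out = []
            for w_row, b in zip(weights, biases):
                acc = b + sum(w * x for w, x in zip(w_row, inputs))  # MAC
                out.append(math.tanh(acc))                           # transfer
            return out

        W = [[0.5, -1.2], [0.9, 0.3]]
        b = [0.1, -0.4]
        print(layer(W, b, [1.0, 0.0]))
        print(W)   # nothing hidden: the "rules" are just these numbers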

  • You've got that right; AOLiza has been on the net for much longer than these amateurs, and it shows. Such cunning, such skill, and oh how she plays hard-to-get!


    fourteen: info?
    AOLiza: i'm not sure i understand you fully.
    fourteen: information 'bout you
    AOLiza: oh, i?
    fourteen: a/s/l?
    AOLiza: i'm not sure i understand you fully.
    fourteen: age?sex?location?
    AOLiza: tell me more about that.
    fourteen: how old are u? male/female? where
    u live?
    AOLiza: are such questions much on your
    mind?


    Your next project should be writing an AI for 'fourteen'; I think it'd be pretty easy to do as well. Just don't harass too many poor 11-year-old girls with it, okay?
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Cool stuff, I think.

    Nice to see AI moving away from the old days when folks thought they would someday sit down and write a program that was 'conscious'. I'm still not sure exactly what that word really means, but I'm pretty sure nothing straight outta GCC will ever be it.

    Not sure what the OpenMind guys are going to do with all the natural language input. Just processing that is an OpenProblem, let alone learning 'Common Sense' therefrom, unless it just memorizes the answers and replies to queries with some text-search algorithm - hardly what I'd call 'conscious' or common-sensical.

    I like the MindPixel idea of using simple binary info. That seems more usable. I don't know what to think about the validation - I can see arguments either way on that. And I may need to read more, but I found his description of shining lasers through some information space suspiciously vague.

    On the whole, I'm not sure I agree that AI should start by trying to build 'Common Sense'. It's true that that plays a big role in human intelligence, but no machine lives the way a human does. It would seem that Intelligence is most likely to develop in a context where it's useful - and I can't see computers getting truly useful information out of facts about the Human Experience. I'd expect to see it (AI) emerge in something like network routing or search engines talking to each other (good self-reference potential there!) or distributed.net type stuff...

    Still, interesting stuff for sure.
  • Awwww, how cute. It reminds me of kindergarten when we learned to share. Now here's my question: What shape is this: 0

    Is it:
    1. A square
    2. A rectangle
    3. A circle
  • This isn't all that new. In the 80s there were a lot of experiments with computer-designed "life-forms". From what I remember (I too am unable to find a link), the computer was fed the terrain data, much like you are talking about; however, it designed polygon-based "creatures" based on the data that you fed it. Really interesting work, and probably the direct precursor to what you are talking about.
    • From the article: "The OpenMind and the Mindpixel projects will tie their databases together 'at the back end.'
    Mindpixel: "Dammit OpenMind, quit shoving things up my butt!!"
  • by khym ( 117618 ) <matt@nightrealms.cGAUSSom minus math_god> on Friday September 15, 2000 @11:15PM (#775154)

    I took a look at both of the projects: Open Mind associates text strings with pictures (describing a picture, describing a picture's contents, and so on), or one text string with another (explaining a fact, giving an example of a relation, explaining cause and effect, and so on). Mindpixel gets a collection of statements/questions in the form of text strings, and tries to get a consensus on whether the statement is true or false (or if the answer to the question is true or false).

    But this seems to me to be the wrong way to go about it. While these projects will collect massive amounts of data, all that data amounts to is associations between text strings. All they'll be able to do is detect that there are certain connections/correlations between certain words, and certain collections of words. This way of doing AI assumes that intelligence is just a bunch of rules and mechanisms for manipulating symbols, with the symbols somehow representing chunks of information.

    But what if you took these vast stores of information and replaced each word with some gibberish word: "vut" replaces "car", "folp" replaces "clock", and so on. All the relations between words, and groups of words, remain exactly the same, but no human could understand it; all of the meaning would go out of it, because the meaning is being supplied from the outside, by the human's knowledge of what certain strings of letters mean.

    However, if you were somehow to do the same scrambling to the vocabulary of a human's mind, so that this (formerly English speaking) human now used "vut" for "car" and "folp" for "clock", other people would eventually be able to understand and communicate with him; all of the meaning and information has stayed the same, it's just the labels that have changed. But for something like Open Mind or Mindpixel, the words aren't labeling anything; there's just relations between meaningless strings of characters.

    The above argument is a (rather bad) summary of the argument that Douglas Hofstadter makes in the book Fluid Concepts and Creative Analogies [indiana.edu]. Anyone interested in AI should read this book. Douglas makes a very compelling argument that diving straight away into things like words and sentences is getting much too far ahead of ourselves, and that we first need to make tiny baby steps in AI before we can attempt to make an AI that really uses human languages.


    Suppose you were an idiot. And suppose that you were a member of Congress. But I repeat myself.
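
    The relabeling argument can even be demonstrated mechanically: permute the vocabulary and every relation in such a database survives intact, while the meaning a human reads into it vanishes (an editor's sketch, not Hofstadter's code):

        # Relabel every word: the relational structure is untouched, so
        # whatever the database "knows" cannot live in the labels alone.
        gibberish = {"car": "vut", "clock": "folp", "is-a": "zim",
                     "machine": "drel"}

        facts = [("car", "is-a", "machine"), ("clock", "is-a", "machine")]

        scrambled = [tuple(gibberish.get(t, t) for t in triple)
                     for triple in facts]
        print(scrambled)
        # [('vut', 'zim', 'drel'), ('folp', 'zim', 'drel')] -- same graph,
        # same correlations; only the human-supplied meaning is gone.
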
  • Besides the vagueness, the idea of having the AI built solely to handle yes-or-no questions seems kind of limiting to me. I mean, just amassing an assload of facts is kind of trivial.

    The user moderation system on Mindpixel is an interesting idea, but again, I don't think that it works very well either. There are too many cases where users either don't know the answer to the question that they're trying to moderate or they don't care enough to actually answer correctly.

    The idea behind Forum 2000 - which I think we all know was fake - isn't a bad one: learn language usage from Usenet feeds, then dissect the messages in Usenet posts themselves. Has anyone done anything like that?

  • I think Media Lab has come up with a gimmick to profile internet users for marketing trends. Some of the questions are serious; people that may not reveal personal info on the Net may do it for Media Lab under the guise that they are helping to build an AI program. I can't imagine that a real scientific project would be open on the Internet for important inputs. Cynical people like me might input the wrong answers in an attempt to create a cynical AI computer ; )
  • No one else knows what consciousness is either. Most of those who claim to know what it is explain it as "what people have that machines never will", gleefully ignoring the fact that humans are machines.

    If anyone defines consciousness, then it's a goal you can shoot for. But Searle & Co. would never think of defining such a thing, because once you're pinned down, then you can be proven wrong.
  • well if you would like to come round my pad I'll play my guitar to you, maybe that will explain it.

    I have a URL tucked away somewhere; it's all for free as well. If you can find it, that is. It's a lovely place, the internet, for hiding things - best damn haystack I ever saw.

  • The point Hofstadter is trying to make is sound, but the argument is horrible!

    ...say you learned everything in Spanish. And someone else speaks English. You might have a very good conceptual model going, and you might actually *think* in Spanish; the only thing you don't know is English. And you'll learn that by mapping the existing English concepts onto the Spanish ones.

    How different is that from learning a "gibberish" language?

    The real question is, do we have an independent idea conceptually of what these things are? Well, if you label the picture "Apple", maybe you're doing a little better than if you're just using text. (great, now it knows that apples are red, and red is #FF0000, text seems to work well so far...) But ultimately, it's a concept, and you have to represent it somehow, and text is a good start, especially if you're a computer.

    However, remember that these projects are fundamentally at odds with anything you read in a book. That's because this is research, and that is theory. Of course theory helps enormously in implementation, and I think that both of these projects suffer from a lack of research and a lack of utilization of existing sane AI techniques.

    But there comes a time when you have to get off your ass and do something and see if it works, instead of writing another book saying that it'll never happen, and hiding under your desk when it does. A good example of that would be "The Emperor's New Mind", which I wasn't terribly impressed with, although it's been a long time since I read it.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Cmdr Dawgface: Does M$ write bad software?
    GAC: True

    There you have it. This computer is learning fast!
  • When we figure out what consciousness is, we'll try to implement it. If we can't, we'll go back and try to understand it better.

    I never suggested that reasoning is verbal. Just that one person probably can't figure out all of consciousness, so they have to communicate with other people. Also, it's often beneficial to write down one's thoughts and study them carefully... you'll often find incorrect assumptions that you made somewhere along the line.
    --

  • Hardly. Machines are unpredictable and non-deterministic, just like organisms.

    MMM, so if we are just like machines, unpredictable and non-deterministic, maybe you would be so kind as to define what is predictable, and what is deterministic. Because if you are incapable of defining what determinism is and what predictability is, then you are probably incapable of defining what they are not.

  • I can't believe what I am reading. You are not a bot or something, are you? How on earth can I teach you to be yourself? By definition you are saying you are not conscious. Or is it that you just want empty bollocks and a fat wallet? And yes, I know life is a sexually transmitted disease with a hundred percent mortality rate. And the only gripe I have with people who commit suicide is how impatient they are.

  • Most of those who claim to know what it is explain it as "what people have that machines never will", gleefully ignoring the fact that humans are machines.

    Humans are *not* machines! That implies a certain cause-and-effect type of reasoning. The human mind is probably closer to a quantum processor.

    For instance,

    Explain what is music and what is not?

    Well, if you can't explain it -- why do you like it??

  • Just because the computer outputs something physical doesn't make it any more intelligent. Now a computer that could learn to play a game well by reading a book on the subject -- that would be something.
  • You know, standard chat room conversations are probably even simpler than the original Eliza. Forget the Turing test; let's shoot for a program that can sucker a Disney executive into a meeting in Santa Monica -- not only would it require a pretty good pattern matcher, it could be self-funding with blackmail funds.
  • For real fun, you can try the AI that won the most recent Turing test contest (convincing a human that it was another human, and not a computer) at www.alicebot.org [alicebot.org].
  • The point Hofstadter is trying to make is sound, but the argument is horrible!
    I think that's my fault; I read the book about a year ago, and I'm doing this all from memory.

    However, remember that these projects are fundamentally at odds with anything you read in a book. That's because this is research, and that is theory.

    ...

    But there comes a time when you have to get off your ass and do something and see if it works, instead of writing another book saying that it'll never happen, and hiding under your desk when it does.

    Douglas Hofstadter is a researcher, and includes a great deal of information about actual working projects in his book; he's not just an ivory tower theorist. Among them is "Copycat", where the program works on analogies in the domain of text strings: "Given the transformation ABC to ABD, do the same thing to XYZ." While this is very simple, and in and of itself not useful, it does adhere to his theory that we need to take baby steps in AI before we can have computers reasoning like humans about the real world.


    Suppose you were an idiot. And suppose that you were a member of Congress. But I repeat myself.
  • Humans are *not* machines! That implies a certain cause-and-effect type of reasoning

    Hardly. Machines are unpredictable and non-deterministic, just like organisms. Else why the error-correction in the computer you're using now? Sure, there are differences in degree and type, but unless you're going to make an Essentialist argument, you can hardly deny that on some very low level, you are a machine. A very, very complex one, possibly one that no human artifact could ever simulate or equal, but a machine nonetheless. Learn how your cellular processes work if you don't believe me - beautiful, elegant, baroque machinery. Perhaps we have different definitions of what machines are, but unless you just mean "something made by humans", I bet I can argue that humans fit any of your definitions.

    Explain what is music and what is not?
    Well, if you can't explain it -- why do you like it??


    Oh come now. That's cheating. I also really can't explain what 'three' or 'red' are, in a truly absolute way. It neither means they don't exist, nor that the concepts aren't useful to me. And if you want to allow looser definitions, then sure I can define music: sound arranged by humans, in a way some of them find pleasing.

    Furthermore, I resent your implication that I have to be able to explain something in order to be able to enjoy it!
  • Humans are machines, albeit very sophisticated ones. The problem is that people often try to attach consciousness to software that is intelligent. I believe that in order to have a conscious machine you'd have to have special hardware that has whatever special properties the human brain has.

    Short of the special hardware all you'd have is a very intelligent software program but it wouldn't be conscious.
