OpenCyc 1.0 Stutters Out of the Gates 195
moterizer writes "After some 20 years of work and five years behind schedule, OpenCyc 1.0 was finally released last month. Once touted on these pages as "Prepared to take Over World", the upstart arrived without the fanfare that many watchers had anticipated — its release wasn't even heralded with so much as an announcement on the OpenCyc news page. For those who don't recall: "OpenCyc is the open source version of the Cyc technology, the world's largest and most complete general knowledge base and commonsense reasoning engine." The Cyc ontology "contains hundreds of thousands of terms, along with millions of assertions relating the terms to each other, forming an upper ontology whose domain is all of human consensus reality." So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"
I Don't Get It (Score:3, Interesting)
Web games much better for collecting this info (Score:5, Interesting)
He's recently been working on a project called Verbosity, which uses such games to collect the same sort of common-sense data that Cyc has been trying to collect all these years. Cyc's ontology apparently contains "hundreds of thousands of terms, along with millions of assertions relating the terms to each other." If Verbosity is as popular as von Ahn's ESP Game [espgame.org], the game could probably construct a better database in a matter of weeks.
Here's the abstract from a research paper [cmu.edu] on the topic:
Verbosity: a game for collecting common-sense facts
We address the problem of collecting a database of "common-sense facts" using a computer game. Informally, a common-sense fact is a true statement about the world that is known to most humans: "milk is white," "touching hot metal hurts," etc. Several efforts have been devoted to collecting common-sense knowledge for the purpose of making computer programs more intelligent. Such efforts, however, have not succeeded in amassing enough data because the manual process of entering these facts is tedious. We therefore introduce Verbosity, a novel interactive system in the form of an enjoyable game. People play Verbosity because it is fun, and as a side effect of their playing, we collect accurate common-sense knowledge. Verbosity is an example of a game that not only brings people together for leisure, but also collects useful data for computer science.
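The agreement idea behind such games can be sketched in a few lines of Python. Everything here (class name, threshold, fact format) is hypothetical, not Verbosity's actual design: the point is just that a fact is only accepted once several independent players have asserted it.

```python
from collections import defaultdict

# Minimal sketch (hypothetical data model): accept a common-sense fact
# only after several independent players have asserted it, the same
# agreement trick the ESP Game uses for image labels.
MIN_AGREEMENT = 3

class FactCollector:
    def __init__(self):
        self._votes = defaultdict(set)   # fact -> set of player ids

    def submit(self, player_id, subject, relation, obj):
        self._votes[(subject, relation, obj)].add(player_id)

    def accepted_facts(self):
        return [fact for fact, voters in self._votes.items()
                if len(voters) >= MIN_AGREEMENT]

collector = FactCollector()
for player in ("p1", "p2", "p3"):
    collector.submit(player, "milk", "HasColor", "white")
collector.submit("p1", "metal", "Hurts", "always")  # only one vote, rejected

print(collector.accepted_facts())
```

A single-vote fact never reaches the accepted set, which is what filters out noise and vandalism without manual review.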
A bit late... (Score:2, Interesting)
http://googleresearch.blogspot.com/2006/08/all-ou
AOL has released interesting data as well...
http://www.techcrunch.com/2006/08/06/aol-proudly-
Conflict of intent (Score:5, Interesting)
Put another way, any complex set of rules will inherently be unable to stay consistent, because eventually the syntax becomes complex enough to state, "The following sentence is false. The previous sentence is true." This occurs regularly in data processing when a given field's syntax (datum value) bridges, or is not defined by, your context (schema).
The real crux is that syntax is inductive: we try to fit each word into a category. Our context (use of language), however, is deductive: we all learn it through experience with a physical world. I have seen this problem over and over as people constantly modify the schema to overcome syntactic limitations. While Cyc is designed to be constantly expanded with new rules, those rules are still syntactical statements.
By Gödel's Theorem, syntactic systems are doomed to fail. Instead, Cyc should be allowed to learn through observation and deduce its own understanding of the world so that it is not bound by any particular syntax. While this could work, it fails the ultimate intent. We want a computer that can both learn and yet not be wrong.
The problem is you can't have that. You can either be syntactically correct, but simplify the model until it works (Physics). Or, you can allow deductions and have to work in the realm of probability (Humans).
Although, I would gladly accept a computer that erred like a human and yet didn't bitch about how it was someone else's fault.
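The consistency problem the comment describes can be made concrete with a toy forward-chainer. The rule names below are invented and this is nothing like Cyc's real engine; it just shows how a rule set quietly becomes inconsistent once two rules can derive a fact and its negation.

```python
# Toy illustration (invented names, not Cyc's engine): a naive
# forward-chainer that halts when it derives both a fact and its
# negation -- the practical symptom of the inconsistency described above.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
                neg = conclusion[1:] if conclusion.startswith("~") else "~" + conclusion
                if neg in facts:
                    return facts, (conclusion, neg)  # contradiction found
    return facts, None

# Two rules whose conclusions clash once both premises hold:
rules = [("observed_in_data", "value_is_valid"),
         ("outside_schema", "~value_is_valid")]
derived, clash = forward_chain({"observed_in_data", "outside_schema"}, rules)
print(clash)
```

Each rule is locally sensible; the contradiction only appears when both premises are present, which is exactly why it surfaces "regularly in data processing" rather than at schema-design time.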
Re:So is Cyclopedia (Score:3, Interesting)
Re:I Don't Get It (Score:5, Interesting)
Even if it could interpret your question correctly, it would most likely not have a local data store with enough unambiguous information to answer any arbitrary question. It could perhaps answer the question "Is a dog a mammal?" as "True", but not anything more complex. However, connected to the 'net and things like Wikipedia (if you trust that information), other encyclopedias, dictionaries, and Google (to come up with lesser-known facts/infobits), you might possibly get it to some sort of rudimentary pseudo-AI which could do as you mentioned in a more general way.
Unfortunately, however, this is still a long way from sentient AI: something you could literally talk to that would be correct on fact-based questions 99% of the time and be able to think abstractly.
Re:Conflict of intent (Score:5, Interesting)
I've followed the Cyc project for a while, and this is something that they've dealt with from the very beginning. The solution is contextualization. The example that they give is "Dracula is a vampire. Vampires don't exist." The solution is what we do -- in this case, breaking apart the contradiction into the contexts of "reality" and "fiction."
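The microtheory idea can be sketched roughly as follows. The class and context names here are illustrative, not OpenCyc's real API: the same assertion holds in one context and not in another, so "Dracula is a vampire" and "vampires don't exist" never meet in the same scope.

```python
# Sketch of Cyc-style contextualization via "microtheories"
# (names invented for illustration, not OpenCyc's actual interface).
class Microtheory:
    def __init__(self, name, facts):
        self.name = name
        self.facts = set(facts)

    def holds(self, fact):
        # A fact is only true *within* a context, never globally.
        return fact in self.facts

fiction = Microtheory("DraculaFictionMt", {("Dracula", "isa", "Vampire")})
reality = Microtheory("RealWorldMt", {("Vampire", "population", 0)})

print(fiction.holds(("Dracula", "isa", "Vampire")))   # asserted in fiction
print(reality.holds(("Dracula", "isa", "Vampire")))   # absent from reality
```

Because queries are always scoped to a context, the knowledge base can hold both statements without ever deriving a contradiction.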
How to make CYC more "human" (Score:5, Interesting)
those concepts interrelate. In other words, it emulates an aspect of the pure rational part of human reasoning about the world.
But it's known that humans are not dispassionate rational agents, and indeed that there probably is no such thing as a dispassionate rational agent. Commander Data and Spock are very ill-conceived ideas of robot-like reasoners. Passion (emotion, affect) is the prioritizer of reasoning that allows it to respond effectively (sometimes in real time) to the relevant aspects of situations. Without the guidance of emotion, no common-sense reasoning engine would be powerful enough, no matter how parallel it was, to process all of the ramifications of situations and come up with relevant, useful, communicable, and actionable conclusions.
So how do we give CYC passion? Or at least a simulation of it?
Well, the key would seem to lie in measuring the level of human concern with each concept, and with each type of situational relationship between pairs (and n-tuples) of concepts.
How could we do that? How about doing a latent semantic analysis of Google search results: something similar to Google Trends, but which specifically measures the correlation strengths of pairs of concepts (in human discourse, which Google indexes). The relative number of occurrences (and co-occurrences) of concept terms in the web corpus should provide a concept weighting and a concept-relationship weighting. If we then map that weighting onto the CYC semantic network, we should have a nicely "concern"-weighted common-sense knowledge base, which should be similar in some sense to a human's memory that supports human-like comprehension of situations.
Combining a derivative of Google search results with CYC is my suggestion for beginning to make an AI that can talk to us in our terms, and understand our global stream of drivel.
I wish I had time to work on this.
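For what it's worth, the co-occurrence weighting proposed above is essentially pointwise mutual information over hit counts. A back-of-the-envelope sketch (the corpus size and all counts below are made up purely for illustration):

```python
import math

# PMI from (hypothetical) web hit counts: a proxy for how strongly
# two concepts co-occur in human discourse. All numbers invented.
TOTAL_PAGES = 1e10          # assumed size of the indexed corpus

def pmi(hits_a, hits_b, hits_ab):
    p_a = hits_a / TOTAL_PAGES
    p_b = hits_b / TOTAL_PAGES
    p_ab = hits_ab / TOTAL_PAGES
    return math.log(p_ab / (p_a * p_b), 2)

# Pair that co-occurs far more than chance predicts:
print(pmi(5e6, 2e6, 9e5))
# Pair that co-occurs exactly as often as chance predicts (PMI ~ 0):
print(pmi(5e6, 2e6, 1e3))
```

A positive score marks a pair people actually talk about together; near zero means the terms are independent. Those scores are what would be mapped onto the edges of the CYC network as "concern" weights.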
Waste of Time and Money. Sorry. (Score:3, Interesting)
Is one to assume that the way to common sense logic in a machine is via linguistic/symbolic knowledge representation? How can this handwritten knowledge base be used to build a robot with the common sense required to carry a cup of coffee without spilling the coffee? And why is it that my pet dog has plenty of common sense even though it has very limited linguistic skills? I think it's about time that the GOFAI/symbol processing crowd realize that intelligence and common sense are founded exclusively on the temporal/causal relationships between sensed events. It's time that they stop wasting everybody's time with their obsolete and bankrupt ideas of the last century. The AI world has moved on to better and greener pastures. Sorry.
Don't be alarmed. Be very, very frightened (Score:5, Interesting)
Don't be alarmed, Arthur Dent. Be very, very frightened.
Human thought is a rather complex thing, one that doesn't always appear to follow logical patterns or rules. Or at least not the simple "if I want X, I must do Y" clear-cut rules that nerds everywhere expect. Human thought is a complex attempt at balancing the priority of not only "I want X", but also stuff like "but it would be socially bad to be seen doing Y", and "I could do Y1 instead, but that's way more effort than I can be arsed to do today", and "it would be nice to have time left to do Z too today, or the missus will blow a gasket", and quite often "actually I don't really want X, I want Z, but it would be uncool to admit that." It's not just following rules and logic; it's trying to fit it all into a complex scheme of priorities, social rituals, and whatnot, most often boiling down to finding the least crappy compromise in that space.
In other words, whenever you find yourself thinking, "meh, people/men/women/engineers/PHBs/whatever are so stupid/illogical/whatever. If they want X, they should just do Y", chances are it's not them who are illogical. It's you who doesn't understand their personal version of that maze of priorities and rituals, or what the real Z is that they're after when they say they want X.
Most of those things aren't even at a conscious level. Even if you poll people along the lines of "if you wanted X, would you do Y?", you'll get an answer that's most often useless. For starters it will be heavily skewed towards what they'd like to think of themselves, not what they'd actually do. Second, without providing a _lot_ of context, it will bypass most of those priorities and rituals that might override that in practice.
What's the point of this whole rant? That the first AIs trained by humans will inherently be a dud.
If you make an AI that functions by precise, inflexible rules, congratulations, you've just programmed OCPD. Literally.
Add a lack of perceptions of human reactions, feelings, body language, etc, and you've given it Autism too. Again, pretty literally.
I.e., I'd expect the first few AIs, or even generations of AIs, to be... well, don't think of the lovable R2-D2 or the essentially human C-3PO, but of an electronic equivalent of the most obnoxious socially-dysfunctional kind of geek.
If you want that as an overlord... I don't know, I hope I'm not around at least.
Re:Conflict of intent (Score:4, Interesting)
Goedel's theorem has nothing whatsoever to do with the practical workability of Cyc's own formal system: if it can prove a fact, it WILL prove that fact with ironclad logic and show you all the steps. That the system might not be able to prove its own consistency is not relevant, though you certainly can check it against other systems. In the end it's down to consensus: "8 out of 10 formal systems agree, one didn't, and one just got confused and started babbling in the corner".
And of course whether it's sound or not is also not a given -- especially if it checks Wikipedia. Though come to think, it might be really good at spotting inconsistencies in Wikipedia articles.
self awareness (Score:3, Interesting)
How about putting that question to Opencyc?
Re:Mining Wikipedia? Yes, we are. (Score:2, Interesting)
We're also working on creating Semantic Web compatible URIs for all of the Cyc terms.
Anyone who wants to join the Cyc Foundation can contact me: johndcyc at cycfoundation.org.
You can also listen in on our Skypecast tonight. It's every Thursday at 9:30pm EST, 8:30 CST. Check the schedule of Skypecasts at Skype.org. We can add you to the chat, but you probably won't be allowed to talk UNLESS you have a USB microphone or headset.
business application (Score:3, Interesting)
So once it gets basic understanding of accounting, inventory, retailing, management, logistics, etc., you could easily build a natural language interface to it: "Three boxes arrived today from supplier X and we paid $90 for them". If there is ambiguity in the sentence, Cyc would ask natural language clarifying questions: "Was each box a line item on the invoice, or were there many line items?"
I think this would be a big improvement over the data interfaces we have today, which are basically graphical recapitulations of paper-based forms in the format of "field: [value]".
Another problem with modern apps is that they all contain their own internal, ad-hoc ontologies. These ontologies are hard-coded, and usually aren't designed to integrate with ontologies in apps from different domains, e.g. logistics and accounting (unless they are from the same vendor). Cyc has a standardized, presumably well-thought-out, and near-comprehensive ontology. It can also grow its ontologies based on user input. So you have this automatic integration feature that's sorely lacking in the end-user computing world.
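The clarifying-question loop described above can be sketched as simple slot-filling. The slot names and canned question here are invented for illustration; Cyc's real NL machinery would be far more general.

```python
# Hypothetical sketch: a receiving event is filled slot by slot, and the
# system asks a follow-up question for any slot the sentence left open.
REQUIRED_SLOTS = ("supplier", "quantity", "total_price", "line_items")

CLARIFYING_QUESTIONS = {
    "line_items": "Was each box a line item on the invoice, "
                  "or were there many line items?",
}

def missing_slot_questions(event):
    return [CLARIFYING_QUESTIONS.get(slot, f"What is the {slot}?")
            for slot in REQUIRED_SLOTS if slot not in event]

# "Three boxes arrived today from supplier X and we paid $90 for them"
event = {"supplier": "X", "quantity": 3, "total_price": 90.00}
for question in missing_slot_questions(event):
    print(question)
```

The parse fills three slots directly and the system asks only about the one genuinely ambiguous slot, which is the interaction pattern the comment proposes in place of "field: [value]" forms.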
Re:AI needs a 3d environment to work (Score:3, Interesting)
Re:Unfairly... You're right! Join us! (Score:2, Interesting)
The Cyc Foundation is a new independent non-profit org. I worked at Cycorp for 7 years before forming the Foundation with a co-founder who has a totally outside perspective. We're very optimistic about the progress being made. We've got about 2 dozen people helping so far, and that's before we've made anything available (such as the Web game we're working on) that will allow for much broader involvement.
Listen in on our Skypecast tonight (every Thursday night) at 9:30pm EST. Look for it on the list of scheduled Skypecasts at skype.org. You can participate if you have a USB microphone or headset.
More than a database (Score:3, Interesting)
Re:Don't be alarmed. Be very, very frightened (Score:1, Interesting)
Good. An AI needs OCPD. A computer cannot be allowed to get bored; it spends most of its time sitting around, waiting for humans to interact with it. An AI that doesn't like waiting is an AI that's fatally flawed.
Add a lack of perceptions of human reactions, feelings, body language, etc, and you've given it Autism too. Again, pretty literally.
Again, good. Autistic people tend to be very precise, reliable, and predictable, except when you trigger an accidental temper tantrum. Don't give the AI the means to have temper tantrums, and you've got a reliable person that's smart enough to understand what you want and to do what it's told, but won't fall prey to all the emotionalism that so strictly limits human potential.
You don't want a computer that can get bored or throw temper tantrums. But you do want one that can deal with unusual crisis situations in a calm and level-headed manner. An autistic person with OCPD linked to stellar job performance is exactly what you need for, say, an air traffic controller. Give the AI risk management and problem-solving skills without all the panic and mental breakdowns that a human controller is subject to, and you'd have something a lot better than what we have now.
Remember: we already have human thought. We don't need machines to think like humans; only machines that can understand and obey humans. We can make humans already.
Re:I Don't Get It (Score:5, Interesting)
Re:Natural Language Interface for Cyc (Score:2, Interesting)
Re:So is Cyclopedia (Score:3, Interesting)
In Cyc (I don't know about OpenCyc) there is a natural language module; I never had the occasion to work on Cyc, and they promised it for OpenCyc 1.0. The goal of it is to be able to feed from a large text corpus, exactly like Wikipedia, full of general knowledge.
The goal of Cyc is to be able to resolve conflicts between two apparently contradictory propositions. Example:
* George W. Bush is the president of the USA.
* In 1790, George Washington was the president of the USA.
Cyc is built with a sense of context. Where a simple NL (natural language) parser would not understand this, Cyc has the following common-sense knowledge, or has inferred it:
- a president is a living human being
- presidency is a mandate limited in time
- only one human being can be president of a given region at one time
- a human being cannot live for two hundred years
- "In 1790" denotes the past
It can now make a series of hypotheses:
GW Bush is a human being.
GW Bush is a living human being.
GW Bush is the current president of the USA.
GW Bush was president of the USA at an unspecified time.
George Washington is still president of the USA (and "in 1790" must be interpreted in another way).
George Washington and GW Bush are both president of the USA (and therefore we must be in an unknown context where the rule of the uniqueness of a president of the USA is false).
etc...
Given its knowledge, it can order its hypotheses from most probable to least probable, according to the cost of the assumptions it must make to maintain each of them. So yes, I would say it can feed from Wikipedia to some extent.
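That ranking step can be sketched as a simple cost sum. The costs and hypothesis names below are invented for illustration; Cyc's real engine uses its own inference machinery rather than hand-assigned numbers.

```python
# Sketch: each hypothesis carries the assumptions needed to maintain it;
# hypotheses are ordered by total assumption cost, cheapest first.
# All costs are invented for illustration.
ASSUMPTION_COST = {
    "bush_president_at_unspecified_time": 1,   # mild: reread tense
    "in_1790_reinterpreted": 5,                # discard an explicit date
    "two_simultaneous_presidents": 9,          # break a uniqueness rule
}

hypotheses = [
    ("Bush was president at an unspecified time",
     ["bush_president_at_unspecified_time"]),
    ("Washington is still president", ["in_1790_reinterpreted"]),
    ("Both are president now", ["two_simultaneous_presidents"]),
]

ranked = sorted(hypotheses,
                key=lambda h: sum(ASSUMPTION_COST[a] for a in h[1]))
for text, _ in ranked:
    print(text)
```

The cheapest hypothesis (the one requiring the weakest assumptions) comes out first, which matches the "more probable to less probable" ordering described above.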
A lesser-known fact about Cycorp is that they work with the NSA on the Terrorist Information Awareness program, already datamining GiBs of natural language data.
as an overlord? (Score:3, Interesting)
And trust me, you _don't_ want an overlord that's inhumanly logical about it. It's that kind of thing that led to such logical solutions as "let's exterminate the population of Poland until 1970 to make room for German settlers." Or such logical solutions as communism. Sure, on paper it's perfectly sound and logical, if you assume that you can change humans overnight. Maybe sometimes being able to understand humans actually helps, eh?
That said, most of the stellar job performance that OCPD cases claim exists only in their own mind.
They tend to never get a job done because it's not yet perfect, for example. I have one two rooms from me at the office, who's taken three fucking years just to get a build script done because everything wasn't perfect enough for him. No exaggeration. Literally. Well, in parallel with building a convoluted unit testing environment, because the existing one didn't satisfy his purist view of the matter. (The old tests had some functional testing too. So his perfect version actually tests less, but is _pure_ unit testing, by his own definitions of it.) Of course, he's convinced that he's done a stellar, uncompromising job, but for everyone else he's just wasted some time and didn't even achieve more than what we already had.
Do I really want that even in a computer? Nope, not really. _The_ problem with most programs nowadays is just that: that they're OCPD nutcases. Workflows that were a lot more flexible (even if not as fast) with a pen and paper, get shoehorned into some lobotomized set of rules that allows no exceptions. The problem is that most often the rules aren't actually what the user wants to do: e.g., you end up unable to save a new client's data until you know their fax number, whereas with a paper form you'd fill in the data you have and leave the rest for later. Often it's more annoyance for the users and more work in workarounds, than doing it without a computer in the first place. (Of course, the equally OCPD-ridden creator will then bitch and moan about "idiot lusers" and how everyone should change to fit his perfect tool, instead of his tool changing to do what the user actually needs done.)
No real qualms with autism on its own, though. They tend to be very good with a computer, or any kind of abstract problem for that matter. (If sometimes difficult to deal with in a team.)
Combine it with OCPD, though, and... well, let's just say that they mix like Ammonium Nitrate and Fuel Oil. You get some of the most obnoxious personalities that way, and it's no fun for anyone involved, not even the geek. The poor bugger can't even tell that he's the one who offended the whole room, and proceeds to imagine that he's the victim of unwarranted cruelty.
Is it just me? (Score:5, Interesting)
Open Cyc and the gate (Score:2, Interesting)
Re:Is it just me? (Score:1, Interesting)
(implies (and (children ?U ?X) (children ?U ?Y)) (or (equals ?X ?Y) (siblings ?X ?Y)))
("If two different children have the same parent, then they're siblings.") However, my understanding is that OpenCyc doesn't include the many thousands of Cyc rules (though ResearchCyc does). Try asserting this rule in your version of OpenCyc and asking the query again. Also, when asking this query, be sure to allow for a transformation step.
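Re-expressed outside CycL, the rule amounts to the following small Python check. This is a sketch, not OpenCyc's inference engine: it hard-codes the one rule instead of performing general transformation steps.

```python
# The CycL rule above as plain Python: two distinct children of the
# same parent are siblings. `children_of` maps parent -> set of children.
def siblings(children_of, x, y):
    if x == y:                      # (equals ?X ?Y) branch of the rule
        return False
    return any(x in kids and y in kids for kids in children_of.values())

children_of = {"pat": {"alice", "bob"}, "sam": {"carol"}}
print(siblings(children_of, "alice", "bob"))    # same parent, distinct
print(siblings(children_of, "alice", "carol"))  # different parents
```

The `(or (equals ?X ?Y) ...)` disjunct in the CycL version is what excludes the degenerate case of a child being its own sibling, mirrored here by the `x == y` guard.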