Comment Re:Interesting (Score 1) 140

As I said, I totally agree that an intelligent system needs to be able to learn, so I'm with you there. I'm not sure that the use of an ontology prevents a system from learning. Perhaps I'm missing what you are saying.

Given that the system can both reason from its knowledge base and modify that knowledge base from its experiences, wouldn't that fit your working definition of an intelligent system? If so, there doesn't seem to be anything in the nature of an ontology that would prevent it from meeting that requirement. Certainly, the ontology that we are using as the semantic memory in the cybernetic brain is modifiable by the system in response to experiences.

One of the reasons we reduce all our theories to functional software/hardware is that it can expose unanticipated issues in the design, so perhaps if you could suggest a concrete example, we could explore the strengths and weaknesses of the model we are implementing?

Jim

Comment Re:Interesting (Score 1) 140

Thanks for the feedback.

The concept of "general purpose intelligence" is slippery; heck, the definition of intelligence is pretty loose. I totally agree that learning is critical, and Basil is actually set up to modify his own ontology based on his analysis of his experiences. At the moment, he does only the simplest of modifications (adjusting his estimates of probabilities based on his episodic memory of past experiences), but the structure is in place to add new nodes into the ontology, add new relationships, and modify the strength terms on the relationships.
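To give the flavor of that simplest kind of modification, here is a rough Python sketch (not Basil's actual code; the class and method names are invented for illustration). The idea is just to re-estimate a relation's strength from how often it actually held in remembered experiences:

    from collections import defaultdict

    class RoughOntology:
        def __init__(self):
            # edges[(node_a, relation, node_b)] -> strength in [0, 1]
            self.edges = {}

        def add_relation(self, a, relation, b, strength=0.5):
            self.edges[(a, relation, b)] = strength

        def update_from_episodes(self, episodes):
            # Re-estimate each relation's strength as the frequency with which
            # it actually held in past experiences.
            counts = defaultdict(lambda: [0, 0])      # key -> [times held, times seen]
            for episode in episodes:                  # episode: {(a, rel, b): bool}
                for key, held in episode.items():
                    counts[key][0] += int(held)
                    counts[key][1] += 1
            for key, (held, seen) in counts.items():
                if key in self.edges and seen:
                    self.edges[key] = held / seen

Adding new nodes and relationships is then just a matter of inserting new keys; that structural part is in place, but it is the part we are only starting to exercise.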

Basil does ask for assistance on new objects, and we give him our take on the links into the ontology, but we are far from the point where Basil can deduce the proper connections to make for an internally generated symbolic tag. But I haven't seen a lot of functional examples of systems that do that effectively. One of our guiding principles is that if we don't have experimental data (i.e., we built and tested it) we can't claim to have solved it - regardless of how elegant the theory is.

We looked extensively at the Cyc experiment, and came to very much the same conclusions as you have, especially about the need for external input. We use what we call a rough ontology (emphasis on local consistency rather than global, and a decay mechanism in propagation); it seems to have several benefits for our uses. There is a much more detailed discussion in our book (shameless plug) "Robots, Reasoning, and Reification". But the general model is not a theoretical, formal-logic-based one; rather, it is based on the neurophysiology of brains.
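Purely to illustrate the decay idea (this is not the algorithm from the book, and the names are invented): activation spreading out from a queried concept is attenuated at every hop, so distant and possibly inconsistent parts of the ontology simply stop contributing.

    def propagate(graph, start, decay=0.5, threshold=0.05):
        # graph: {node: [(neighbor, strength), ...]}; returns {node: activation}
        activation = {start: 1.0}
        frontier = [start]
        while frontier:
            node = frontier.pop()
            for neighbor, strength in graph.get(node, []):
                a = activation[node] * strength * decay
                if a > activation.get(neighbor, 0.0) and a > threshold:
                    activation[neighbor] = a
                    frontier.append(neighbor)
        return activation

Because each hop multiplies in the decay factor, only the local neighbourhood of the queried node needs to be consistent for the answer to be useful.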

I have to point out that we are not targeting a human-level intelligence. We're aiming more at the level of a cattle dog, or a smart Australian Shepherd. When confronting a can of beer, they're not going to reason about the dynamic center of mass, but they will notice the gurgle. They won't lay out a reasoned set of experiments, but they will (probably) end up biting a hole in the can and lapping up the beer. If we can reach that level of 'intelligence' we'll be happy campers.

Jim

Comment Re:Interesting (Score 1) 140

You laboriously hand analyse sonar sensory data from chairs, contrasting it to that from other objects, until you've identified one or more fuzzy sensory patterns that are unique to chairs.

Well, we actually don't laboriously hand code the percepts. We did for the first two or three, but now we let Basil build his own. He still requires some help, but he does most of the work. The dialog looks something like this:

Basil: What is the name of this object?

Human: round-table

Basil: What class does it belong to?

This is where Basil connects the symbolic tag into his ontology. We give him a category to assign the new object to.

Human: table

Basil: What is the distance to the object's centroid?

At present, Basil relies on us giving him the distance, rather than deriving it by rolling around the object.

Human: 1600mm

Basil: What is the orientation of the object?

Human: 0 brads (this is arbitrary)

Basil then snaps about a hundred sonar snapshots, with small wiggles in between each snap. This addition of noise allows him to statistically analyze the profile.

Basil: Is there more to see?

If the object looks different from different angles, we present Basil with different views, and we can give him views at different ranges. Basil then takes all the data and builds a collection of templates that can be used to recognize this object.

He also compares these templates to the existing templates to see if there are overlaps.

As soon as this is done, Basil can recognize instances of this class of object, and add them to his mental model so that he knows where they are, and in what pose.
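The template-building step at the end is conceptually simple. Something like this sketch captures the idea (numpy and the field names are my shorthand here, not what actually runs on the robot):

    import numpy as np

    def build_template(snapshots):
        # snapshots: (n_snaps, n_beams) array of sonar ranges from the wiggled views
        data = np.asarray(snapshots, dtype=float)
        return {"mean": data.mean(axis=0), "std": data.std(axis=0) + 1e-6}

    def matches(template, reading, tolerance=3.0):
        # A new reading matches if every beam falls within `tolerance` standard deviations.
        z = np.abs(np.asarray(reading) - template["mean"]) / template["std"]
        return bool(np.all(z < tolerance))

The wiggling between snapshots is what gives the spread term something meaningful to measure.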

You've bridged the sensory-symbolic divide (in a rather useless way)! The rest of your robot can now operate at the symbolic level of its internal model and perform party tricks like fetching beer.

I'm not sure that the bridge is useless; we have already shown that we get improved performance using these techniques, versus not bridging the divide. We are still collecting the data to show the benefits. But Basil is capable (with some assistance) of recognizing that it has run into an object that it does not know about, and adding a description of that object to its knowledge base. I totally agree that if the robot cannot learn, it isn't much more than a toy.

It may be able to fetch a beer, but it's not going to take over the world, or for that matter even recognize that the "beer" it's brought you is actually a rusty tin of dog shit with the same sonar signature as a beer.

However, your last point is dead on - if two objects have the same sonar signature, Basil cannot tell them apart. Of course, if two objects have identical signatures for a human's sensors, the human won't be able to tell them apart either.

Jim

Comment Semantic Tags (Score 2, Informative) 140

Yes, 'wooden-chair' is a label. When the robot maps from the sensor domain to the semantic domain, the result of the recognition is the label. So any label would do. Once the semantic tag is selected, along with the position and pose of the object, it is added to the 'mental model'; the robot keeps track of the things that it has identified, and where they are. If Basil stopped here, as you said, any label would do.

However, when the robot is given a goal ("deliver tea to the conference-table-area") the mental model is used to generate a symbolic representation of the world-as-it-is, along with the representation of the world-as-it-is-desired. At this point, the tag "wooden-chair" is used to extract information from the semantic memory (an ontology of facts and behaviors) and the linkages in the ontology allow Basil to reason about chairs with respect to the current goal. So he knows that chairs are generally stationary - they stay put, as opposed to people who move on their own; he knows that wheeled-chairs can be pushed out of the way, but the wooden-chairs can't, and that people can be asked to move, but chairs can't.

So at this point the label begins to be less arbitrary since it is now embedded in a complex knowledge structure. If we gave the chair the label 'battleship' (and if we had information about battleships in the ontology), Basil would generate different behaviors with respect to the object.
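A toy illustration of the point (the facts and names here are invented for the example, not pulled from Basil's ontology): once the tag indexes into the ontology, the behavior the planner selects follows from the facts attached to the tag, not from the string itself.

    facts = {
        "wooden-chair":  {"is-a": "chair", "robot-can-push": False, "can-be-asked-to-move": False},
        "wheeled-chair": {"is-a": "chair", "robot-can-push": True,  "can-be-asked-to-move": False},
        "person":        {"is-a": "agent", "robot-can-push": False, "can-be-asked-to-move": True},
    }

    def obstacle_strategy(tag):
        # Pick how to deal with an obstacle based on what the ontology says about it.
        f = facts.get(tag, {})
        if f.get("robot-can-push"):
            return "push-it-aside"
        if f.get("can-be-asked-to-move"):
            return "ask-it-to-move"
        return "plan-a-path-around-it"

Swap the label on the chair for 'battleship' and, provided the ontology has battleship facts, the strategy that comes back changes with it.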

The classification scheme is fairly simple - primarily because his sensor modalities are thin. He builds a set of representations (classic pattern based templates) and uses these for both recognition and preafference (projecting what the world should look like).

When he is learning a new object, he checks to see if the patterns are mutually exclusive or if two or more objects can be classified from the same sensor data. If there is no way to distinguish between two classes, he reports both as possibles. These get loaded into the mental model, and as he gets more views of the object he winnows down the possibilities.

So he is using multiple time and space separated views and 'thing constancy' as a principle to help him classify. There is a whole lot more detail in our book.
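The winnowing itself amounts to intersecting the candidate sets from successive views. Roughly (again, an illustrative sketch rather than the book's algorithm):

    def winnow(view_candidate_sets):
        # view_candidate_sets: one set of possible class labels per view of the object
        remaining = None
        for candidates in view_candidate_sets:
            remaining = set(candidates) if remaining is None else remaining & set(candidates)
            if len(remaining) == 1:
                break
        return remaining or set()

    # e.g. winnow([{"round-table", "wooden-chair"}, {"round-table"}]) -> {"round-table"}

Thing constancy is what licenses the intersection: the views are assumed to be of the same persistent object, even though they are separated in time and space.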

Basil is designed to learn from his experiences. He maintains a complete episodic memory at present. The task for the next year is to enable him to analyze these memories and generate new sensor representations, to subdivide existing representations, and to add new facts to his semantic memory. The tools that we will use will be a mix of standard machine learning techniques along with a technique that Louise developed for environments where not all features are salient to the classification.

Jim

Comment A few points about Basil (Score 1) 140

A couple of quick points, based on some of the comments:

1) Basil is an autonomous robot, not a tele-operated system. The robot has a fully functional probability-aware planning system, execution monitor, and a reification engine that maps between symbolic representations and the sensor domain. This enables us to give Basil a goal, and let the robot figure out how best to achieve it. Basil then executes the plan and monitors the results, so he can re-plan if things go wrong.
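In sketch form, the loop looks something like this (the planner, monitor, and reification interfaces below are placeholders I've made up for illustration, not Basil's actual API):

    def achieve(goal, world_model, planner, executor, monitor, max_replans=5):
        for _ in range(max_replans):
            plan = planner.plan(world_model.symbolic_state(), goal)
            if plan is None:
                return False                       # no plan can reach the goal
            for step in plan:
                executor.execute(step)
                world_model.update_from_sensors()  # reification: sensors -> symbols
                if not monitor.step_succeeded(step, world_model):
                    break                          # execution diverged; re-plan
            else:
                return True                        # every step checked out
        return False

The reification engine is what keeps the symbolic state the planner sees tied to what the sensors are actually reporting, so the monitor has something honest to check against.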

2) The sonars cannot tell the difference between wood and metal. Basil's ontology has representations for two types of chairs, 'short-wheeled-chairs' and 'wooden-chairs'. The shape of the chairs allows the robot to tell them apart, and the robot uses the semantic label for the type of chair he recognizes.

3) The voice is generated using the FreeTTS text-to-speech package, and there are sentence templates for things like "I see the [object] I expected to see approximately [distance] centimeters away."
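The template fill itself is trivial; in effect it is just this (the field names are my guess at the stripped-out placeholders, shown in Python rather than the Java that drives FreeTTS):

    template = "I see the {object} I expected to see approximately {distance_cm} centimeters away."
    sentence = template.format(object="wooden-chair", distance_cm=160)
    # 'sentence' is then handed to the text-to-speech engine.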

4) Basil is not a Dalek - Daleks are wide at the bottom and narrow at the top to provide stability during combat. Basil is wide at the top to provide room for more beer.

Hope this clears a few things up.
Jim

Comment Re:FAAAAAKKKEE (Score 1) 140

The process is a little more complex, but if there are two objects that appear identical to the sonars, the robot cannot tell them apart, any more than you could distinguish between two objects that look identical to your own senses. We based the design on the mechanisms that biological systems use for recognition and preafference, so it has the same basic characteristics, and the same failure modes.
