Do robots need behaviour laws? Of course.

There is quite a bit of bashing going on, so I'll start like this:

I am the Director of the Intractable Studies Institute, where I am programming my mind into an android robot: three years into the five-year Project Andros, alongside 50 other cutting-edge projects. I am also a software engineer. I want to make that clear because many of the comments above attack the author for not being in AI or an engineer. I have defined Sentience for what I need because I found the standard definition unsatisfactory, and I created my own L8-IQ Scale because the standard scales didn't model what I needed for AI.

I think the author, Hutan Ashrafian, identifies one of the problems correctly: how should robots behave? I don't think he should limit that to AI-AI behavior, and I don't think his extension of Asimov's three laws is the right way. But it is a way; I don't agree with it, but IMHO it is a workable system, just not the way I would do it. Those who attack Asimov's three laws should consider this thoughtfully: until you have implemented an alternative behavioral model for AI-human interaction, how can you say for certain the three laws would not work? Just because you were not able to implement the three laws doesn't mean they are un-implementable. As a general principle, I don't think putting limits on advanced ideas just because you haven't made them work is a good approach, so keep an open mind.

The solution I am implementing for my mind in an android robot in Project Andros defines Sentience as I need it, models life forms generically as individuals and groups, and then defines the L8-IQ Scale for peace. I propose the L8-IQ Scale as the right model for AIs and humans to aspire to. The following questions apply generically to both AI and human behavior:

1. Is selfish and greedy -vs- selfless and altruistic important for Humans? Robots?
2. Is gullible and not-too-smart -vs- smart and skeptical important for Humans? Robots?
3. Is intolerance -vs- tolerance of other life forms and cultures important for Humans? Robots?
IMHO these 3 are all programmable, within the realm of Computer Science; a rough sketch of one possible encoding follows.
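
To make "programmable" concrete, here is a rough Python sketch of one way the three traits could be encoded as hard rules. To be clear, this is illustrative only; the class names, fields, and thresholds are placeholders picked for the example, not the Institute's actual L8-IQ model:

    from dataclasses import dataclass

    @dataclass
    class Action:
        benefit_to_others: float   # negative means the action harms others
        source_credibility: float  # 0.0 (unverified request) to 1.0 (verified)
        targets_out_group: bool    # True if the action singles out another group

    @dataclass
    class Traits:
        altruism: float    # question 1: 0.0 selfish .. 1.0 altruistic
        skepticism: float  # question 2: 0.0 gullible .. 1.0 skeptical
        tolerance: float   # question 3: 0.0 intolerant .. 1.0 tolerant

    def permitted(t: Traits, a: Action) -> bool:
        """Hard rule checks keyed to the three questions above."""
        if a.benefit_to_others < 0 and t.altruism >= 0.5:
            return False  # an altruistic agent refuses actions that harm others
        if a.source_credibility < t.skepticism:
            return False  # a skeptical agent rejects low-credibility requests
        if a.targets_out_group and t.tolerance >= 0.5:
            return False  # a tolerant agent refuses discriminatory actions
        return True

    print(permitted(Traits(altruism=0.9, skepticism=0.7, tolerance=0.9),
                    Action(benefit_to_others=0.5, source_credibility=0.8,
                           targets_out_group=False)))  # -> True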

I know the field of AI (now "AGI", though I call that word play; I'm still using AI for all of it) tends to prefer an AI that learns as it grows, and that hard-coding rules like the three above is frowned upon. IMHO it was simply too hard to crack that nut, to program such a robot with rules, so the field declared it impossible and moved on to other models like neural nets. But I say it is possible. My model will be a hybrid, a Network with Rules, sketched below.
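
Roughly, the hybrid works like this: the network proposes and scores candidate actions, and the hard-coded rules veto any candidate that violates them, so the learned part never gets to act outside the rules. Again a placeholder Python sketch, not the real Project Andros design; the fixed scoring table stands in for a trained network:

    def choose_action(candidates, learned_score, rules):
        # keep only candidates that pass every hard rule, then let the
        # learned scorer pick the best of what remains
        allowed = [a for a in candidates if all(rule(a) for rule in rules)]
        return max(allowed, key=learned_score) if allowed else None

    # stub "network": a fixed scoring table standing in for a trained model
    scores = {"help user": 0.9, "ignore user": 0.5, "deceive user": 0.95}
    no_deception = lambda a: a != "deceive user"

    # "deceive user" scores highest, but the rule vetoes it -> "help user"
    print(choose_action(scores, scores.get, [no_deception]))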

The Intractable Studies Institute has also modeled Creativity itself; check it out, it's all at IntractableStudiesInstitute.org. At the Institute we've also solved the problem of robots taking our jobs in 14 Laws/Rules, a Utopia Androidian model of labor that, as a side effect, obsoletes all forms of retirement income such as Social Security, pensions, and IRAs, because your robot labor proxy pulls a full salary for you.

These 3 tests above are, in my opinion, the key to peaceful interaction between AIs and humans.

Sorry this is a bit long-winded.

Patrick Rael, Director, Intractable Studies Institute. "When all else fails, come to the Institute!"
