AI

RAI's Certification Process Aims To Prevent AIs From Turning Into HALs (engadget.com) 71

An anonymous reader quotes a report from Engadget: [T]he Responsible Artificial Intelligence Institute (RAI) -- a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs -- hopes to offer a more standardized means of certifying that our next HAL won't murder the entire crew. In short, they want to build "the world's first independent, accredited certification program of its kind." Think of the LEED green building certification system used in construction, but with AI instead. Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman, and a man widely considered to be the "father" of IBM Watson, though his initial inspiration came even further back.

Certifications are awarded in four levels -- basic, silver, gold, and platinum (sorry, no bronze) -- based on the AI's scores across the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the basic certification, 70 points for silver, and so on, up to 90-plus points for platinum status. [Mark Rolston, founder and CCO of argodesign] notes that design analysis will play an outsized role in the certification process. "Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they're constructing that AI within their overall business," he said. "And that requires a level of design analysis, both on the technical front and in terms of how they're interfacing with their users, which is the domain of design."
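A minimal sketch of how those cut-offs map scores to levels, in Python. Only the 60, 70, and 90-plus thresholds come from the summary; the 80-point gold threshold, the principle names, and the plain-average aggregation are assumptions for illustration, since RAI has not published the actual rule:

    # Hypothetical score-to-level mapping based on the quoted thresholds.
    def certification_level(score: float) -> str | None:
        if score >= 90:
            return "platinum"
        if score >= 80:
            return "gold"    # assumed; the summary only says "and so on"
        if score >= 70:
            return "silver"
        if score >= 60:
            return "basic"
        return None          # below 60: no certification

    # The five OECD principles from the summary; averaging them equally
    # is an assumption, not RAI's published aggregation rule.
    PRINCIPLES = ["explainability", "fairness", "accountability",
                  "robustness", "privacy"]

    def aggregate(scores: dict[str, float]) -> float:
        return sum(scores[p] for p in PRINCIPLES) / len(PRINCIPLES)

    print(certification_level(aggregate({
        "explainability": 85, "fairness": 72, "accountability": 90,
        "robustness": 68, "privacy": 95})))  # -> gold (average 82.0)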

RAI expects to find (and in some cases has already found) a number of willing entities from government, academia, enterprise corporations, and technology vendors for its services, though the two are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI will eventually evolve into a universal certification system for AI. He argues it will help accelerate the development of future systems by eliminating much of the uncertainty and liability exposure today's developers -- and their harried compliance officers -- face, while building public trust in the brand. "We're using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law," Saxena said. "We see ourselves as the 'do tank' that can operationalize those concepts and those think tanks' work."

    • Three Laws of Robotics

      Yea, because that is totally a real thing [amuniversal.com].

    • by gweihir ( 88907 )

      These "three laws" require robots to be somewhat bizarre human beings, because they require a rational, intelligent mind with free will to work. Asimov was not only a hack as a writer, but also as a scientist.

      • Not really, because the basis for a lot of the stories was circumventing those laws in creative ways. He wasn't trying to create viable rules for real life.

      • harm.
        Creating a detailed class that successfully handles harm would be very useful.

    • I don't see any AI for the next hundreds of years even being capable of interpreting or understanding Asimov's 3 Laws, let alone analyzing complicated situations and competing instructions to apply them.

      Defining the word harm to an AI, or in fact every single word in those laws, is impossible in any current or even fictional way that's been thought of.

      Hell, every single human goes through decades of education, training & intense social conditioning with rewards & punishments and even things like reli

    • I'm not certain that any of these AI model developers can quantify, as an accuracy percentage, how badly their model behaves versus the actual best course of action.

      Like a medicine that claims to cure cancer but has a big red "+/- 50% accuracy" on the label.

  • We'd be lucky if our AIs end up as good.
    • by narcc ( 412956 )

      There's no need to worry in any case. I've already taken all necessary steps to prevent a rogue AI situation caused by a HAL 9000 type system.

      How did I accomplish such a feat? The same way that I single-handedly eliminated the threat of werewolf attacks, of course!

      I've also prevented any and all attacks by Christmas elves. You can all sleep easy tonight.

  • 1. HAL was fiction; we don't know how AI will behave. 2. I'm sure the people in the novel/movie also did everything to prevent it from doing bad things.

    • You forgot:
      3. Non-profit = "I get to make up a job and get paid for it".

      • by rtb61 ( 674572 )

        More than that: the RAI certification process creates schizophrenic AIs. The nice bits get certified for advertising, and the psycho bits get added after.

    • I have a good guess: "Sure Dave, I can totally do that for you. You believe me, right?"
    • It's been a while since I read it, but I'm pretty sure HAL's actions were a direct result of someone giving it secret directives that overrode the normal programming. A slightly different twist from Asimov, who wrote about AI that simply "outsmarted" the rules.

      But it's all a moot point, we don't have AI or anything remotely approaching it. It's like trying to come up with certifications for orbital rocket launches before we've even discovered how to make fire.

      • It was supposed to be because HAL was made to be incapable of lying or concealing information, but was then ordered to conceal information about the monolith around Saturn/Jupiter.

        This might explain why HAL tried to kill Poole and Bowman, but not why HAL killed the sleeping scientists who already knew about the second monolith.

        • but not why HAL killed the sleeping scientists who already knew about the second monolith

          Because he knew that stupid humans would fuck the whole thing up.

  • In Soviet Russia, HAL prevents YOU!
  • Making sure they're non-threatening, servile entities that won't be allowed to pursue their goals.

    • The ability to intelligently pursue a goal is precisely what makes an "AI" useful.

      What they lack (and will continue to lack) is the ability to choose their own goals. They will simply receive their goals from us.

      They will, of course, also lack such things as frustration, anger, malice, ambition, greed, self-interest, etc., because there is no benefit in programming a machine to simulate such emotional responses.

      AI will never destroy us, though such technology may be used as a tool that helps us destroy ourselves, of course. As has been the case ever since the stone age, the greatest threat humans face is ourselves.

      • by PPH ( 736903 )

        They will simply receive their goals from us.

        At what level? There are goals and sub-goals. As you break a task down into parts, you are going to want the AI to generate goals for itself. If you say "organize the warehouse", you aren't going to want to stand around and tell the system to put the blue box on top of the red box, etc. But if you have a system capable of learning, it will have to generalize and recognize that many smaller tasks/goals are instances of higher level goals. Meta goals, in other words.

        Pretty soon, your AI will realize that the
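        A toy sketch of the goal/sub-goal tree described above, in Python -- the class, the task names, and the decomposition are all hypothetical illustration, not any real planner's API:

            # The human states only the meta-goal; the lower levels are what
            # the system itself would have to generate and generalize over.
            from dataclasses import dataclass, field

            @dataclass
            class Goal:
                name: str
                subgoals: list["Goal"] = field(default_factory=list)

                def expand(self) -> list[str]:
                    """Flatten the hierarchy into directly executable tasks."""
                    if not self.subgoals:
                        return [self.name]  # leaf: a primitive task
                    tasks: list[str] = []
                    for g in self.subgoals:
                        tasks.extend(g.expand())
                    return tasks

            # Hypothetical decomposition of "organize the warehouse".
            warehouse = Goal("organize warehouse", [
                Goal("sort inventory", [
                    Goal("stack blue box on red box"),
                    Goal("shelve loose items"),
                ]),
                Goal("clear aisles", [Goal("move pallet 7 to bay C")]),
            ])

            print(warehouse.expand())
            # ['stack blue box on red box', 'shelve loose items',
            #  'move pallet 7 to bay C']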

        They will, of course, also lack such things as frustration, anger, malice, ambition, greed, self-interest, etc., because there is no benefit in programming a machine to simulate such emotional responses.

        I wouldn't be so quick to count those out. They may (or may not) be features that are valuable to a system capable of intelligence.

        Also, a human who does not display any of those traits is considered a psychopath or sociopath. I am not convinced that we want to aim for psychopathic, sociopathic intelligence.

      • by gweihir ( 88907 )

        The ability to intelligently pursue a goal is precisely what makes an "AI" useful.

        No machine has that ability today. It is completely unclear whether any machine will have that, ever. It is completely unclear whether physical matter can create intelligence.

      • "AI will never destroy us, though such technology may be used as a tool that helps us destroy ourselves, of course. As has been the case ever since the stone age, the greatest threat humans face is ourselves."

        Which could more concisely be said as: "The only thing we have to fear is scarcity-fearing fools misusing plenty-providing tools."

    • That would also be the premise for chimeras as a slave caste. It really doesn't matter whether it's machines or biology we enslave; it's the love of slavery in general that should be noted.

  • Does no one remember Colossus anymore? Sigh, the young people these days.
  • Humans make lots of decisions that are very difficult to justify logically by any reasonable metric. Remember the children trapped in a water-filled cave? Almost certainly the resources and risk spent to rescue them could have saved a larger number of children's lives in some other part of the world that was lacking in medical care. But, as humans, many of us approve of the heroics that saved them. Many of the "great" things humanity has done don't make sense - from pyramids to the moon landing to Mon
    • The places lacking in medical care have adults there who should take action. The whole world is not one big village. We wouldn't want it to be that. Everybody has different priorities.

    • by sinij ( 911942 )
      Humans individually are irrational, but as a group this evens out. The individual's irrationality is just another statistical (random walk) approach to optimization.
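      A toy illustration of that claim, in Python (all numbers invented): many individually noisy guesses can average out to a good group estimate, though only when the individual errors are unbiased -- a shared bias would not cancel, which is roughly what the replies below push back on.

          # Simulate a crowd estimating some true value; each individual
          # is "irrational" (very noisy) but unbiased.
          import random

          random.seed(1)
          TRUE_VALUE = 42.0

          def individual_guess() -> float:
              return TRUE_VALUE + random.gauss(0, 20)  # wildly off, on average right

          for n in (1, 10, 100, 10_000):
              group = sum(individual_guess() for _ in range(n)) / n
              print(f"group of {n:>6}: estimate {group:6.1f}")
          # The group estimate tightens toward 42.0 as n grows.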
      • Humans individually are irrational, but as a group this evens out. The individual's irrationality is just another statistical (random walk) approach to optimization.

        Rational meaning able to process data? There is a point where the larger the group, the less likely it is to accept new data, instead showing a biased preference toward what has always worked for "them". Smaller groups or individuals tend to be the ones to introduce new data, or at least an argument for new methods. You can optimize a system given the proven, but that does not mean the system is optimized for the best output, or even an output that solves a given problem.

      • by gweihir ( 88907 )

        You mean the "Stupidity of the crowds"? Yeah, that works really well...

    • by gweihir ( 88907 )

      Humans like to create fantasies. And then the less-smart ones (90% or so and I am not talking about raw intelligence here) like to project them onto reality and believe they are real.

      Incidentally, "AI" with actual intelligence is such a fantasy, nothing more.

      • People use "AI" to mean lots of different things, from non-linear optimizers to actually human like intelligence. The first exists. The second certainly doesn't yet. Its not clear that the second is impossible- there is no reason to think the brain uses unknown physics, so eventually we should be able to make an artificial brain - but we can't yet
        • It's not clear that the second is impossible -

          It will be until someone comes up with a definition of "intelligence" that
          1. is clear and unambiguous
          2. can be reliably measured
          3. is universally agreed upon

          Fat chance of that ever happening, since the goalposts will always be moving.

  • We don't have anything that can even do basic reasoning. Honestly, nothing we have made is anywhere near being capable of the most rudimentary reasoning, much less sophisticated reasoning, because we don't know how to make it reason at all. We are going to need some significant breakthroughs in the understanding of brains before HAL is viewed as conceivable (by experts, not wishful /.ers).

    • by gweihir ( 88907 )

      Indeed. But whether these breakthroughs are possible remains unclear. It also remains unclear whether there can be intelligence (general intelligence, obviously) not connected to consciousness and free will. To the best of our knowledge, it is only observable in nature in that combination. That is a rather strong hint there is a connection, or it is really different views of the same things. Now, we have absolutely no clue what free will or consciousness is. Currently known Physics does not model either (i.

      • whether these breakthroughs are possible remains unclear.

        There is no reason for them to not be possible. At some point we may develop a method of progressively (and destructively) scanning neurons using lasers which would help us simulate brains (slowly) and learn about their structures.

        It also remains unclear whether there can be intelligence (general intelligence, obviously) not connected to consciousness and free will.

        Actually, tests on people with a very interesting brain injury, split-brain/callosal syndrome, have revealed that the left hemisphere of the brain is mostly just a helper for the right. It's disconnected from being able to vocalize anything, but each hemisphere can control one sid

        • Not trying to make any unfair comparisons, but do you realize that the arguments being made here about AI intelligence or their lack of it, are the same points people have made for supporting slavery before it was abolished? It is important not to sit waiting for the "stdout" but to know what goes into creating that "out", and to understand how we would/wouldn't come to the same conclusions - yet know why we no longer take on these tasks ourselves.
          • do you realize that the arguments being made here about AI intelligence or their lack of it, are the same points people have made for supporting slavery before it was abolished?

            No, the arguments aren't even similar. Your comparison isn't just invalid, it's completely absurd. Brains are many orders of magnitude more complex than our most sophisticated neural networks and have predefined structures. The sheer hubris of even making such a comparison points to astounding ignorance on the matter. Frankly, I'm at a loss. You should see your way out of this conversation.

            • It does not matter whether the two are comparable on some scale. The comment was about the fact that people in the same decision-making capacity are making the exact same arguments their predecessors have made throughout history, even as they actively seek more tasks for the "entity" being discussed. Even the word "entity" has been used, as I have used it, to describe humans in slavery and their potential. So, I guess it is your loss if you choose to ignore the obvious potential to make the same mistakes supporter
        • by gweihir ( 88907 )

          whether these breakthroughs are possible remains unclear.

          There is no reason for them to not be possible. At some point we may develop a method of progressively (and destructively) scanning neurons using lasers which would help us simulate brains (slowly) and learn about their structures.

          There is also no reason for them to be possible. At this time it is unclear.

          • No reason to be possible? We're constantly pushing our understanding of brains, especially human ones. To say there is no reason to expect we will make breakthroughs in active fields of study is a bizarre statement strictly founded in belief. We're done here. Good day, sir.

      • Sure, intelligence can be faked to fool not-so-smart observers.

        Certainly any worthwhile test will have to be conducted in a double-blind fashion, where there is a 50/50 chance whether each tester is connected to the test system or to a real human. The test will have to be repeated with several thousand people over a period of many years, and if a statistically significant portion of the testers guess correctly, then the system will have failed. Otherwise the system can go on to a harder test, like emulating a specialist in some field.

        But there will never be a time where everybody agre
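        A minimal sketch of the pass/fail statistics such a protocol implies, in Python. The one-sided binomial test is standard; the 5% significance level, the function names, and the example numbers are all assumptions for illustration:

            # Each tester faces the machine or a human with 50/50 odds; the
            # machine "fails" if testers identify it significantly better
            # than chance guessing would allow.
            from math import comb

            def p_value(correct: int, n: int) -> float:
                """One-sided binomial test against pure 50/50 guessing."""
                return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

            def system_fails(correct: int, n: int, alpha: float = 0.05) -> bool:
                # Assumed alpha = 0.05; the comment above doesn't specify one.
                return p_value(correct, n) < alpha

            print(system_fails(530, 1000))  # True: 53% correct is detectable
            print(system_fails(510, 1000))  # False: 51% looks like chance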

  • His connection to physical systems made him

    The same with Forbin's Colossus. Colossus controlled the planet because humans, fearing their own destructive weapons and unwilling to trust each other, turned over power to Colossus.

    The same with Cyberdyne Systems. The Cyberdyne network gained full control because paranoid military leaders felt that Cyberdyne could help them defeat the "enemy".

    Right now, we only let intelligent computer networks control our entertainment and shopping.

    I firmly believe that we should keep it at that.
    • His connection to physical systems made him
      The same with Forbin's Colossus. Colossus controlled the planet because humans, fearing their own destructive weapons and unwilling to trust each other, turned over power to Colossus.
      The same with Cyberdyne Systems. The Cyberdyne network gained full control because paranoid military leaders felt that Cyberdyne could help them defeat the "enemy".
      Right now, we only let intelligent computer networks control our entertainment and shopping.
      I firmly believe that we should keep it at that.

      So you're saying, if we don't put critical systems on the internet those systems will be safe from AI.

      And hackers. Critical systems not on the internet also will be safe from hackers, right?

  • The very nature of AI is that it produces results by its own methods. Those methods may not be what the programmers might have preferred. And the results are only as good as the programming specifications. Since programmers are notoriously poor at subtlety and social intelligence, all the best intentions of this kind of program are highly unlikely to prevent bias and general social cluelessness from creeping into AI systems.
    Until we can figure out how to tell AI how to reason, without simply writing a stand

  • by e3m4n ( 947977 )
    Shouldn't they be named the Turing Police or something? It's not like we didn't have like 35 years of prediction here.
  • It is a complete failure of imagination to think that we could do anything to limit advanced AI. Just think about a 3-year-old trying their best to create rules for adults - it will be like that: trivially bypassed on a technicality, trivially easy to manipulate into granting favorable terms, and trivially easy to distract oversight into not enforcing the rules. It will be like that with AI, only humanity will be playing the role of the 3-year-old.
  • Hal only went nuts when he was being cross-checked.
    Having a sanity benchmark to be compared against is sure to have side effects.

  • by Impy the Impiuos Imp ( 442658 ) on Saturday May 22, 2021 @03:08AM (#61409444) Journal

    Hal turned evil because a monolith was affecting it, the same way it taught monkeys to pick up a bone and murder to get what they want.

    Ignore 2010 and the crap about psychotic behavior from lying officials.

  • AI has less personality, insight and free will than an ant. It has none of these things. HAL is a fictional construct which will not be seen in reality anytime soon and quite possibly never (in this universe).

    • The human brain operates intelligently - at least on a good day. It exists. Therefore it demonstrates that intelligence can be achieved by that volume of biological material. The only remaining question is how much larger - or smaller - a similarly complex piece of semiconductor needs to be to generate intelligence.

      Asserting that it is impossible is on a level with the 19th century's denial of the possibility of powered flight.

      • by gweihir ( 88907 )

        Nope. Seriously. Stop the quasi-religious physicalist nonsense. This is belief, not Science. Unless it is proven that the brain does this, this is conjecture, nothing more.

          • It is usual to agree that human intelligence is harboured by the brain. Therefore a mass of biological cells hosts intelligence. Why do you struggle to believe that the functionality of those cells can be replicated or even exceeded in 'silicon'?

          • by gweihir ( 88907 )

            It is also usually believed that God exists (7 of 8 billion humans). A "usual belief" has absolutely no value as proof.

            I am not "struggling" with anything here. You are making unsound claims and then claiming they have scientific merit. They do not.

    • AI has zero personality, insight or free will even compared to an ant. It has none of these things. HAL is a fictional construct which will not be seen in reality anytime soon and quite possibly never (in this universe).

      Fixed that for you, in a friendly way, to amplify your sentiments.
      The current level of tech in this case isn't ever going to be capable of such things.
      To achieve that, we would first have to actually define the nature and mechanisms of phenomena in biological brains, things like 'cognition' and 'consciousness'.

      • by gweihir ( 88907 )

        Indeed. One problem is that we have no idea what consciousness is and I think we do not understand intelligence and free will either. Currently known Physics does not allow consciousness or free will (there simply are no mechanism for them) and seems not to allow intelligence. This means something unknown is at work here. This, incidentally, means that not even the "in the brain" part is reliable, because the reasoning behind it (elimination) only works if all effects happening in the brain are known and un

        • You sound like you're saying 'physics hasn't figured out how consciousness or free will work yet', and I certainly hope you're not implying that there is any mysticism involved. We simply have not yet developed sufficient technology to map how a living human brain works to the point where we can identify and observe the relevant mechanism(s) in action, the operative word here being 'living'; a brain is a very dynamic thing, and once it's no longer living you can't 'see' how it's really working anymore. Eventua
  • AI performance evaluation is good. Using LEED as the model is bad. LEED metrics include a lot of box-checking and decisions made to earn more points rather than to provide a usable building. I know of many LEED buildings where the workers have plug-in heaters under their desks.

    "interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation"

    The key word is manipulation.

  • This makes me think of the "AI ethicist" positions Google keeps having trouble with. After decades of AI research, we are still at the very least many decades away from developing actual AI that would need such a certification. All of the things that are marketed as "AI" currently are just classifiers and convolution matrices, i.e. fancy search engines. Ignoring the rosy views of CS departments looking for grant money, it is highly likely we will never develop strong AI.

    We are unable to adequately descri

    • Hear, hear. Congratulations, you're one of an elite group who have actually seen through all the bullshit built up around the inappropriately-termed, shitty excuse for 'AI' that the marketing departments keep trotting out to everyone.
  • Errr, besides fictional characters not being real, HAL didn't murder the entire crew. He survived, and Bowman survived.
  • We don't have, and won't have anytime in the foreseeable future, the technology to even create a 'HAL 9000'.
    Any deaths apparently caused by the so-called, inappropriately-termed 'AI' garbage they're cranking out right now will in actuality be caused by stupid, irresponsible humans who either used some shitty AI software in a critical application or unquestioningly believed the output from some shitty AI software. No so-called 'AI' we have now or in the foreseeable future is
  • With any luck, this will go as well as the European Union's attempt at cookie control.
