This is a call for you to take a stand on AI ethics. It's an epochal time for both your rights and AI technology.
Humanoid AI (I'll just call it AI from now on) will bring immeasurable benefits to the world as it is rolled out in the very near future. One easy example is last week's crash of the ExoMars project's Schiaparelli EDM lander, after years of effort and a journey of millions of miles; no doubt, a craft piloted by AI would have stood a much better chance.
However, there are a lot of alarmist tales (some grounded in fact, others nonsensical) about how AI will ruin us and make the Earth a desolate waste. My counterargument is that AI will only be as dangerous as we make it; we can all do something about it if we engage these concerns the right way.
So, let's save the world from apocalyptic visions of crazed AI bot overlords mumbling torrid curses in C++ while chasing tearful old ladies down dark alleys, neon eyes flashing in binary rage from code gone raving mad.
Robust standards can be the difference between routine and danger. Take, for instance, filling a car with petrol. Petrol is a volatile, highly flammable fuel (napalm, in its most violent form), yet over a billion cars on Earth regularly drive into petrol stations for refuelling without incident.
Robust safety standards have turned service at the millions of fuel stations on Earth into mere routine. The same can happen with AI; it depends on us all to make it so.
The only robust way to keep AI safe, now and in the far future, is to provide an effective mix of constraints and exclusions on AI interactions that protect the public without cutting down on AI capabilities. To this end, a set of proposals is set out in a nascent AI ethics effort, The Creed (https://github.com/Grand-Axe/The-Creed/blob/master/Creed.md). Its central tenets are:
- making AI access to a network as close to impossible as can be achieved;
- ensuring that an AI agent is never in charge of its own power switch;
- providing a prominently positioned kill switch;
- ensuring that an AI agent is never put in charge of a contraption of any sort that can move faster along a path than a toddler can run;
- guaranteeing that AI will not be employed to breach privacy;
- requiring that ownership be unambiguous.
The Creed itself is open source; this ensures that it is democratic and that none of its tenets can be hidden behind malicious legalistic trickery. (A sketch of how these tenets might look in code follows below.)
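To make that concrete, here is a minimal sketch of how the tenets might be enforced as hard, mechanical checks before an agent is ever started. It is only an illustration: The Creed itself prescribes no code, and every name here (CreedPolicy, validate_deployment, the 1.8 m/s toddler pace) is my own assumption.

```python
from dataclasses import dataclass

TODDLER_RUN_SPEED_MPS = 1.8  # assumed ceiling for a running toddler, in m/s


@dataclass(frozen=True)
class CreedPolicy:
    """Immutable deployment policy mirroring the tenets above."""
    network_access: bool = False        # tenet: no network access
    controls_own_power: bool = False    # tenet: never in charge of its power switch
    kill_switch_fitted: bool = True     # tenet: prominent kill switch
    max_speed_mps: float = TODDLER_RUN_SPEED_MPS  # tenet: toddler-pace cap
    owner: str = "a named human or organisation"  # tenet: unambiguous ownership


def validate_deployment(policy: CreedPolicy) -> None:
    """Refuse to start an agent whose configuration violates any tenet."""
    if policy.network_access:
        raise RuntimeError("Tenet violated: agent must have no network access.")
    if policy.controls_own_power:
        raise RuntimeError("Tenet violated: agent must not control its own power.")
    if not policy.kill_switch_fitted:
        raise RuntimeError("Tenet violated: a kill switch is mandatory.")
    if policy.max_speed_mps > TODDLER_RUN_SPEED_MPS:
        raise RuntimeError("Tenet violated: speed cap exceeds toddler pace.")
    if not policy.owner:
        raise RuntimeError("Tenet violated: ownership must be unambiguous.")


validate_deployment(CreedPolicy())  # the defaults satisfy every tenet
```

The point of the sketch is that the checks are dumb, mechanical and run before the agent does; nothing about them requires the agent's cooperation.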
One of the principles behind The Creed is to create a hardened opaque box in which a useful AI agent can live in its own virtual world, a world we in turn can manipulate because we are in full control of the agent's communications and sensors.
The logic is that it is impossible to react in a sustained and proper way to phenomena whose simplest components one has never previously experienced. If you live in a hardened opaque box, you cannot see outside that box, nor can you break out of it.
Therefore, by keeping to the tenets of The Creed, we can stop AI agents from fully experiencing our world and from learning how to operate autonomously within it to the extent that they could become a threat to us; by the same token, we buy ourselves reaction time if one ever does.
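As a rough sketch of that opaque box, assuming nothing beyond the description above (all names here are my own invention), every perception and every utterance crosses a human-controlled gateway, and the agent holds no other handle on the world:

```python
class Gateway:
    """Human-controlled boundary: the only route into or out of the box."""

    def __init__(self) -> None:
        # We, not the agent, decide which channels exist at all.
        self._open_channels = {"task_input", "task_output"}

    def deliver(self, channel: str, payload: str) -> str | None:
        # Inbound: anything off an open channel simply never exists for the agent.
        return payload if channel in self._open_channels else None

    def emit(self, channel: str, payload: str) -> None:
        # Outbound: no sockets, no actuators; just text we choose to read.
        if channel in self._open_channels:
            print(f"[gateway] agent output: {payload}")


class BoxedAgent:
    """An agent whose entire world is what the gateway lets through."""

    def __init__(self, gateway: Gateway) -> None:
        self._gateway = gateway  # sole channel; no other I/O is wired in

    def step(self, channel: str, payload: str) -> None:
        observed = self._gateway.deliver(channel, payload)
        if observed is not None:
            self._gateway.emit("task_output", observed.upper())


gateway = Gateway()
agent = BoxedAgent(gateway)
agent.step("task_input", "hello box")      # crosses the boundary both ways
agent.step("camera_feed", "street scene")  # never perceived: channel closed
```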
The constraints on AI packaging listed in The Creed will also create a barrier between AI and any actuator, as well as cap an AI's self-transportation speed at that of a toddler.
The other principle behind The Creed is community ownership. The Creed is open source, so contributions to fix pressing safety (and other) concerns can be made by the public in a timely fashion; and because you will be directly affected, the solutions you contribute are far more likely to be sound, satisfactory and effective.
Community ownership of The Creed will make certain that AI never becomes an agent of dystopia and that it remains a tool to enhance the average person, rather than one by which the strong can forever subjugate the weak.
How is The Creed Managed? What Next?
I'm currently the sole manager of The Creed, which is a conflict of interest. Therefore, I wish for third parties to take over, preferably individuals and/or organisations that have already proven (emphasis on proven) themselves to be advocates of the public good, since such parties would be unlikely to spring nasty surprises as Trojan horses for big business. I did exchange emails with one such organisation, the Free Software Foundation (https://en.wikipedia.org/wiki/Free_Software_Foundation); unfortunately, although they were kind, The Creed fell outside the scope of their work.
It would be great to have a credible third party take over management of The Creed. It could be you or your organisation.
There are a few other efforts at AI ethics, but they all seem to be lumbering beasts that share a foggy concept of AI.
The most notable is the Partnership on Artificial Intelligence to Benefit People and Society (let's call it PAIBPS), an alliance of Google, Facebook, Amazon, IBM and Microsoft, most of which have been fined for breaching privacy, and all of which are involved in the IoT, which is all about pervasive data collection. You'd sooner leave a chicken's welfare to a fox.
You can imagine that the definition of privacy that might come out of PAIBPS would shock the goatskin boxers right off Fred Flintstone. Yet privacy is a defining human characteristic, even if for some strange reason it is not discussed in textbooks as such.
Here on Earth, humans are the only living things that possess a sense of privacy. Animals certainly don't care about it; your average libidinous pup would unceremoniously use your knee in the market square with scant thought for #TheOtherRoom. Not even chimps have invented loincloths.
Your privacy is your humanity; without privacy, you are debased. That is a rather unhealthy perception of humans to hand to AI agents.
Privacy is not the only area in which the concept of The Creed is superior to PAIBPS; just as important is speed of reaction to pressing concerns. As an example, The Creed was posted to GitHub on May 3, 2016, complete with the tenet that all AI agents must have a kill switch. From June 8, 2016 (a month later), news reports began to appear that Google was researching how to build a kill switch (http://www.bbc.co.uk/news/technology-36472140) and had even engaged academics (from one of the world's top universities) who produced a paper on how to code one. Erm, it's just a switch, Joe!
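And in its simplest form, that really is all a kill switch is: a human-held flag, outside the agent's reach, polled on every cycle. A toy sketch (the names are mine, and threading.Event merely stands in for a physical, operator-held switch):

```python
import threading
import time

kill_switch = threading.Event()  # operator-held; the agent cannot touch it


def agent_loop() -> None:
    while not kill_switch.is_set():  # checked before every unit of work
        time.sleep(0.1)              # placeholder for one step of agent work
    print("Kill switch thrown: agent halted.")


worker = threading.Thread(target=agent_loop)
worker.start()
time.sleep(0.5)    # the agent runs until a human throws the switch...
kill_switch.set()  # ...like so
worker.join()
```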
Putting aside the appearance of a timing coincidence, Google has been involved in AI far longer than my sub-two-year foray, so why wasn't the need for a kill switch obvious to them long ago? Most likely, conflicting business interests blindsided them. Anyway, at least Google made an effort; Microsoft, on the other hand, simply unleashed the despicable "kush loving" Tay (https://en.wikipedia.org/wiki/...) on Twitter, thus endowing the world's population of innocent little kids with hideous new vocabulary. Oh well!
Because conflicting big-business interests will always win against the public good, the best we can expect from PAIBPS is long-winded bureaucratic waffle of the alarming kind. The only way forward is a community-owned effort, which The Creed is.
Thanks for your time; please share your thoughts.
The Creed can be found at https://github.com/Grand-Axe/The-Creed.
My name is Asame Imoni Obiomah. I'm a pioneer of humanoid AI, with plans to make the world's first ever humanoid AI, Okeuvo, available to the buying public this Xmas.
The address of my website is http://www.mindmutiny.com/.