Can We Build Ethics Into Automated Decision-Making? (oreilly.com) 190
"Machines will need to make ethical decisions, and we will be responsible for those decisions," argues Mike Loukides, O'Reilly Media's vice president of content strategy:
We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I've suggested the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated... The sheer number of decisions that need to be made means that we can't expect humans to make those decisions. Every time data moves from one site to another, from one context to another, from one intent to another, there is an action that requires some kind of ethical decision...
Ethical problems arise when a company's interest in profit comes before the interests of the users. We see this all the time: in recommendations designed to maximize ad revenue via "engagement"; in recommendations that steer customers to Amazon's own products, rather than other products on their platform. The customer's interest must always come before the company's. That applies to recommendations in a news feed or on a shopping site, but also how the customer's data is used and where it's shipped. Facebook believes deeply that "bringing the world closer together" is a social good but, as Mary Gray said on Twitter, when we say that something is a "social good," we need to ask: "good for whom?" Good for advertisers? Stockholders? Or for the people who are being brought together? The answers aren't all the same, and depend deeply on who's connected and how....
It's time to start building the systems that will truly assist us to manage our data.
The article argues that spam filters provide a surprisingly good set of first design principles. They work in the background without interfering with users, but always allow users to revoke their decisions, and proactively seek out user input in ambiguous or unclear situations.
But in the real world beyond our inboxes, "machines are already making ethical decisions, and often doing so badly. Spam detection is the exception, not the rule."
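To make the spam-filter pattern the article points to concrete, here is a minimal sketch (not from the article): class names, thresholds, and the ask_user callback are all hypothetical, and a real filter would plug in an actual classifier.

from dataclasses import dataclass, field

@dataclass
class Decision:
    item_id: str
    action: str            # e.g. "filter" or "deliver"
    confidence: float
    revoked: bool = False

@dataclass
class SpamStyleFilter:
    ask_user: callable      # callback used when the score is ambiguous
    low: float = 0.3        # below this, deliver without asking
    high: float = 0.9       # above this, filter without asking
    log: list = field(default_factory=list)

    def decide(self, item_id: str, spam_score: float) -> Decision:
        if spam_score >= self.high:
            d = Decision(item_id, "filter", spam_score)
        elif spam_score <= self.low:
            d = Decision(item_id, "deliver", spam_score)
        else:
            # Ambiguous case: proactively seek user input instead of guessing.
            d = Decision(item_id, self.ask_user(item_id, spam_score), spam_score)
        self.log.append(d)  # every decision is recorded and reviewable
        return d

    def revoke(self, item_id: str) -> None:
        # The user can always reverse the system's decision.
        for d in self.log:
            if d.item_id == item_id:
                d.revoked = True

The three design principles map directly onto the three branches: act quietly when confident, keep a revocable log, and escalate to the user when unsure.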
Not really "Automated" if directed (Score:4, Insightful)
I'm not sure I consider Amazon directing you to Amazon products a very good example of "Automation", since that has a giant bias plugged into the engine by Amazon. You are trying to ascribe ethics to a system where humans are obviously in firm and direct control over the results.
To me, considering ethics and automation is more of a general concern where the automation is making derived choices that are pretty far removed from human directives. I think you can build in ethics to try and be kind to people; it's not impossible - but even the choice to try and include some kind of ethical directive is still really at the mercy of humans and how much time and effort they are willing to put into such things...
Perhaps the most effective solution is for some company to come up with a really kick-ass ethical choice helper for automation that becomes so popular that companies are clamoring to include it. Otherwise it will get placed in the same leaky lifeboat that Accessibility is always placed in.
Re: (Score:3)
Trains are "automated" in the sense that you don't need people dragging carriages across the landscape. They are also "directed" in the sense that they only go where the tracks go.
Re: (Score:2)
I don't think it is even legal to do it without a driver.
Re: (Score:2)
In Europe we have plenty of trains without drivers in the subway systems ...
Re: (Score:2)
Look up "the exception that proves the rule."
Instead of phrasing your comment as a gotcha that you think proves your point, you should learn to recognize that it is an exception that proves that the other person is correct in the normal case. Because once you realize that, you realize that your statement would have the same meaning if you merely said, "Yeah, that's true, it is pretty much only a few subways and toy trains that are automated."
Ethics have to be directed (Score:5, Insightful)
I think you can build in ethics to try and be kind to people
Ethics is not about being kind to people; it is about doing the right thing. For example, a system to spot cheating on an exam is not going to be particularly kind to the people it catches, and it would be highly unethical for it to be kind by ignoring the cheating. Since doing the "right thing" is subjective and extremely contextual, any ethics in automated decision making is going to have to be directed by a human and, since people may vary on what they believe is ethical, very hard to get right.
Even something very basic like not killing people is not going to be easy to implement, e.g. should an automated car prioritize the lives of the occupants over others, or vice versa? It's made even harder by the fact that computer algorithms do not comprehend the ethical consequences of their choices: all the programmer does is tweak the parameters to make the system behave in a way that they believe is ethical, which ultimately means the "ethics" will be determined by large corporations or, if regulated, governments, which is frankly rather a depressing thought given governments' and companies' past records on making ethical choices.
Re: Ethics have to be directed (Score:2)
Re: (Score:2)
The question is what the "right" thing is. In different situations, different people see the "thing" differently. Some may see it as the right thing, but some may see it as the opposite. Ethics is not as easy a topic as some people think. It involves a lot of reasoning, the time of the occurrence, the consequences of the decision, and the impact on others (and/or self). To some people, an event may be unethical when it occurs at a certain time and has no impact on those people. However, when the event has a direct impact
Re: (Score:2)
Re: (Score:2)
Right, but the important follow-up questions are:
Who is "we?"
Whoever that is, what do "they" actually want?
"Can" we build ethics into expert systems, yes. Can we actively try to make automatically derived systems have controlled ethics, sure. We can try. We can do better. And if currently we don't even worry about ethics in automated systems, then it is easy to meet the bar of merely having begun to consider it.
But the fact is, Scrooge McDuck has different ethics than many of us,
Re: (Score:2)
Amazon is a good example of where the system is working against the interests of the user. We need to educate people about that so that they understand that when Amazon suggests something it's not the same as Google trying to be helpful with search results, it's more like slimy sales staff trying to steer them towards the most profitable products.
It might be obvious to us but a lot of people don't seem to realize this.
Re: (Score:2)
Nope. Too much concentration of power. Won't work. Next plan.
Communism was a terrible idea, but it is only one terrible idea among many. All 'command economies' suck, as they require power to be concentrated, where it will corrupt.
Can't be fixed. Suggest something self organizing, like capitalism.
sigh... (Score:2)
I suppose it's likely on the order of Monday not being the best day of our next week... ethical decisions made by artificial intelligence will not be above reproach.
Though, perhaps, like the standard realists apply to automated vehicular piloting, all AI ethical decisions have to do to pass muster is exceed the effectiveness of the decisions that would've been made by their biological inventors.
Fortunately for the future of the robot overlords, we haven't set this bar that high.
Re: (Score:2)
You just slap on a post-processor. If the AI engine recommends that blacks should be denied bail at twice the rate of whites, the post-processor just makes a race-aware adjustment to the recommendation to give the result society is comfortable with. Problem solved.
The important thing is that you adjust the outputs, not the inputs.
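For what it's worth, the "adjust the outputs" idea the parent describes (sarcastically or not) corresponds to a real post-processing approach: leave the model's score alone and shift each group's decision threshold until the outcome rates match a target. A rough sketch, with made-up group labels, rates, and cutoffs:

def post_process(score: float, group: str, thresholds: dict) -> str:
    """Return "deny" or "grant" using a group-specific cutoff."""
    cutoff = thresholds.get(group, 0.5)
    return "deny" if score >= cutoff else "grant"

def calibrate(scores_by_group: dict, target_deny_rate: float) -> dict:
    """Pick each group's cutoff so its deny rate roughly matches the target."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores, reverse=True)
        k = int(round(target_deny_rate * len(ranked)))
        # The cutoff at the k-th highest score denies about k of the cases.
        thresholds[group] = ranked[k - 1] if k > 0 else float("inf")
    return thresholds

The adjustment lives entirely outside the model, which is what makes it easy to bolt on and just as easy to argue about.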
'Law' without 'mercy' is not 'justice' (Score:3)
Like 'mercy', 'ethics' requires understanding of human beings and human-related matters.
Since the poor, weak excuse for 'AI' they keep slinging around lately cannot 'think', and is therefore entirely incapable of understanding humans, it is also incapable of being 'ethical'.
Someone will now attempt to argue that 'ethics' is just a set of rules to follow -- or perhaps I should say 'laws' -- but there are always exceptions to rules and laws where there are humans and human lives to consider. Therefore: machines should not be involved in making decisions requiring 'ethics'; they are entirely unqualified to do so by their very nature.
Furthermore: all so-called 'AIs' should be supervised by humans at all times; no 'autonomy'. There always needs to be at least one human being there to allow or disallow what any of these machines does.
Re: 'Law' without 'mercy' is not 'justice' (Score:1)
Good job. Now explain to the rest of us how humans will "scale" to supervise all automated decision making.
Re: (Score:2)
These companies who have invested tens or even hundreds of millions developing these shitty 'AIs' are realizing that they're garbage, won't get over the finish line, and are now desperately moving their own goalposts trying to make their shitty AIs appear better than they really are. Meanwhile their legal counsel are telling them that the profits outweigh the potential liabilities so go ahead and just settle
Re: (Score:2)
That's what's got us into pretty much all the messes we're all currently in!
Re: (Score:2)
WTF world are you living in?
Everything has to come down to POWER. That's what got us into pretty much all the messes we're all currently in.
The most dangerous power is of course, owning money printing presses. But that doesn't change the basic fact. Money is only one example of a thing that brings power. Power is the problem, particularly concentration of power into few hands.
Re: (Score:2)
Only one way (Score:5, Interesting)
Oh hell no! Don't even try! (Score:1)
Unethical animals programming ethics into a dumb machine?? Dog help us all!
Confusing Maths with Ethics (Score:3)
Automated systems that apply 'known' and accepted rules equally are maths. The formulation of those rules is ethics, and that is fine as long as the majority are aware of the rules and approve of them. The more impactful the rules, and the more they could favour one party over another, the greater the majority required to apply them; the starting point should always be more than 50% of the eligible citizenry, and the upper limit is probably 2 out of 3 (in decimals, somewhere between 60% and 70%, which just doesn't read as well as 2 out of 3 or 2 to 1).
That is the difference between AI and straight maths: corrupted AI that makes unethical decisions to favour its programmers, versus a simple spreadsheet that applies rules everyone is aware of and the majority have agreed to.
As for the silly waffling about mob rule, what a crock: who complains about mob rule? The 1% who consider the entirety of the 99% the mob. The opposite of majority rule (which is just emptily slandered by calling it a mob) is corrupt elite rule, an elite who inevitably govern to suit themselves at the expense of the majority and who fear that the majority will hold them, the extreme minority, accountable for their corruption, avarice and very venal and abusive natures.
Yeah, I want automated decision making; fuck AI and fuck the arseholes who propose it, they're totally full of shit (AI as a layer of bullshit to hide decision making that favours a tiny minority at the expense of the majority, a new layer of bullshit added to the old elite lies). I want those maths rules to be open and clear and up for debate, with affirmation or rejection by the majority, maybe a supermajority in some cases, 2 out of 3 or 2 for and 1 against.
Can we build ethics into Human decision-making? (Score:5, Insightful)
Can we build ethics into Human decision-making? Only once we have done this do we have any hope of building it into AI.
So, define "ethics" for this case. (Score:3)
Seriously, how do you define "ethics" so that it would be an acceptable definition to, well, everyone?
Because it won't be accepted as "ethical" unless its decisions agree with you (for all values of "you", including "me").
Re: (Score:2)
You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.
It only works if all human lives are considered equal, making its implementation problematic for the most influential holders of those lives.
Re: (Score:2)
You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.
So your proposed solution is consequentialist. That won't please the virtue ethicists.
Re: (Score:2)
That which produces a "good outcome", or consequence, is entirely in the eyes of the beholder, good and bad being learned concepts.
Much less subjective is the measurable benefit/harm quotient a not so complex algorithm can administer when evaluating a single organism.
As for the virtue ethicists, encode their considerations, but rate their results. Trust but verify, like any sensible operations management system.
Re: (Score:2)
Personal (not social) utilitarian ethics.
The outcome with the most utility (for me) is the ethical one. Easiest way to get a rough approximation for my personal utility is the amount of money and pussy it brings to me.
Which happens to be the morals of the modern world. So double plus good.
Re: (Score:2)
Harm no human unless an equivalent or greater harm comes to 2+ humans.
But the entire devil is in the details of how "equivalent or greater harm" gets calculated across a tremendous range of diverse scenarios on the continuum between life and death. To even take a crack at it is to load the algorithms with the value judgments of the programmers.
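As a toy illustration of that point: any machine-checkable version of the rule has to start from an explicit harm scale, and that scale is nothing but the programmers' value judgments. The weights and function names below are invented for the example.

HARM_WEIGHTS = {          # arbitrary, programmer-chosen value judgments
    "bruise": 1,
    "broken_limb": 20,
    "permanent_injury": 200,
    "death": 1000,
}

def total_harm(outcome: list) -> int:
    """Sum the weighted harms in one predicted outcome (one entry per person)."""
    return sum(HARM_WEIGHTS[h] for h in outcome)

def permitted(harm_caused: list, harm_prevented: list) -> bool:
    """'Harm no human unless an equivalent or greater harm comes to 2+ humans.'"""
    return (len(harm_prevented) >= 2 and
            total_harm(harm_prevented) >= total_harm(harm_caused))

Change one number in HARM_WEIGHTS and the "ethics" change with it.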
Re: (Score:2)
You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.
This would allow for killing of innocent bystanders as long as their organs can be harvested to save 2+ others.
Re: (Score:2)
You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.
This would allow for killing of innocent bystanders as long as their organs can be harvested to save 2+ others.
Talk about a fatal exception.
Re: So, define "ethics" for this case. (Score:1)
Harm(a) == Harm(b).
Duh.
Re: (Score:2)
Harm(a) == Harm(b).
Duh.
Curses. I should've known I'd run across a crack coder, before long, on this site.
Re: (Score:2)
That is an absolutely moronic idea. Ethics is not something that can be mathematically defined. Friggin' define "equivalent harm" mathematically.
An eye for an eye, 2.25 fingers for a thumb, one leg > one arm... you can plug in values you believe are accurate and just, and then tweak the system after enough use cases occur that imply causation.
Some systemic value for ethical conduct towards humans must be ingrained in the artificial intelligence we cede decision-making to, or two clowns like me and you won't be around to have this discussion, before long.
Include "social good" when building targets/goals (Score:1)
Basically we need to include the goal in any automated decision making process, probably including result measurements after the fact (hopefully in a check/fix loop). And to do that we need to define the result we want.
The issue is when optimizing for one area makes another break in "interesting" ways. Like the old Chinese curse of "May you live in interesting times." Those only exist because of relationships we don't expect or measure through optimization.
I mean why would you measure the people you can'
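A rough sketch of the check/fix loop described above, under the assumption that the goal can be stated as a single measurable target; every name here is hypothetical.

from typing import Callable

def check_fix_loop(decide: Callable[[dict], dict],
                   measure: Callable[[dict], float],
                   adjust: Callable[[dict, float], None],
                   target: float,
                   cases: list,
                   params: dict) -> None:
    """Decide, measure the result after the fact, and feed the gap back in."""
    for case in cases:
        outcome = decide({**case, **params})
        observed = measure(outcome)      # result measurement after the fact
        gap = target - observed
        if abs(gap) > 0.05:              # tolerance is an arbitrary assumption
            adjust(params, gap)          # the "fix" half of the loop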
Re: (Score:1)
Re: (Score:2)
Seriously, how do you define "ethics" so that it would be an acceptable definition to, well, everyone?
Simple answer: just make them follow the law, and, if necessary, change the law when problems are found.
Re: (Score:1)
Ethics to me means not stealing
Except you have bragged before about not paying your employees. Is that not stealing from them when you take their time and do not give them what you promised them in return?
supporting private property rights
Unless of course you want their property. That is yours for the taking.
Define "ethics" first (Score:2)
and then you can start thinking about how to code it.
Re: (Score:2)
We can't define it, but we could start by crowdsourcing it [mit.edu].
Re:Define "ethics" first (Score:5, Insightful)
I paid for the car. I expect it to protect my life first.
Re: (Score:2)
I paid for the car. I expect it to protect my life first.
This sounds like the standard sort of entitlement of your average car driver:
I decided to take the risk of driving a high powered lump of metal around to save a bit of time and I expect it to have consequences for other people if something goes wrong.
Re: (Score:2)
I paid for the car. I expect it to protect my life first.
This sounds like the standard sort of entitlement of your average car driver:
I decided to take the risk of driving a high powered lump of metal around to save a bit of time and I expect it to have consequences for other people if something goes wrong.
Well, yes. The owner is the one who paid for the car. It is a reasonable expectation that a priority be given to ensuring the safety of the owner. The second car has airbags and crumple zones and other functions which help limit the damage, because that's what they paid for.
Obviously, the underlying assumption is that overall damage be minimized, but self-preservation is expected on both sides. Though it may be "most ethical" to follow Spock's "the needs of the many outweigh the needs of the few", there are
Re: (Score:2)
Well, yes. The owner is the one who paid for the car. It is a reasonable expectation that a priority be given to ensuring the safety of the owner.
Yes, they paid MONEY so their lives take priority. Because money.
Re: (Score:2)
Yes, they paid MONEY so their lives take priority. Because money.
Not simply "because money". It would be patently absurd to go to a restaurant and pay for a meal, only to find out that the restaurant gave the food to someone else because the restaurant felt the other person needed it more. It would be ridiculous to hire someone to clean my house, only for that person to go to someone else's house and clean it because they decided it was dirtier.
If I buy a car, I expect its safety features to keep me safe. Yes, ideally, it would absolutely keep both of us safe...but if the
Re: (Score:2)
I expect the car to follow the rules of the road, and within those rules try to protect its passengers as much as possible.
Re: (Score:2)
I paid for the car. I expect it to protect my life first.
So you would like the machines to be selfish, rather than ethical on your behalf?
As for autonomous vehicles, ethics does not come into it. A car won't know if the person next to it is a Nobel laureate or a meth dealer. It'll be programmed to minimise damage; we already have good rules for this, which most drivers ignore, are ignorant of, or are just too silly to use. One of the classic mistakes is swerving: if you're going to hit something head on, don't swerve; if you swerve you risk rolling the car or hitt
Re: (Score:2)
As for autonomous vehicles, ethics does not come into it. A car won't know if the person next to it is a Nobel laureate or a meth dealer.
That's not ethics. Human life is human life. The AI won't be checking your party affiliation, skin color or membership in a religious cult when deciding how to brake and steer. The only weighting factor is who owns the car. And that is as much in the self interest of the AI and its creators as the vehicle's occupants. Because if AI starts killing its occupants preferentially, nobody will buy it anymore. And those that have it will pull the AI fuse and steer themselves.
Re: (Score:2)
Ugh, a car "protecting you" and "doing what's best for everyone involved" is always going to be the same damn thing: in an oh-shit scenario, slow down and try to come to a stop. No swerving, no bridges full of nuns, no trolley problem. Engineers are making this thing, not philosophers. That whole debate is bullshit technophiles wanking themselves.
Re: (Score:2)
I followed the basic rules that I would expect a machine to follow.
a) Protect the passengers.
b) Try to avoid collisions if possible.
c) If not possible, then stay in lane.
Following those 3 rules, according to MIT, I hate grannies.
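Those three rules are concrete enough to write down. A hypothetical sketch (the scenario fields are made up; nothing here is from the MIT experiment):

def choose_maneuver(maneuvers: list) -> str:
    """Each maneuver is a dict: {"name", "harms_passengers", "collision"}."""
    # a) Protect the passengers: drop any maneuver predicted to harm them.
    options = [m for m in maneuvers if not m["harms_passengers"]]
    if not options:
        options = maneuvers              # no safe choice; fall through
    # b) Try to avoid collisions if possible.
    collision_free = [m for m in options if not m["collision"]]
    if collision_free:
        return collision_free[0]["name"]
    # c) If a collision is unavoidable, stay in lane (brake, don't swerve).
    in_lane = [m for m in options if m["name"] == "stay_in_lane"]
    return in_lane[0]["name"] if in_lane else options[0]["name"]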
What a joke (Score:5, Interesting)
Re: What a joke (Score:2)
Re: (Score:2)
Re: (Score:2)
Things that were ethical
Wow. No. You've very obviously confused "commonplace" for "ethical". The two are in no way related. Just because something is commonplace does not mean that it is ethical. Not at all.
For example, while gang violence was common in Prohibition-era Chicago, that doesn't mean it was ever ethical. Nor is it ethical to engage in graft, even if the practice is common, perhaps even accepted, in your culture. Likewise, to pull an item from your list, while bullying was tolerated to a greater degree in past decades,
Re: (Score:2)
Re: (Score:2)
If you were a member of a Chicago gang, violence was not only commonplace but ethical. It really doesn't matter how outsiders judge you; you live by your own group.
Not really. While groups may have ethics (e.g. I spent three semesters in grad school as a TA for my university's senior-level engineering ethics course, helping teach about 2000 engineering students during that time), those group ethics only work inasmuch as they hold their members to a higher standard without placing any burden on outsiders. Anything else falls apart the moment its group members interact with anything outside their group.
As such, while living by your group's code is a number of things, it
What ethics? (Score:4, Insightful)
Currently the various expert systems and automation have zero morals and ethics. Their only criteria are maximize profit, minimize risk. If someone ends up dying in screaming agony, meh.
This is just another extension of the principle that the only people you can get on the phone are people with no power to say yes. Their job isn't to make an ethical decision, their job is to make sure the people with authority to make a decision don't have to personally feel the consequences of tossing ethics out the window.
Same with the software. The programmers are just following orders; it's not like they're using the software on actual people. The people using it are just following orders; it's not like they wrote the software to make those decisions. The people at the top just specified the software and ordered its use. It's not actual people, just boring statistical data on a quarterly report.
Of course, in reality the software is an extension of those who give the orders. They just want people to blame "the computer" for as long as possible, just like in the '70s when, according to the CSR on the phone, the computer was infallible.
Which customer's good are we talking here? (Score:3)
"The customer's interest must always come before the company's."
Which customer? Is the company thriving, expanding, and being able to take advantage of economies of scale to provide better and cheaper product to future customers considered in customer's interest or not? Or are we talking about customers who can't afford the product, so we should just give them the product for free and bankrupt the company, since customer interest comes first? Or are the workers at the company also customers, or is it ethical to exploit them just to provide cheaper products?
The above are hyperboles, but herein lies the problem: if you want to put "customer" interest above the company's, you must specify which customers.
Re: (Score:2)
Re: (Score:2)
It is not in the interest of future customers for the company to give away product to today's customer and go under. Hence my original question "which customer are we talking about here?" - what's best for one customer may not be best for the other.
Re: (Score:2)
Thank you! This has always been my issue with companies that say, "Customers are our highest priority." I am pretty sure that staying in business is any company's highest priority.
Betteridge's law.. (Score:2)
No.
Because there are powerful vested interests with a desire to prevent such a thing.
profits come first - it's the law (Score:2)
"Ethical problems arise when a company's interest in profit comes before the interests of the users"
Unfortunately that's the law in the US. Any company that neglects opportunities for profit is subject to lawsuits from shareholders. A corporation's sole responsibility is to look out for the interests of shareholders.
Neglecting customers/users might ultimately reduce profits, thus must be considered. But ethics? Where in the hierarchy of concerns is ethics? For each management organization that will differ.
We can automate racism. Why not ethics? (Score:2)
Re: (Score:3)
How do you train them without using content created by humans?
It takes you all the way back to expert systems. You have to throw out modern AI techniques for any systems that make decisions that affect humans. What else do humans use automation for, but things that affect humans?
Re: (Score:2)
no we can't (Score:3)
A small part of ethics is in the form of rules that we can express and follow. Even ignoring that these rules constantly change and adapt, they are only a small part of the whole.
Most of ethics happens with at most very general, unspecific rules. Basically "don't be an asshole". Good luck expressing that in a programming language. Most of this requires you to be and feel like a human and to use empathy: by putting ourselves into another person's position in our imagination, we can deduce which behaviour we would find acceptable and which not if the roles were reversed. We are light years away from such a thing in AI.
Re: (Score:2)
Good luck expressing that in a programming language.
Machine learning is not done by expressing rules in a programming language. It is taught by example. In theory, all you need is to collect a bunch of examples, or have a way to correct the AI when it's making a mistake.
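A minimal sketch of "taught by example, corrected when wrong", assuming nothing beyond the standard library: decisions come from stored labelled examples, and a correction is just one more example.

class ExampleTaughtClassifier:
    def __init__(self):
        self.examples = []               # list of (features, label) pairs

    def teach(self, features: tuple, label: str) -> None:
        self.examples.append((features, label))

    def decide(self, features: tuple) -> str:
        if not self.examples:
            raise ValueError("no examples taught yet")
        # Nearest stored example wins (1-nearest-neighbour by squared distance).
        nearest = min(self.examples,
                      key=lambda ex: sum((a - b) ** 2
                                         for a, b in zip(ex[0], features)))
        return nearest[1]

    def correct(self, features: tuple, right_label: str) -> None:
        # "A way to correct the AI when it's making a mistake."
        self.teach(features, right_label)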
Re: (Score:2)
I know how AI works. But here's the problem: ethics isn't taught by example only. A large part of ethics is putting yourself in the other person's place and asking how you'd think about the situation if you were them.
At this time, an AI can do nothing of that, not the smallest part.
Re: (Score:2)
What happens when robots query our ethics? (Score:2)
Whose ethics? (Score:4, Insightful)
Re: (Score:2)
Jihad.
Can we? (Score:2)
The only way to build ethics into anything, including human decision making, is to start with facts. These days, stupid people have a loud voice, amplified by social media, and use it to claim "alternative facts" which aren't facts at all, but they are stupid and don't understand what a fact actually is.
Are you going to base ethical decisions on "alternative facts"? I don't want to be around for that, but here I am.
How are you going to eliminate politics from the programming process that will train these
If we can program in Ethics... (Score:2)
... we will also build in Prejudice, Racism, Sexism and any number of other "unwanted" elements.
The creation of an AI instruction set that manages to be without prejudice will be nearly impossible, given that whoever makes the list will be skewing the process with their own priorities:
Human Life vs Overall Health of the Planet vs Quality of Life vs Sustainability, etc
Governments will want their say, while special interest groups (like the Rich) will also want to build in their own influence to further their
No you can't.... (Score:2)
No (Score:1)
Yes, with Transparency (Score:2)
Can We Build Ethics Into Automated Decision-Making?
Yes, if the algorithm is transparent, publicly published, and more or less straightforward. Which is to say, if it's NOT ethical, people will bitch up a storm until it's fixed. Because democracy works. The exact same thing happens with, say, police departments' policies on arresting people. There's a host of honestly vague and confusing algorithms set in law about when it's legal to arrest someone. We as the public have an equally vague sense of what those laws/rules/algorithms are, and if a case comes
No way to leave ethics out (Score:2)
Every decision any program makes is based on an underlying ethical code. Retrieving data without corrupting it, for example. Faithfully reproducing and transmitting what you type. Retrieving the information that you requested. Making robocalls. These all have ethical underpinnings--either for good or bad.
The operation of software is an expression of the ethics of its programmer. You can't leave out ethics, good or bad, it's baked into the fabric of the code by the programmer.
Re: Asimov: you missed the point of his 3 laws (Score:5, Informative)
Asimov was NOT saying here are 3 simple laws you can program your robots with and everything will be hunky dory.
He was saying the Exact Opposite!
If you actually read and understood any of his robot stories, the theme was consistent: robots are no better than we program them to be and they can never be as good/smart/ethical as we are, period.
The 3 laws robot short stories are all well written (Asimov, after all) and tell compelling and educational stories about robotic potential when robots are given decision-making ability outside a well-defined set of prescribed rules (such as in manufacturing, where there are no real decisions to be made, just a script blindly followed).
I know none of you will actually read them and you will keep referring to them and getting it entirely backwards, but at least I tried.
Re: Asimov: you missed the point of his 3 laws (Score:2)
Re: (Score:3)
I'd read the Foundation Trilogy [wikipedia.org] first, or at least the first book, before reading the robot novels. The robot novels are mysteries, but the Foundation Series is a must-read about the cyclic rise and fall of civilization
Re: (Score:2)
True for the first part, but...
If you actually read and understood any of his robot stories, the theme was consistent: robots are no better than we program them to be and they can never be as good/smart/ethical as we are, period.
Read Robots and Empire.
Re: Asimov: you missed the point of his 3 laws (Score:1)
I did. In effect, he was saying that if your robot brain is vastly superior to a human's, then it will be able to interpret the laws in a human way. However, that robot was unique in all the universe and was essentially a new species. The core point stands: robots can never be better than us.
Re: (Score:2)
Re: (Score:2)
If you create a "perfect" creation, then it wouldn't have free will. The moment you insert free will, you create the thing with an imperfection: the ability to choose wrong (or right). Unless you value free will above being perfect, in which case choosing wrongly is something that can be included in a perfect design.
There are a number of various stories about how choosing perfection leads to less than optimal results.
Re: (Score:2)
Also the humans in the stories seemed to be naive, believing that it was impossible to have the rules broken.
Re: Asimov: you missed the point of his 3 laws (Score:2)
Liar! ;)
Re: (Score:1)
does not exonerate the President in any way.
This is the new NPC mantra. "I can't prove he's guilty but I can't say he's not guilty". In America we believe in innocent until proven guilty. He doesn't NEED to be exonerated in any way, faggot.
Re: (Score:1)
I would just like to know this:
When he isn't President any more, is he guilty in New York State? That's what I care about.
People blathering about Federal blah blah, they're simply missing the point.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
But that's no true scotsman report!
Re: (Score:2)
You'd need a consolidated moral center.
Allahu Akbar!
Re: (Score:2)
https://cdn.lolwot.com/wp-cont... [lolwot.com]