Can We Build Ethics Into Automated Decision-Making? (oreilly.com) 190

"Machines will need to make ethical decisions, and we will be responsible for those decisions," argues Mike Loukides, O'Reilly Media's vice president of content strategy: We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I've suggested the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated... The sheer number of decisions that need to be made means that we can't expect humans to make those decisions. Every time data moves from one site to another, from one context to another, from one intent to another, there is an action that requires some kind of ethical decision...

Ethical problems arise when a company's interest in profit comes before the interests of the users. We see this all the time: in recommendations designed to maximize ad revenue via "engagement"; in recommendations that steer customers to Amazon's own products, rather than other products on their platform. The customer's interest must always come before the company's. That applies to recommendations in a news feed or on a shopping site, but also how the customer's data is used and where it's shipped. Facebook believes deeply that "bringing the world closer together" is a social good but, as Mary Gray said on Twitter, when we say that something is a "social good," we need to ask: "good for whom?" Good for advertisers? Stockholders? Or for the people who are being brought together? The answers aren't all the same, and depend deeply on who's connected and how....

It's time to start building the systems that will truly assist us to manage our data.

The article argues that spam filters provide a surprisingly good set of first design principles. They work in the background without interfering with users, but always allow users to revoke their decisions, and proactively seek out user input in ambiguous or unclear situations.
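
A minimal sketch may help make those three principles concrete. Everything below is hypothetical (the class and function names are invented for illustration, not taken from the article): a confident classification is acted on quietly in the background, an ambiguous one is referred to the user, and every decision is logged and revocable.

```python
# Hypothetical sketch of the spam-filter design principles:
# act quietly when confident, ask the user when unsure, keep decisions revocable.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Decision:
    item: str
    action: str           # e.g. "allow" or "quarantine"
    confidence: float
    revoked: bool = False

@dataclass
class EthicalAssistant:
    classify: Callable[[str], float]   # probability that the item is unwanted
    ask_user: Callable[[str], str]     # invoked only in ambiguous cases
    low: float = 0.2
    high: float = 0.8
    log: List[Decision] = field(default_factory=list)

    def handle(self, item: str) -> Decision:
        p = self.classify(item)
        if p >= self.high:
            action = "quarantine"          # confident: act in the background
        elif p <= self.low:
            action = "allow"               # confident: act in the background
        else:
            action = self.ask_user(item)   # ambiguous: seek user input
        decision = Decision(item, action, p)
        self.log.append(decision)          # every decision stays visible...
        return decision

    def revoke(self, decision: Decision) -> None:
        decision.revoked = True            # ...and the user can reverse it
```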

But in the real world beyond our inboxes, "machines are already making ethical decisions, and often doing so badly. Spam detection is the exception, not the rule."
  • by SuperKendall ( 25149 ) on Sunday March 24, 2019 @08:43PM (#58328012)

    I'm not sure I consider Amazon directing you to Amazon products a very good example of "automation", since that has a giant bias plugged into the engine by Amazon. You are trying to ascribe ethics to a system where humans are obviously in firm and direct control of the results.

    To me, ethics in automation is more of a general concern where the automation is making derived choices that are pretty far removed from any human directive. I think you can build in ethics to try to be kind to people; it's not impossible. But even the choice to include some kind of ethical directive is still really at the mercy of humans and how much time and effort they are willing to put into such things...

    Perhaps the most effective solution is for some company to come up with a really kick-ass ethical-choice helper for automation that becomes so popular that companies are clamoring to include it. Otherwise it will get placed in the same leaky lifeboat that Accessibility is always placed in.

    • Trains are "automated" in the sense that you don't need people dragging carriages across the landscape. They are also "directed" in the sense that they only go where the tracks go.

      • I don't think it is even legal to do it without a driver.

        • In Europe we have plenty of trains without drivers in the subway systems ...

          • Look up "the exception that proves the rule."

            Instead of phrasing your comment as a gotcha that you think proves your point, you should learn to recognize that it is an exception that proves that the other person is correct in the normal case. Because once you realize that, you realize that your statement would have the same meaning if you merely said, "Yeah, that's true, it is pretty much only a few subways and toy trains that are automated."

    • by Roger W Moore ( 538166 ) on Sunday March 24, 2019 @10:34PM (#58328456) Journal

      I think you can build in ethics to try and be kind to people

      Ethics is not about being kind to people; it is about doing the right thing. For example, a system to spot cheating on an exam is not going to be particularly kind to the people it catches, and it would be highly unethical for it to be kind by ignoring the cheating. Since doing the "right thing" is subjective and extremely contextual, any ethics in automated decision making is going to have to be directed by a human and, since people vary on what they believe is ethical, very hard to get right.

      Even something very basic like not killing people is not going to be easy to implement, e.g. should an automated car prioritize the lives of the occupants over others, or vice versa? It's made even harder by the fact that computer algorithms do not comprehend the ethical consequences of their choices: all the programmer does is tweak the parameters to make the system behave in a way that they believe is ethical, which ultimately means the "ethics" will be determined by large corporations or, if regulated, governments. That is frankly rather a depressing thought given governments' and companies' past records on making ethical choices.

      • If you are concerned about governments prescribing ethics to people, that train left the station several thousand years ago, when the first code of laws was established.
      • The question is what is the "right" thing? In different situations, different people see the "thing" differently. Some may see it as the right thing, but some may see it as the opposite. Ethics is not as easy a topic as some people think. It involves a lot of reasoning, the timing of the occurrence, the consequences of the decision, and the impact on others (and/or self). To some people, an event may be unethical when it occurs at a certain time and has no impact on those people. However, when the event has a direct impact

      • Comment removed based on user account deletion
    • Right, but the important follow-up questions are:
      Who is "we?"
      Whoever that is, what do "they" actually want?

      "Can" we build ethics into expert systems, yes. Can we actively try to make automatically derived systems have controlled ethics, sure. We can try. We can do better. And if currently we don't even worry about ethics in automated systems, then it is easy to meet the bar of merely having begun to consider it.

      But the fact is, Scrooge McDuck has different ethics than many of us,

    • by AmiMoJo ( 196126 )

      Amazon is a good example of where the system is working against the interests of the user. We need to educate people about that so that they understand that when Amazon suggests something it's not the same as Google trying to be helpful with search results, it's more like slimy sales staff trying to steer them towards the most profitable products.

      It might be obvious to us but a lot of people don't seem to realize this.

  • I suppose it's about as likely as Monday not being the best day of our next week... ethical decisions made by artificial intelligence will not be above reproach.

    Though perhaps, like the standard realists apply to automated vehicular piloting, all AI ethical decisions have to do to pass muster is exceed the effectiveness of the decisions that would've been made by their biological inventors.

    Fortunately for the future of the robot overlords, we haven't set this bar that high.

    • You just slap on a post-processor. If the AI engine recommends that blacks should be denied bail at twice the rate of whites, the post-processor just makes a race-aware adjustment to the recommendation to give the result society is comfortable with. Problem solved.

      The important thing is that you adjust the outputs, not the inputs.
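
What the comment above describes sarcastically, adjusting a model's outputs after the fact, does correspond to a real family of techniques usually called post-processing in the algorithmic-fairness literature: for example, choosing a separate decision threshold per group so that approval rates match a target. Below is a minimal, hypothetical sketch with abstract groups and made-up scores; none of the names come from the thread.

```python
# Hypothetical sketch of output post-processing: pick a per-group score
# threshold so each group's approval rate roughly matches a target rate.
import numpy as np

def per_group_thresholds(scores, groups, target_rate):
    """Choose, for each group, the threshold that yields roughly target_rate approvals."""
    thresholds = {}
    for g in set(groups):
        g_scores = scores[groups == g]
        thresholds[g] = np.quantile(g_scores, 1 - target_rate)
    return thresholds

def adjusted_decisions(scores, groups, thresholds):
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# toy example with two abstract groups
rng = np.random.default_rng(0)
scores = rng.uniform(size=200)
groups = np.array(["a"] * 100 + ["b"] * 100)
decisions = adjusted_decisions(scores, groups,
                               per_group_thresholds(scores, groups, target_rate=0.3))
```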

  • I wrote that subject line as a lead-in:
    Like 'mercy', 'ethics' requires understanding of human beings and human-related matters.
    Since the poor, weak excuse for 'AI' they keep slinging around lately cannot 'think', and is therefore entirely incapable of understanding humans, it is also incapable of being 'ethical'.
    Someone will now attempt to argue that 'ethics' is just a set of rules to follow -- or perhaps I should say 'laws' -- and there are always exceptions to rules and laws where there are humans and human lives to consider. Therefore: machines should not be involved in making decisions requiring 'ethics', they are entirely unqualified to do so by their very nature.

    Furthermore: all so-called 'AIs' should be supervised by humans at all times; no 'autonomy'. There always needs to be at least one human being there to allow or disallow what any of these machines does.
    • by Anonymous Coward

      Good job. Now explain to the rest of us how humans will "scale" to supervise all automated decision making.

      • We don't. We don't need all this shitty 'AI' all over the place; we need educated, competent people.
        These companies, which have invested tens or even hundreds of millions developing these shitty 'AIs', are realizing that they're garbage and won't get over the finish line, and are now desperately moving their own goalposts trying to make their shitty AIs appear better than they really are. Meanwhile their legal counsel are telling them that the profits outweigh the potential liabilities so go ahead and just settle
  • Only one way (Score:5, Interesting)

    by LynnwoodRooster ( 966895 ) on Sunday March 24, 2019 @08:57PM (#58328098) Journal
    The programmer is ethical.
  • by Anonymous Coward

    Unethical animals programming ethics into a dumb machine?? Dog help us all!

  • by rtb61 ( 674572 ) on Sunday March 24, 2019 @09:00PM (#58328108) Homepage

    Automated systems that apply 'known' and accepted rules equally are just maths. The formulation of those rules is ethics, and that is fine as long as the majority are aware of the rules and approve of them. The more impactful a rule, and the more it could favour one party against another, the greater the majority required to apply it. The starting point should always be more than 50% of the eligible citizenry, and the upper limit, depending on whether you favour fractions or decimals, is probably about 2 out of 3, roughly the most restraint you can expect on a foolish majority (in decimals that's a choice between 60% and 70% for whole numbers' sake, but it doesn't read as well as 2 out of 3 or 2 to 1).

    That is the difference between AI and straight maths: corrupted AI, which makes unethical decisions to favour its programmers, versus a simple spreadsheet, which applies rules that everyone is aware of and the majority have agreed to.

    The silly bullshit waffling about mob rule is a crock: who complains about mob rule? The 1%, who consider the entirety of the 99% to be the mob. The opposite of mob rule (majority rule, which is just emptily slandered by calling it a mob) is entirely corrupt elite rule, in which the elite inevitably govern to suit themselves at the expense of the mob and fear that the mob, the majority, will hold the elite, that extreme minority, accountable for their corruption, avarice and very venal and abusive natures.

    Yeah, I want automated decision making, but fuck AI and fuck the people who propose it; you arseholes are just totally full of shit (AI is a layer of bullshit to hide decision making that favours a tiny minority at the expense of the majority, a new layer of bullshit added to the old elite lies). I want those maths rules to be open and clear and up for debate, with affirmation or rejection by the majority, maybe a supermajority in some cases: 2 out of 3, or 2 for and 1 against.

  • Can we build ethics into Human decision-making? Only once we have done this do we have any hope of building it into AI.

  • by CrimsonAvenger ( 580665 ) on Sunday March 24, 2019 @09:09PM (#58328146)

    Seriously, how do you define "ethics" so that it would be an acceptable definition to, well, everyone?

    Because it won't be accepted as "ethical" unless its decisions agree with you (for all values of "you", including "me").

    • You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.

      It only works if all human lives are considered equal, making its implementation problematic for the most influential holders of those lives.

      • You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.

        So your proposed solution is consequentialist. That won't please the virtue ethicists.

        • That which produces a "good outcome", or consequence, is entirely in the eyes of the beholder, good and bad being learned concepts.

          Much less subjective is the measurable benefit/harm quotient a not so complex algorithm can administer when evaluating a single organism.

          As for the virtue ethicists, encode their considerations, but rate their results. Trust but verify, like any sensible operations management system.

      • Harm no human unless an equivalent or greater harm comes to 2+ humans.

        But the entire devil is in the details of how "equivalent or greater harm" gets calculated across a tremendous range of diverse scenarios on the continuum between life and death. To even take a crack at it is to load the algorithms with the value judgments of the programmers.

      • You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.

        This would allow for killing of innocent bystanders as long as their organs can be harvested to save 2+ others.

        • You already know you cannot please everyone, so you leave it to the maths. Harm no human unless an equivalent or greater harm comes to 2+ humans.

          This would allow for killing of innocent bystanders as long as their organs can be harvested to save 2+ others.

          Talk about a fatal exception.

    • Basically we need to include the goal in any automated decision making process, probably including result measurements after the fact (hopefully in a check/fix loop). And to do that we need to define the result we want.

      The issue is when optimizing for one area makes another break in "interesting" ways. Like the old Chinese curse of "May you live in interesting times." Those only exist because of relationships we don't expect or measure through optimization.

      I mean why would you measure the people you can'

    • by AHuxley ( 892839 )
      Think SJW adding extra virtue signalling to a complex computer project.
    • Seriously, how do you define "ethics" so that it would be an acceptable definition to, well, everyone?

      Simple answer: just make them follow the law, and, if necessary, change the law when problems are found.

  • and then you can start thinking about how to code it.

    • We can't define it, but we could start by crowdsourcing it [mit.edu].

      • by PPH ( 736903 ) on Sunday March 24, 2019 @10:29PM (#58328426)

        I paid for the car. I expect it to protect my life first.

        • I paid for the car. I expect it to protect my life first.

          This sounds like the standard sort of entitlement of your average car driver:

          I decided to take the risk of driving a high-powered lump of metal around to save a bit of time and I expect it to have consequences for other people if something goes wrong.

          • I paid for the car. I expect it to protect my life first.

            This sounds like the standard sort of entitlement of your average car driver:

            I decided to take the risk of driving a high-powered lump of metal around to save a bit of time and I expect it to have consequences for other people if something goes wrong.

            Well, yes. The owner is the one who paid for the car. It is a reasonable expectation that a priority be given to ensuring the safety of the owner. The second car has airbags and crumple zones and other functions which help limit the damage, because that's what they paid for.

            Obviously, the underlying assumption is that overall damage be minimized, but self-preservation is expected on both sides. Though it may be "most ethical" to follow Spock's "the needs of the many outweigh the needs of the few", there are

            • Well, yes. The owner is the one who paid for the car. It is a reasonable expectation that a priority be given to ensuring the safety of the owner.

              Yes, they paid MONEY so their lives take priority. Because money.

              • Yes, they paid MONEY so their lives take priority. Because money.

                Not simply "because money". It would be patently absurd to go to a restaurant and pay for a meal, only to find out that the restaurant gave the food to someone else because the restaurant felt the other person needed it more. It would be ridiculous to hire someone to clean my house, only for that person to go to someone else's house and clean it because they decided it was dirtier.

                If I buy a car, I expect its safety features to keep me safe. Yes, ideally, it would absolutely keep both of us safe... but if the

        • I expect the car to follow the rules of the road, and within those rules try to protect its passengers as much as possible.

        • by mjwx ( 966435 )

          I paid for the car. I expect it to protect my life first.

          So you would like the machines to be selfish, rather than ethical on your behalf?

          As for autonomous vehicles, ethics does not come into it. A car won't know if the person next to it is a Nobel laureate or a meth dealer. It'll be programmed to minimise damage; we already have good rules for this, which most drivers ignore, are ignorant of, or are just too silly to use. One of the classic mistakes is swerving: if you're going to hit something head-on, don't swerve; if you swerve you risk rolling the car or hitt

          • by PPH ( 736903 )

            As for autonomous vehicles, ethics does not come into it. A car won't know if the person next to it is a Nobel laureate or a meth dealer.

            That's not ethics. Human life is human life. The AI won't be checking your party affiliation, skin color or membership in a religious cult when deciding how to brake and steer. The only weighting factor is who owns the car. And that is as much in the self interest of the AI and its creators as the vehicle's occupants. Because if AI starts killing its occupants preferentially, nobody will buy it anymore. And those that have it will pull the AI fuse and steer themselves.

        • Ugh, a car "protecting you" and "doing what's best for everyone involved" is always going to be the same damn thing: in an oh-shit scenario, slow down and try to come to a stop. No swerving, no bridges full of nuns, no trolley problem. Engineers are making this thing, not philosophers. That whole debate is bullshit technophiles wanking themselves.

      • by Rande ( 255599 )

        I followed the basic rules that I would expect a machine to follow.
        a) Protect the passengers.
        b) Try to avoid collisions if possible.
        c) If not possible, then stay in lane.

        Following those 3 rules, according to MIT, I hate grannies.
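
The three rules above amount to a strict priority ordering: whenever they conflict, the earlier rule wins outright. A tiny, entirely hypothetical sketch (all names invented) shows how rigid that ordering is, and why, on the MIT quiz, it ends up sacrificing whoever happens to be in front of the car.

```python
# Hypothetical sketch of a fixed rule ordering: earlier rules always override later ones.
def choose_action(options):
    """options: dicts like {"action": ..., "passengers_at_risk": bool,
       "collision": bool, "stays_in_lane": bool}"""
    # a) protect the passengers
    safe = [o for o in options if not o["passengers_at_risk"]] or options
    # b) avoid collisions if possible
    no_crash = [o for o in safe if not o["collision"]] or safe
    # c) otherwise, stay in lane
    in_lane = [o for o in no_crash if o["stays_in_lane"]] or no_crash
    return in_lane[0]["action"]

options = [
    {"action": "brake hard",  "passengers_at_risk": False, "collision": True,  "stays_in_lane": True},
    {"action": "swerve away", "passengers_at_risk": True,  "collision": False, "stays_in_lane": False},
]
print(choose_action(options))  # "brake hard": rule a) wins, whoever is ahead loses
```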

  • What a joke (Score:5, Interesting)

    by sdinfoserv ( 1793266 ) on Sunday March 24, 2019 @09:23PM (#58328202)
    Things that were ethical when I was a kid would mortify today's people - bringing guns to school, designated smoking areas for students, nude swim class, bullying as a way of life... no, ethics is a variable based on time, race, age, and income. Build "ethics" into any algorithm and the masses will find it unethical.
    • I guess you are based in the US, since those things would have raised eyebrows in the UK, even in the '70s!
    • Things that were ethical

      Wow. No. You've very obviously confused "commonplace" for "ethical". The two are in no way related. Just because something is commonplace does not mean that it is ethical. Not at all.

      For example, while gang violence was common in Prohibition-era Chicago, that doesn't mean it was ever ethical. Nor is it ethical to engage in graft, even if the practice is common, perhaps even accepted, in your culture. Likewise, to pull an item from your list, while bullying was tolerated to a greater degree in past decades,

      • Quite the contrary - you seem not to realize that ethics is determined by your group affiliation. For example, cannibalism is completely ethical (and expected) if you're an Aghori monk in Varanasi, India. Or, using your own example (BTW, I was born in Chicago), if you were a member of a Chicago gang, violence is not only commonplace but ethical. It really doesn't matter how outsiders judge you; you live by your own group. I'm willing to bet many things you do today will be judged unethical in 100 or 2
        • if you were a member of a Chicago gang, violence is not only common place but ethical. It really doesn't matter how outsiders judge you, you live by your own group.

          Not really. While groups may have ethics (e.g. I spent three semesters in grad school as a TA for my university's senior-level engineering ethics course, helping teach about 2000 engineering students during that time), those group ethics only work inasmuch as they hold their members to a higher standard without placing any burden on outsiders. Anything else falls apart the moment its group members interact with anything outside their group.

          As such, while living by your group's code is a number of things, it

  • What ethics? (Score:4, Insightful)

    by sjames ( 1099 ) on Sunday March 24, 2019 @10:12PM (#58328360) Homepage Journal

    Currently the various expert systems and automation have zero morals and ethics. Their only criteria are maximize profit, minimize risk. If someone ends up dying in screaming agony, meh.

    This is just another extension of the principle that the only people you can get on the phone are people with no power to say yes. Their job isn't to make an ethical decision, their job is to make sure the people with authority to make a decision don't have to personally feel the consequences of tossing ethics out the window.

    Same with the software. The programmers are just following orders; it's not like they're using the software on actual people. The people using it are just following orders; it's not like they wrote the software to make those decisions. The people at the top just specified the software and ordered its use. It's not actual people, just boring statistical data on a quarterly report.

    Of course, in reality the software is an extension of those who give the orders. They just want people to blame "the computer" for as long as possible, just like in the '70s when, according to the CSR on the phone, the computer was infallible.

  • by misnohmer ( 1636461 ) on Sunday March 24, 2019 @10:22PM (#58328394)

    "The customer's interest must always come before the company's."
    Which customer? Is the company thriving, expanding, and taking advantage of economies of scale to provide better and cheaper products to future customers considered in the customer's interest or not? Or are we talking about customers who can't afford the product, so we should just give them the product for free and bankrupt the company, since the customer's interest comes first? Or are the workers at the company also customers, or is it ethical to exploit them just to provide cheaper products?

    The above are hyperboles, but herein lies the problem: if you want to put the customer's interest above the company's, you must specify which customers.

    • If the customer's interest ALWAYS comes first, the company will soon fail and we will all go back to subsistence gathering. It is very much in the customer's interest to pay nothing and receive everything.
      • It is not in the interest of future customers for the company to give away product to today's customer and go under. Hence my original question "which customer are we talking about here?" - what's best for one customer may not be best for the other.

      • Thank you! This has always been my issue with companies that say, "Customers are our highest priority." I am pretty sure that staying in business is any company's highest priority.

  • No.

    Because there are powerful vested interests with a desire to prevent such a thing.

  • "Ethical problems arise when a company's interest in profit comes before the interests of the users"

    Unfortunately that's the law in the US. Any company that neglects opportunities for profit is subject to lawsuits from shareholders. A corporation's sole responsibility is to look out for the interests of shareholders.

    Neglecting customers/users might ultimately reduce profits, thus must be considered. But ethics? Where in the hierarchy of concerns is ethics? For each management organization that will differ.

    T

  • See Turns Out Algorithms Are Racist [newrepublic.com]. And don't forget that time Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day [theverge.com]. If we can do the opposite and make sure our algorithms aren't unintentionally racist and that our machine learning assistants don't become racist, you'll have won half the battle of making them ethical. This is going to be a hard problem to solve. It's not enough for programmers to be ethical; they need to have outside observers helping them, especially peopl
    • How do you train them without using content created by humans?

      It takes you all the way back to expert systems. You have to throw out modern AI techniques for any systems that make decisions that affect humans. What else do humans use automation for, but things that affect humans?

      • by Ranger ( 1783 )
        You wouldn't train them without content from humans. I said it was a hard problem.
  • by Tom ( 822 ) on Monday March 25, 2019 @01:54AM (#58329110) Homepage Journal

    A small part of ethics is in the form of rules that we can express and follow. Even ignoring that these rules constantly change and adapt, they are only a small part of the whole.

    Most of ethics happens with at most very general, unspecific rules. Basically, "don't be an asshole". Good luck expressing that in a programming language. Most of this requires you to be and feel like a human and to use empathy - by putting ourselves into another person's position in our imagination, we can deduce which behaviour we would find acceptable and which not if the roles were reversed. We are light years away from such a thing in AI.

    • Good luck expressing that in a programming language.

      Machine learning is not done by expressing rules in a programming language. It is taught by example. In theory, all you need is to collect a bunch of examples, or to have a way to correct the AI when it's making a mistake.
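
A minimal sketch of that second idea, correcting the model only when it errs, is an online perceptron-style update loop. The toy task and all names below are made up for illustration.

```python
# Hypothetical sketch of "correct the AI when it makes a mistake":
# an online learner that updates its weights only on errors.
import random

def train_online(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for (x1, x2), label in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = label - pred          # nonzero only when the prediction is wrong
            if error:
                w[0] += lr * error * x1   # nudge the weights toward the correct answer
                w[1] += lr * error * x2
                b += lr * error
    return w, b

# toy labelled examples: label is 1 when the two features sum to more than 1
examples = [((a / 10, c / 10), int(a + c > 10)) for a in range(11) for c in range(11)]
weights, bias = train_online(examples)
```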

      • by Tom ( 822 )

        I know how AI works. But here's the problem: ethics isn't taught by example only. A large part of ethics is putting yourself in the other's place and asking how you'd think about the situation if you were them.

        At this time, an AI can do nothing of that, not the smallest part.

    • Comment removed based on user account deletion
  • The article makes it sound like humans have ethics all worked out... hilarious! How about when robots start asking about CEO pay, outsourcing, leveraged buyouts, wiretapping, alcohol legality, etc., etc., etc.!
  • Whose ethics? (Score:4, Insightful)

    by TheRealQuestor ( 1750940 ) on Monday March 25, 2019 @04:43AM (#58329498)
    Who gets to decide whose ethics to follow?
  • The only way to build ethics into anything, including human decision making, is to start with facts. These days, stupid people have a loud voice, amplified by social media, and use it to claim "alternative facts" which aren't facts at all, but they are stupid and don't understand what a fact actually is.

    Are you going to base ethical decisions on "alternative facts"? I don't want to be around for that, but here I am.

    How are you going to eliminate politics from the programming process that will train these

  • ... we will also build in Prejudice, Racism, Sexism and any number of other "unwanted" elements.

    The creation of an AI instruction set that manages to be without prejudice will be nearly impossible, given that whoever makes the list will be skewing the process with their own priorities:

    Human Life vs Overall Health of the Planet vs Quality of Life vs Sustainability, etc

    Governments will want their say, while special interest groups (like the Rich) will also want to build in their own influence to further their

  • It's simple: as ethics is always in the eye of the beholder, you cannot build in real ethics, as it's always the view of one person or group.
  • stop asking.
  • Can We Build Ethics Into Automated Decision-Making?

    Yes, if the algorithm is transparent, publicly published, and more or less straightforward. Which is to say, if it's NOT ethical, people will bitch up a storm until it's fixed. Because democracy works. The exact same thing happens with, say, police departments' policies on arresting people. There's a host of honestly vague and confusing algorithms set in law about when it's legal to arrest someone. We as the public have an equally vague sense of what those laws/rules/algorithms are, and if a case comes

  • Every decision any program makes is based on an underlying ethical code. Retrieving data without corrupting it, for example. Faithfully reproducing and transmitting what you type. Retrieving the information that you requested. Making robocalls. These all have ethical underpinnings--either for good or bad.

    The operation of software is an expression of the ethics of its programmer. You can't leave out ethics, good or bad, it's baked into the fabric of the code by the programmer.
