
Will Machine Learning Build Up Dangerous 'Intellectual Debt'? (newyorker.com) 206

Long-time Slashdot reader JonZittrain is an international law professor at Harvard Law School, and an EFF board member. Wednesday he contacted us to share his new article in the New Yorker: I've been thinking about what happens when AI gives us seemingly correct answers that we wouldn't have thought of ourselves, without any theory to explain them. These answers are a form of "intellectual debt" that we figure we'll repay -- but too often we never get around to it, or even know where it's accruing.

A more detailed (and unpaywalled) version of the essay draws a little on how and when it makes sense to pile up technical debt, and asks the same questions about intellectual debt.

The first article argues that new AI techniques "increase our collective intellectual credit line," adding that "A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when."

And the second article has a great title. "Intellectual Debt: With Great Power Comes Great Ignorance." It argues that machine learning "at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball -- except they appear to be consistently right." And it ultimately raises the prospect that humanity "will build models dependent on, and in turn creating, underlying logic so far beyond our grasp that they defy meaningful discussion and intervention..."
This discussion has been archived. No new comments can be posted.

  • Silly. (Score:3, Insightful)

    by gurps_npc ( 621217 ) on Saturday July 27, 2019 @10:43AM (#58997102) Homepage

    I can use a car without understanding how an Internal combustion engine works.

    The majority of people already get answers from computers without understanding the answer. Many people don't even understand how to do simple math, so a calculator does it for them.

    We do not suffer from this; there is no such thing as "intellectual debt".

    • Re:Silly. (Score:5, Insightful)

      by xonen ( 774419 ) on Saturday July 27, 2019 @10:54AM (#58997140) Journal

      A great example of such ignorance is paper-trail-less voting computers. A lot of people are in favor of them because they see them as something that makes life easier by shortening voting-booth queues and delivering faster results. Only computer experts say it's better not to use them. The average layman has no clue why, and argues that since a computer is good at counting it must be a good device to aid elections.

      Moral of the story: don't believe anyone or anything, especially not when they say it's good for you or in your own interest, AI or other automated systems included, and always (at least try to) think for yourself and stay well-informed. By now we have learned not to trust politicians. With computers we will learn the same, sooner or later, the easy way or the hard way.

      • Re: Silly. (Score:3, Informative)

        by Anonymous Coward

        We had a whistleblower come out and say he was asked to write code that could surreptitiously change votes. This isn't a matter of subject-matter expertise. This is a matter of widespread, pervasive gullibility, stupidity and an authoritarian mindset. And people wonder why I'm so nihilistic.

        • by shanen ( 462549 )

          Having accidentally seen this AC comment, I have to respond "No, we do NOT wonder anything about a nameless cretin." I rather think it's being generous or even polite to assume the use of AC to conceal the cretinous status.

          And I'm sorry about the accident.

      • Re: (Score:3, Insightful)

        by thegarbz ( 1787294 )

        Only computer experts say it's better not to use them. The average layman has no clue why, and argues that since a computer is good at counting it must be a good device to aid elections.

        Your post is a good example of the complexities of the discussion. You have given us an answer, generalised on a topic, without any specifics through which people may understand and better themselves. You jump to "paperless voting" as "shortening queues", but that isn't true at all: "electronic voting" shortens queues and can come with a paper trail.

        By generalising and not calling out specifics everyone who has read your post and hasn't gone to research the topic has just built up intellectual debt.

        • "electronic voting" shortens queues

          You mean: we require fewer polling stations to keep the queues at the same acceptable length.

        • by Shaitan ( 22585 )

          ""electronic voting" shortens queues and can come with a paper trail"

          True, it simply lacks the transparency required to actually audit the validity of that paper trail. And no, I don't mean for "trusted" third party voting officials to audit but the actual voters. There is absolutely no reason you can't have private voting machines, a real time displayed tally which shows updates to that tally with anonymized numbers associated with updates you can validate against your paper trail and then a randomized upd

      • by Shaitan ( 22585 )

        "With politicians by now we learned to not to trust them.With politicians by now we learned to not to trust them."

        If only. People pay lip service to the concept and then go right on trusting them.

        It is quite simple: your energy on politics should be spent realizing that the politicians YOU SUPPORT are lying, don't have whatever goals they've stated, and can't be trusted. Stop wasting energy digging into juicy reasons the other side is evil. That is exactly what the politicians duping you want you to do.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Many people don't even understand how to do simple math...

      Right, Floridians even have trouble counting.

    • by Anonymous Coward on Saturday July 27, 2019 @11:01AM (#58997156)

      His concern is not that you don't know why the answer is right. What happens when no one knows? What happens when AI/ML can spot patterns and trends that no human being can understand? Sure, you don't need to know how to design an internal combustion engine to drive a Toyota, but someone at Toyota does. What happens when machines derive answers and no one knows how they got them, only that they appear right? What happens when we get more answers than we have research bandwidth to explore?

          Is it an issue? I honestly don't know, but it is an interesting question.

      • How difficult can it be to have an AI "show its work" in the form of a log?

        • It's very easy. Just time-jump to the 1980s and ditch neural networks for good old rule-based expert systems. ;)
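          Tongue-in-cheek or not, a rule-based system really can show its work. A toy forward-chaining sketch in Python, with purely illustrative facts and rules (no real expert system looks like this), that logs every rule it fires:

            # Working memory and rules: (conditions, conclusion)
            facts = {"fever", "cough"}
            rules = [
                ({"fever", "cough"}, "possible_flu"),
                ({"possible_flu", "fatigue"}, "see_doctor"),
            ]

            log = []
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        log.append(f"{sorted(conditions)} => {conclusion}")
                        changed = True

            print("\n".join(log))  # the "work shown": which rules fired, and in what order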
        • by religionofpeas ( 4511805 ) on Saturday July 27, 2019 @12:14PM (#58997382)

          With a neural net, there's no "log". Data goes in, insane amount of computation is done, and data comes out.

          • Re: (Score:3, Insightful)

            With a neural net, there's no "log". Data goes in, insane amount of computation is done, and data comes out.

            The same is true with a human brain.

            If I hand you 10 photographs, and ask you to pick out the one that is a photo of your mom, it is very likely you can pick it out.

            Can you explain how you did it?

            • ask you to pick out the one that is a photo of your mom, it is very likely you can pick it out.

              Yeah, the biggest one.

      • by jhecht ( 143058 )
        The problem is you don't know IF the answer is right for all cases, because the AI/ML cannot explain why. All you know is that nobody with the power to change it has discovered a mistake yet. If you train the system with input data that identifies all people with mustaches and black hats as villains, it will identify them as villains. Usually the biases are more subtle than that, but that way they can go unrecognized longer.
        • The problem is you don't know IF the answer is right for all cases because the AI/ML cannot explain why.

          Does it really matter?

          If a medical AI reaches a correct diagnosis 90% of the time, while human doctors are correct 80% of the time, but the humans can explain their reasoning while the AI cannot, should we just accept the additional errors?

          This is not a rhetorical question, because computer based radiology really is better than humans, and so far the answer is Yes, we accept the economic and human cost of the additional misdiagnoses, not because it leads to better care (the care is worse) but because it is

          • by sjames ( 1099 )

            Ideally, when the human and AI disagree, another human should double check.

            • Why not another AI?

              • Why not another AI?

                This is called Boosting [wikipedia.org] when it is done on the cases where two AIs disagree.

                You train two AIs to solve the problem. Then you train a third on only the cases where the first two disagree. So the "tiebreaker" network learns to specialize on the ambiguous cases.

                There is no reason that this couldn't work when a human and AI disagree (other than volume of training data).
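                A minimal sketch of that tiebreaker scheme in Python with scikit-learn; the models and synthetic data are arbitrary stand-ins, and this disagreement cascade is looser than textbook boosting algorithms such as AdaBoost:

                  # Train two models, then train a third only on the cases where they disagree.
                  from sklearn.datasets import make_classification
                  from sklearn.linear_model import LogisticRegression
                  from sklearn.tree import DecisionTreeClassifier
                  from sklearn.ensemble import RandomForestClassifier

                  X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

                  model_a = LogisticRegression(max_iter=1000).fit(X, y)
                  model_b = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

                  disagree = model_a.predict(X) != model_b.predict(X)   # the ambiguous cases
                  tiebreaker = RandomForestClassifier(random_state=0).fit(X[disagree], y[disagree])

                  def predict(samples):
                      a, b = model_a.predict(samples), model_b.predict(samples)
                      out = a.copy()
                      mask = a != b
                      if mask.any():                   # defer to the specialist only on disagreement
                          out[mask] = tiebreaker.predict(samples[mask])
                      return out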

                • I'm just sitting here waiting for someone to train a neural network to train and choose neural networks. Got a problem you want solved? Give it to the meta-AI and let it figure out which neural network schema works best and then train it for you.

                  • I'm just sitting here waiting for someone to train a neural network to train and choose neural networks.

                    This is called Hyperparameter Optimization [wikipedia.org], and automating the process is a hot area for research.

                    It is far from a solved problem.
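                    One simple, automatable form of it is random search over a parameter space with cross-validation; a minimal sketch using scikit-learn's RandomizedSearchCV (the model and parameter ranges are arbitrary examples):

                      from sklearn.datasets import make_classification
                      from sklearn.ensemble import RandomForestClassifier
                      from sklearn.model_selection import RandomizedSearchCV

                      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

                      param_space = {
                          "n_estimators": [50, 100, 200],
                          "max_depth": [3, 5, 10, None],
                          "min_samples_leaf": [1, 5, 10],
                      }

                      search = RandomizedSearchCV(
                          RandomForestClassifier(random_state=0),
                          param_distributions=param_space,
                          n_iter=10,    # try 10 random configurations
                          cv=3,         # score each with 3-fold cross-validation
                          random_state=0,
                      )
                      search.fit(X, y)
                      print(search.best_params_, search.best_score_)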

                • by jythie ( 914043 )
                  A better solution is a multi-model approach. The problem with boosting is you are just using more copies of the same underlying model, so systemic problems could be duplicated too. Using multiple model types, ideally ones with strengths and weaknesses that complement each other, is generally preferred... but also more expensive. Boosting on the other hand is a dirt cheap solution.
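                  A sketch of that multi-model idea in Python, combining three different model families with scikit-learn's VotingClassifier (the particular models and synthetic data are arbitrary stand-ins, and voting is only one of several ways to combine them):

                    from sklearn.datasets import make_classification
                    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
                    from sklearn.linear_model import LogisticRegression
                    from sklearn.naive_bayes import GaussianNB

                    X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

                    ensemble = VotingClassifier(
                        estimators=[
                            ("linear", LogisticRegression(max_iter=1000)),
                            ("forest", RandomForestClassifier(random_state=0)),
                            ("bayes", GaussianNB()),
                        ],
                        voting="soft",   # average predicted probabilities rather than hard votes
                    )
                    ensemble.fit(X, y)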
                  • A better solution is a multi-model approach.

                    You still need some way of resolving disagreements. Having 3+ models, and resolving disagreements with majority voting often is not a good solution.

                  • " The problem with boosting is you are just using more copies of the same underlying model": As I understand it the multiple systems in boosting *are* using different features, and in some cases different learning methods.

              • by sjames ( 1099 )

                Because then we get no opportunity to improve our articulable knowledge or train the human to do it better.

                It would also be interesting to see how much of human error is simple mistakes that can be corrected.

                • Because then we get no opportunity to improve our articulable knowledge or train the human to do it better.

                  So we should accept answers that are likely wrong over answers that are likely correct, because we don't understand why the correct ones are correct? Maybe we should just accept the correct answers, and work on understanding why they are correct later.

                  If you were a brain surgery patient, would you rather have a human surgeon with a 50% success rate, or a robot with a 99% success rate, but nobody is quite sure how the robot works?

                  I'll take the robot, especially since nobody is quite sure how the human works either.

                  • by sjames ( 1099 )

                    Where did I say that?

            • Ideally, when the human and AI disagree, another human should double check.

              What if, when they disagree, the AI is always correct? Should we accept the wrong answer from the human because the human can explain the thought process that led to the incorrect answer, while there is no explanation for the correct answer?

              At least in radiology, this is what we currently do.

          • by jythie ( 914043 )
            My general take is that ML is great for things where the answer doesn't matter. Search, shopping, stock picking, things where no lives are on the line. But as soon as you start getting into medical, legal, or military decisions, ML has a place in metamodeling but not a great one. When you know how something works and why it makes certain mistakes you can compensate for it. Take that 90%/80% example. With the humans, you can take that 20% failure rate, pick it apart, and spend extra time on those case
      • by jythie ( 914043 )
        As someone who works in non-ML AI, yes, it is an issue. We see a cycle of people loving ML because it produces a very low-effort answer; then, as they try to USE the output, they slam into the problem that no one understands it or how changing things might impact it; then they come to us because our models are ones we can actually explain, BUT they have become so used to the fast turnaround for ML that they think our stuff is too slow and expensive.

        ML really runs into problems when its output becomes part o
      • by Shaitan ( 22585 )

        "What happens when machines derive answers that no one know how they got them, only that the appear right."... and then subsequently start failing or even exploding for reasons we don't understand.

    • Re:Silly. (Score:5, Interesting)

      by Anonymous Coward on Saturday July 27, 2019 @11:04AM (#58997164)

      You may not understand how an engine works, but the engineers who created it and the mechanics who work on it deeply understand it. For example, when the fuel type changes (say from gas to ethanol), people who understand it can modify different parts to keep the combustion working. With deep learning, the algorithm only finds a complex association between the inputs and the outputs; what happens if we depend on hundreds of these associations and the algorithms give the wrong result? Since we never really understood the system, what do we do next? This is the intellectual debt the article is referring to.

    • Re: (Score:3, Insightful)

      by JoeyDot ( 5981942 )
      The car consists of static machinery made by people who *do* understand it. Abstract concepts don't actually map down to specifics, which is why the human brain and AI can amount to garbage with a large build-up of intellectual debt.

      Machine learning is very different from using a calculator or a car as it relies on evolution rather than intelligent design. The latter is how your car and calculators are made.

      What you end up with, with machine learning is if not careful a bunch of little artificially evo
    • by hackertourist ( 2202674 ) on Saturday July 27, 2019 @12:15PM (#58997386)

      I can use a car without understanding how an Internal combustion engine works.

      But there are plenty of people who do understand how an internal combustion engine works, even if designing one is now a job that involves dozens to hundreds of people.

      The danger with machine learning is the potential for having nobody who really understands why the answers given by the machine are correct.

      If everybody takes the answer at face value and something bad happens as a result, you're too late.

      • But there are plenty of people who do understand how an internal combustion engine works

        But even then, there are areas where even experts don't know all the details. For instance, understanding how the flame front propagates inside the cylinder. For a long time, that's just been a matter of trial and error, and the 'master's intuition'.

    • On one hand, you're right, people use tools without understanding how they work.

      On the other hand, someone understands how those tools work. No one understands how complex ML models work. They just eat data, and poop out answers.

      On the gripping hand, these things are inscrutable enough that they'll be as difficult to sabotage as they are to understand, so they're not a real danger until we invent hard AI. Maybe by then we'll have tools for understanding what's going on inside of them.

      • by sjames ( 1099 )

        On the gripping hand, these things are inscrutable enough that they'll be as difficult to sabotage as they are to understand, so they're not a real danger until we invent hard AI.

        Unless someone else applies an AI to create input that screws up the deployed AI. They won't understand why their input data provides the (wrong) answer they want, but it will produce it.

    • by ceoyoyo ( 59147 )

      You may not understand how a car works, but lots of people do. In particular, the people who work to build better cars understand how they work.

      We typically have made use of things we (collectively) didn't understand, but have generally filled in the blanks later, and those discoveries have usually led to further improvements and more discoveries.

    • You're not thinking far enough ahead to realize what he's talking about. What happens when, in a future day, we not only get most of our answers from so-called 'AI', and we don't question it, but we also don't even have the capability anymore, intellectually, to falsify an AI's answers (as you would in science to a theory)? More to the point, what happens when you can't falsify/verify what an AI's answer is anymore and it's dead wrong? That's the real danger here. It's easy to say in this moment "Oh well w
    • by sjames ( 1099 )

      Yes, but SOMEBODY knows how it works (many people, in fact). If it fails, you can take it to one of those people to get it fixed.

      Imagine if literally nobody in the world knew how it worked. They knew you had to add gasoline to the tank, but no idea why and not a clue as to where the gas you added last week went.

    • Step 1: Search Slashdot discussion for funny comments
      Step 2: Search for obvious keywords
      Step 3: Look at "insightful" comments

      Step 4: NO PROFIT!

      Commentary on each step:

      1: I do think the loss of humor most saddens me, but that might be circular reasoning.

      2: In this case, the obvious keyword is "freedom", though Zittrain is approaching the problem sideways. See my sig for the course correction, but based on one of his books that I read some years ago, I think he tends to do that.

      3: This first insightful commen

    • by Shaitan ( 22585 )

      "Many people even don't understand how to do simple math, so a calculator does this."

      And our grandfathers were highly alarmed by this, prompting all the "three R's" stuff in their demands for education reform away from No Child Left Behind. They were characterized as just being cheap and ignored, and now a new condition arises: "many people don't even understand how to do simple math."

      This is an example of what is being discussed not a counterpoint. But the big alarm being raised here is that AI isn't just coming up

    • The majority of people already get answers from computers without understanding the answer.

      Hell, many people get answers from, or otherwise use the products of, other people without understanding anything more than the surface use of same. People pitch baseballs without understanding how to solve the simultaneous equations that would represent the action mathematically. The baseball flies right just the same. We all live in a constant state of what this person is calling "intellectual debt."

      It's meaningless. Knowl

    • by rknop ( 240417 )

      But *somebody* understands how the car works. There's a key difference here.

      There *is* a theory of how and why a car works the way it does. You don't have to know it to use it... but the fact that we (as a society) know that allows us to make cars in the first place, and also allows us to figure out how to fix them, etc.

      Understanding really is important. And "intellectual debt", as described in the article, is real.

  • I've been thinking about what happens when AI gives us seemingly correct answers that we wouldn't have thought of ourselves, without any theory to explain them

    If you believe an "answer" that was spit out by someone's computer program without understanding why it was the answer, then you have bigger problems.

    • lol yep, and we have a lot of folks that believe just about anything the politicians say, despite the fact that politicians have been proven time and again to be lying.

    • If you believe an "answer" that was spit out by someone's computer program without understanding why it was the answer, then you have bigger problems.

      Why? For instance, if you can train a machine to recognize tumors on an x-ray better than a human, without understanding why, then what is the "bigger problem"? As long as you have a method to verify the quality of the machine, you can figure out a risk matrix and decide whether trusting the machine is a good choice.
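      For example, one straightforward way to "verify the quality of the machine" is a held-out test set summarized as a confusion matrix; a minimal sketch, with synthetic data standing in for real labeled x-rays:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix

        # Synthetic stand-in: class 1 ("tumor") is the rare class.
        X, y = make_classification(n_samples=5000, n_features=30,
                                   weights=[0.9, 0.1], random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
        tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()

        sensitivity = tp / (tp + fn)   # how many real tumors are caught
        specificity = tn / (tn + fp)   # how many healthy cases are correctly cleared
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")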

      • by ceoyoyo ( 59147 )

        To use your example, pointing to an image and saying "yup, there's a tumor there" is useful for individual patients. But the details of *why* a bit of tissue looks like a tumor are often important clues to more general knowledge about tumors, that can lead to, for example, better treatments.

        • But the details of *why* a bit of tissue looks like a tumor are often important clues to more general knowledge about tumors, that can lead to, for example, better treatments.

          If you want to know what the best treatment is, you could train a different network to suggest treatments. This would probably give you better results, while still not understanding the first (recognition) problem, nor understanding why certain treatments are recommended.

          • by ceoyoyo ( 59147 )

            I didn't say choose the best treatment. I said "lead to." As in, discover. We didn't discover most cancer treatments by dosing up patients with random poisons and hoping some wouldn't die. Most cancer treatments are based on the observation that tumours are normal tissue that grows out of control. That's the kind of thing you might miss if you just use a data mining approach. It's not unique to machine learning either. Medicine, and other fields, spent a long time using very superficial this-is-associated-

      • by tomhath ( 637240 )

        if you can train a machine to...

        I doubt you could train a machine to identify tumors without knowing why it's identifying them, but maybe.

        • I doubt you could train a machine to identify tumors without knowing why it's identifying them

          Step 1: tell the machine "This is what a tumor looks like".
          Step 2: Show it ten million Xrays of tumors
          Step 3: tell the machine "This is what a tumor does not look like".
          Step 4: show it ten million Xrays of non-tumors

          That's it. No knowledge involved.
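          Those four steps really are all that supervised training amounts to. A toy sketch in Python, with random pixel arrays standing in for the X-rays and a small scikit-learn network as the "machine"; the only knowledge that goes in is the labels:

            import numpy as np
            from sklearn.neural_network import MLPClassifier

            rng = np.random.default_rng(0)
            tumor_imgs = rng.normal(0.6, 0.2, size=(1000, 64 * 64))    # "this is what a tumor looks like"
            healthy_imgs = rng.normal(0.4, 0.2, size=(1000, 64 * 64))  # "this is what a tumor does not look like"

            X = np.vstack([tumor_imgs, healthy_imgs])
            y = np.array([1] * 1000 + [0] * 1000)                      # the labels are the only "knowledge"

            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
            clf.fit(X, y)
            print(clf.predict(healthy_imgs[:5]))  # classifies new images; no explanation comes with it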

    • by hey! ( 33014 ) on Saturday July 27, 2019 @12:33PM (#58997416) Homepage Journal

      Understanding how my GPS works doesn't prevent it from degrading my map reading skills. Any skill you don't practice gets rusty.

      And that's really the best case. As Dunning-Kruger tells us, the more sure we are about our understanding the less understanding we are likely to have. AI that tells us what we already think to be true may well be simply crystallizing our biases in a non-disputable form. For example there was an article a few months back in Technology Review about using AI to (a) decide which convicted criminals to incarcerate based on an AI-generated "recidivism score", and (b) the use of AI to direct police to places where crime is most likely. This is a perfect set up for generating self-fulfilling prophecies.

      Take marijuana use. Based on arrest records, we "know" that marijuana use is very common among blacks but extremely rare among whites. <ironyTag>Chances are you've never even met a white person who uses weed</ironyTag>, because you're unlikely to have met any who've been arrested for it. Therefore AI directs police to look for weed use in minority communities, where, wonder of wonders, it is found. This in turn prompts our convict classification AI flag people from those communities as likely to be arrested in the future.

      Now suppose some well-meaning person says, "We'll just eliminate race from our training data." That makes the training data race blind, but not race neutral, because proxies for race are still there. A weed user in a 90% minority census tract is roughly 4x more likely to be arrested than a weed user in a 90% white tract. The problem is that the data we need to train our AI to be fair doesn't exist. Any AI generated classification that has such a huge impact on someone's life needs to be tempered by human judgment.

      And there we run into a snag: the software courts and police are relying on is likely proprietary, and the actual calculation of something like a "recidivism score" is probably a trade secret. You can't challenge the software's assertion that you're twice as likely to use weed in the future as that other guy, because nobody outside the company knows why the software is saying that. As for the courts, their understanding of crime and recidivism is essentially irrelevant if they're using magic numbers spit out by software to make decisions, and even if that understanding is good now it won't remain so for long.

      • We shouldn't expect algorithms to eliminate our biases and correct social issues. Algorithms can only reflect what we tell them or input into them. If we tell an algorithm that black people are more likely to smoke weed and then ask it to find weed smokers, it will largely finger black people.

        • by jiriw ( 444695 )

          Then you haven't understood the point 'hey!' tried to make. Even if you carefully try not to 'input' into that AI that black people are more likely to smoke weed (for example by removing the 'race' trait from the training data), that result will still shine through in the resulting AI through its use of seemingly unrelated but ultimately race-bound traits. For example the neighborhood you live in; that's an easy one to understand, and maybe you should filter that bit as well. But due to the black-box nature of

          • by vakuona ( 788200 )

            I understood the point well enough. I wasn't referring to intent to either bias the algorithm, or avoid bias. The bias is present in society, either because of the way policing works or some other issue.

            At the end of the day, algorithms need inputs, and if those inputs have been affected by bias, then the outputs will show those biases again.

            So, for example, if lots of people in Washington DC are arrested for possession of weed, and you ask the algorithm to find weed smokers, it might return weed smokers, a

      • Instead of simply eliminating race from the training data, in fact, the same ML techniques can "prove" that the remaining data contains significant proxy data for race. Prove this by training a NN to predict race from the remaining data; if that NN accurately predicts race, you can reasonably conclude that eliminating race from the training data will not create a race-neutral decision.
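        A minimal sketch of that experiment in Python, with synthetic and purely illustrative data: drop the protected attribute, then check how well the remaining features still predict it.

          import numpy as np
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import cross_val_score

          rng = np.random.default_rng(0)
          n = 10_000
          race = rng.integers(0, 2, size=n)                       # protected attribute (0/1)
          census_tract = race * 0.8 + rng.normal(0, 0.3, size=n)  # strongly correlated proxy
          income = -race * 0.5 + rng.normal(0, 1.0, size=n)       # weaker proxy
          unrelated = rng.normal(size=n)

          X_without_race = np.column_stack([census_tract, income, unrelated])

          # If this is well above 0.5, the "race-blind" data is not race-neutral.
          score = cross_val_score(LogisticRegression(), X_without_race, race, cv=5).mean()
          print(f"race predictable from the remaining features with accuracy ~{score:.2f}")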

        But even the above experiment doesn't tell you what to do. There are probably multiple ways to proceed, including:

        (1) deve

  • Nope! (Score:5, Insightful)

    by SirAstral ( 1349985 ) on Saturday July 27, 2019 @10:49AM (#58997120)

    Machine Learning is just a tool like a hammer and nails.

    It would sound stupid to say that builders building with hammers and nails are less knowledgeable than builders building with Saws, joinery, and dowels.

    And this article sounds stupid for the same reasons.

    And I would like to point out something else.

    ""A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when.""

    We have always had a world like this; there have always been stupid, ignorant, or lazy humans who need others to tell them what to do. There is a reason groups of people like to pick someone from the group and make them a leader. If one person is making all the decisions, it's just easier to go with the flow at that point and do what you know, because if the leader is at least not a complete moron they will have you perform work that is suited to what you know.

    • It would sound stupid to say that builders building with hammers and nails are less knowledgeable than builders building with Saws, joinery, and dowels.

      In general they are, though. Joinery is tough. (You still usually need a saw if you're framing a house with nails in the normal way).

    • Except machine learning isn't like hammer and nails, it's more like pre-fabricated walls. And yes a builder using them exclusively would quickly forget the nuances of how to support a window frame and standards for spacing of joists etc.
      There's a reason you're not allowed to use a calculator to perform basic addition in primary school and the answer is not because "calculators are like nails".

    • Re:Nope! (Score:5, Insightful)

      by ceoyoyo ( 59147 ) on Saturday July 27, 2019 @01:25PM (#58997574)

      Machine learning is a tool, yes. But what people today call "machine learning" is mostly what we used to call "data mining." You take some data, run it through an ML algorithm, get some model that appears useful, and you're done.

      Data mining used to be a dirty phrase in science for good reason. It's not scientific. It can be extremely useful in an engineering context, it's good at solving problems, but it isn't science because it doesn't provide generalizable principles.

      Fast easy engineering solutions are valuable. But so is scientific understanding. The author's concept of intellectual debt points out that when it's extremely easy to generate the former, the latter falls behind.

    • We have always had a world like this, there have always been stupid, ignorant, or lazy humans that need others to tell them what to do.

      You are contradicting yourself, or you are not seeing that you are supporting the point of the article. The fact is that we always had people who understood the science and technology behind complex relationships, phenomena, outcomes, etc. And those people could rely on published and peer-reviewed science, could call upon the knowledge of other individuals more versed in a particular aspect of the field of interest. We always had an epistemological tree - indeed, the Tree of Science, as it is often called,

      • Nah, the parent is pretty clearly suggesting that we should put the machines in charge. Apparently we need black boxes which will tell us what to do and how to live for ineffable reasons. This retreat from curiosity is nothing to worry about: we lived like this for a very long time (some of us still do), therefore it's fine.
    • by doom ( 14564 )

      A few years back, in an interview where someone actually asked him about linguistics, Chomsky commented there was a peculiar trend where someone would find they could make a prediction-- say train up some software to tell you after you see words like this you're likely to see words like that-- they would announce this as though it were some great discovery. But since there's no underlying theory for this prediction to confirm or deny, it doesn't really tell you anything. He compared it to physicists trying

  • by Empiric ( 675968 ) on Saturday July 27, 2019 @10:57AM (#58997150)

    We have parallels to these opaque machine learning layers with human "instinct", "intuition", and even "religious experiences", which tend to be subordinated to more quantifiable methodologies. Now, like QM concluding as the finalization of physics that "literally anything can happen" as a scientific matter, where in the intermediate stages of science the equivalent of "miracles" were dismissed, we have the unanalyzable as the -conclusion- of the technology effort, where previously we assumed everything to be reducible to definable, understandable algorithms.

    I would say that for a large percentage of the population, their actual reasoning is equally opaque. They assert what they do because it's accepted as "consensus" second hand.

    The distinction with AI being that the influence of unclear chains of reasoning is potentially so much more fast moving and not subject to debate or discussion, particularly when plugged into the economic and political systems we've created.

    And, I have little doubt that a large percentage of the population will not question this "oracle" and those that do will be derided as "anti-science".

    It will be a good time to enhance an awareness of logical fallacies, and the human-centered aspects of philosophy. Since an AI could be excellent at avoiding the former, but would not have any intrinsic capability of the latter. Then we get a systematic application of "better" according to some opaque metric, and it does make a great difference what that metric is, say, whether that follows some systematic application of Chinese "social credits" or a maximization of individual liberties.

    Strange days until the final chapters.

    • by mspohr ( 589790 )

      Climate change deniers fall into this category. They don't like what the science says so they just make up shit to believe.

    • Yep, most of civilization is ad hoc, stuck together with duct tape. It's not just engineering.
    • We have parallels to these opaque machine learning layers with human "instinct", "intuition", and even "religious experiences"

      I think a better analogy would be the laws we institute to govern collective behavior - like markets, and democracy. We use these because they seem to work pretty well, although not compared to any absolute standard, but only compared to other things people have tried. We study them and have noticed some regularities that seem kind of like 'rules' to the collective behavior that

    • If you think that AI will be expert at detecting logical fallacies, you're suffering from a logical fallacy. AI is subject to the same mistakes as other devices and humans - garbage in, garbage out.

      People will train AI, but who will train the trainers?

  • He's illustrating a really good point, but we need to be clear on one thing here: when we're talking about complex AI, good AI, strong AI, "magic" AI, whatever you want to call it... if we want to understand the machine's output, we have to do it on our own. We're never going to be able to follow along with the machine's rationale. If we want the "debt" repaid, that's always going to involve a separate human endeavor. We can't fix AI to be more easily comprehensible to us.

    Maybe this is too obvious of a point to be making, but good AI necessarily involves too many rules for a human mind to ever intuitively grasp. There's a really good example here from basic linear regression 101: Principal component analysis [wikipedia.org]. All other factors being equal, PCA is the superior way to do statistical prediction using linear regression. For prediction, it's simply better than every other standard method, because it seeks to eliminate *all* of the correlations between the predictors... however, it does this by creating new predictors that are combinations of the initial inputs. These new predictors are orthogonal to each other (thus are independent), and this allows you to forecast your dependent variable (the thing that depends on the inputs) much more accurately.

    However, for *explanation purposes*, PCA sucks. The new predictors you've created don't *intuitively* mean anything any more. In achieving higher accuracy, you've sacrificed intuitive understanding. For this reason, PCA often isn't used whenever people are trying to *explain* what the numbers actually mean, how individual factors influence the result. (E.g. "being raised by a single parent increases the risk of teen drug use by X%" or whatever.) Less accurate methods are used, even though these methods don't fully correct for the correlation between the different independent variables (inputs), because they allow people to quantify (albeit less accurately) how much a specific input affects the output.
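    A small illustration of that trade-off in Python with synthetic, purely illustrative data: regression on the principal components works fine for prediction, but each component is a mixture of all the original inputs, so its coefficients no longer map onto individually meaningful factors.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 1000
      # Five correlated predictors driven by a shared latent factor.
      latent = rng.normal(size=(n, 1))
      X = latent @ rng.normal(size=(1, 5)) + rng.normal(0, 0.5, size=(n, 5))
      y = X @ np.array([1.0, 0.0, -0.5, 0.2, 0.0]) + rng.normal(0, 0.1, size=n)

      pca = PCA(n_components=3).fit(X)
      X_pc = pca.transform(X)

      plain = LinearRegression().fit(X, y)       # coefficients refer to the real inputs
      on_pcs = LinearRegression().fit(X_pc, y)   # coefficients refer to abstract mixtures

      print("per-input coefficients:    ", np.round(plain.coef_, 2))
      print("per-component coefficients:", np.round(on_pcs.coef_, 2))
      print("each component mixes every input:\n", np.round(pca.components_, 2))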

    It's a pet peeve of mine that people don't differentiate between the two very different roles of statistics: prediction and explanation. These things are not only very different endeavors (most notably, prediction doesn't have to care about the actual causal relationships), but they can actually be at odds with each other.

    And this is just linear regression 101 I'm talking about here. Not neural nets, not Bayesian networks, not stochastic calculus. Things aren't going to become more intuitive when these tools are added. When you're trying to build a mathematical or computational system to predict or "understand" something, there will always be a fundamental tradeoff between being able to intuitively follow along with what it's doing and why, and allowing it to perform maximally.

    What does that mean for the intellectual debt? Well, we need automated sanity checks plus human reviews. Obviously. But actual understanding of everything probably isn't a realistic goal. It'll happen sometimes--a human looking at the machine's output will find a new pattern that makes sense to human minds. (But that pattern won't be the same pattern the machine used.) And we can't count on those human-decipherable patterns showing up. Indeed, presumably the better AI gets, the more elusive these intuitive patterns will become.

    Asimov wrote a short story [wikipedia.org] on this about computers (which obeyed his laws of robotics [wikipedia.org]) that were basically in charge of all of human civilization, including economic output. They started subtly messing things up, creating minor conflicts. The moral at the end of the story was that humans needed some conflict to grow, to not stagnate, so the machines were providing that. That's all well and good, interesting stuff, but as geeks I think it's our job to ensure no one puts machines in positions of authority like that (
    • Here's an analogy for you. To the ignorant masses, academia is a black box that spits out results for which the methodology is inaccessible to them. Some adopt a submissive stance and accept their methods seem to work better than theirs. Others are defiant and will favour methods accessible to them. For example if they have a friend of a friend who took homeopathic medicine and felt better then they could see the data collection and how it was used and will not be told otherwise by scientists using big

  • Invented buzzwordy phrases like "intellectual debt" are created to attract undeserved attention to an otherwise under-researched and thinly-developed thesis, and are themselves prime examples of the intellectual debt he attributes to our reliance on AI.
  • This is really not any different from accepting a working answer that originated as someone's intuition. There isn't logic behind it, at least not explicitly, but it might still be the correct answer.

    However there is a difference with machine learning, because, while not perfect, humans have an intuition about patterns that are coincidental and patterns that might reflect an underlying cause and effect relationship.

    So if software detects that a certain target population can be predicted because they have t

    • by rknop ( 240417 )

      The point isn't machine learning (or other kinds of pattern-finding) vs. human intuition. The point is stuff that seems to be working but we don't know why, vs. stuff that works and we have a general theory that explains why it works.

      The aspirin example he gives in the article is a good one.

      Lots of discoveries start with a correlation we don't understand, or with intuition. In the best cases, those then later get generalized to a theory that allows us to understand a general category of phenomena, and then

  • Isaac Asimov's short story "The Feeling of Power" [wikipedia.org] about this was published just over 60 years ago...

  • by whitelabrat ( 469237 ) on Saturday July 27, 2019 @12:04PM (#58997348)

    If you think about our society, imagine if all of a sudden we didn't have computers, electricity, or even 20th-century machining skills. How good would any of us be at doing complex math on paper? Or how about making your furniture, or lighting your home at night? How would you get around or make food? Good luck getting a horse, let alone taking care of one, and knowing how to use animals for labor. All those country rubes and Amish folks would all of a sudden be the advanced society, and the rest of us would be cheap manual labor willing to do anything for food.

    • Good luck getting a horse, let alone taking care of one, and knowing how to use animals for labor.

      Never mind the fact that we don't understand why a horse does the things it does, very much like the AI.

      • Horses poop and piss because they eat and drink, they eat sometimes because they're hungry and drink when thirsty. They sleep when they are too tired, and fuck when horny and given opportunity.

        That's 99% of a horse's behavior explained, give me my Nobel prize please.

  • We don't know why the electron has the mass it does, and exactly the same charge as the proton but with opposite sign; our best model of their interactions, a part of QED, has all kinds of questionable math operations that mathematicians say aren't valid... but it works...

    Science is full of this, our accepted social norms are, our laws, our upbringing....

    • by rknop ( 240417 )

      On the other hand, QED is itself a theoretical framework (of exactly the kind that the author of the article wants to see for as much knowledge as possible) that allows us to systematize and predict lots of behavior.

      Yeah, there are input parameters into QED that we don't have an explanation for. (Why is the mass of the electron what it is? Etc.) However, it's a theory that provides a general description of the interaction of charged particles and photons, and from that we can model a wide range of behavio

  • Just imagine installing a simple game in Windows 3 or later. You run the EXE/MSI, and a screen pops up saying you're about to install a game, with an OK and Cancel button. You hit OK. Now you go through a lot of other screens, until you finally reach the "Install" button. You hit it, something happens, and you reach a "Completed!" button. This is an extremely simple "Wizard" that's been built. You want to do something (install in this case), and you manage a bunch of GUI screens that does it for you. Great
  • I'm astounded that no one (including, it would appear, Jonathan Zittrain) has mentioned that DARPA has a project intended to (help) answer this exact question, "Explainable Artificial Intelligence": https://www.darpa.mil/program/... [darpa.mil]. Brief synopsis:
    -------------
    The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

    Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
