AI Businesses

AI Experts Say Some Advances Should Be Kept Secret (technologyreview.com) 114

AI could reboot industries and make the economy more productive; it's already infusing many of the products we use daily. But a new report [PDF] by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments -- so much so that some AI research may need to be kept secret. From a report: [...] The study is less sure of how to counter such threats. It recommends more research and debate on the risks of AI and suggests that AI researchers need a strong code of ethics. But it also says they should explore ways of restricting potentially dangerous information, in the way that research into other "dual use" technologies with weapons potential is sometimes controlled.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Wednesday February 21, 2018 @11:27AM (#56163391)

    As if criminals, political operatives, and oppressive governments won't get hold of the required information regardless. Just publish everything and give the rest of us a fighting chance to figure out what's going on and defend ourselves.

    • All technology has its benefits and its problems, and they don't always balance out. But over the long haul, technology has benefited mankind. Otherwise, we'd have a lot more Luddites.

      • No, the Luddites mostly starved and died of exposure. At least the ones that were not killed by the army.

        They were right: they needed training on the new technology (training that was denied to them).

        In the long run, things may be fine, but in the short run you could be dead of starvation and exposure, beaten by police and told to move on down the road (because it's not even profitable to arrest and jail you, as commonly happened in the U.S. during the Great Depression), denied any food assistance

      • by ranton ( 36917 ) on Wednesday February 21, 2018 @12:28PM (#56163775)

        All technology has its benefits and its problems, and they don't always balance out. But over the long haul, technology has benefited mankind. Otherwise, we'd have a lot more Luddites.

        It doesn't really matter whether technology advancements are a net benefit or hindrance to society as a whole. If there is at least a small segment of humanity which can benefit, the technology will be developed. Might as well have the benefits of technological advancements for everyone if we will need to deal with the negative side anyway.

        There are some technologies which require an immense financial investment that can be delayed for a while, such as nuclear weapons. But even that has its limits as we have seen in North Korea. AI on the other hand will not be expensive. If we couldn't keep nuclear weapons out of the hands of North Korea, what hope do we have of keeping AI advancements which fit on a phone away from bad faith actors?

      • by cstacy ( 534252 )

        All technology has its benefits and its problems, and they don't always balance out. But over the long haul, technology has benefited mankind.

        If it's a benefit, it's not my problem.

        (Now, where did I leave those memory engrams, I'm always forgetting them...)

    • by Sloppy ( 14984 ) on Wednesday February 21, 2018 @12:18PM (#56163697) Homepage Journal

      This seems pretty common-sense, and it's also what we've determined to always be best in the computer world. So far.

      But this approach seems flawed if we think about, say, nuclear weapons engineering. The more everyone (including you) knows about how to make a nuclear weapon detonate correctly, the more dangerous other people become, but you don't really get to apply any of that to your defense. It's not like your bomb shelter will get better because you finally figured out how to get the imploder timed right. It's not like your political efforts to limit nuclear proliferation benefit from proliferation of the engineering knowledge. It's not like your coping-with-horror-by-using-fatalistic-nihilism-and-humor will benefit from th-- wait, ok, so it does happen to help that one defense, but that's an unusual case.

      For the most part, nuclear weapon engineering proliferation is bad for everyone, in a way that completely contrasts with, say, knowing that fingerd has an exploitable buffer overflow bug.

      Are there some conditions where software tech crosses over into being more like nuclear weapons and less like other software tech? More to the point: what are the general conditions where tech knowledge proliferation is bad rather than good, such that buffer overflows get categorized one way and nukes the other? The condition isn't really "software good, hardware bad," no way.

      That some people think some software tech is crossing over, or soon may, makes me wonder WTF they figured out how to do!

      (BTW, for some reason I actually like that they used movie plot threats in the guise of latching onto Black Mirror trendiness. Let's face it, everyone: movie plot threats are fun to think about, and I don't care what The Almighty Bruce says!)

      • knowing that fingerd has an exploitable buffer overflow bug.

        OH MY GOSH, time to stop using finger, and migrate to Facebook instead!

      • the more dangerous other people become

        citation needed

        You are trying to make your point based on your emotions and absolutely no rationale. You have a fundamental misunderstanding of reality.
        It's like you're funneling gun control rhetoric and North Korea propaganda directly into this issue and coupling it with your hare-brained knee-jerk personality.

        It's called DECENTRALIZATION OF POWER and it is the only way you have stability in any situation. Ups and downs are smoothed out as each body maneuvers independently. It really is infuriating to have

    • by bigpat ( 158134 )

      As if criminals, political operatives, and oppressive governments won't get hold of the required information regardless. Just publish everything and give the rest of us a fighting chance to figure out what's going on and defend ourselves.

      Agreed. The idea that oppressive governments won't get hold of these tools, and that "only the good guys" keeping the technology secret is good for society, is dangerously naive. I mean, if you guys want to invite me to your good-guy cabal meetings and offer some good bonuses, stock options and profit sharing, then count me in.

    • Pretty much. Good luck keeping it from the Chinese Government, NSA, or skilled unaffiliated adversaries.
    • Moreover, in this day and age, labelling it as digital forbidden fruit is sure to get the attention of Neo-Eve... I wholeheartedly agree with Brother Lal: the free flow of information is the only valid option. Considering most standard humans wouldn't know what to do with it, and it's only a handful trying to blow up the world, it's probably safer if the savvy paladins or neutrals have a chance to know what it's about and crowdsource defense, after all. The greatest talents rarely go for the job... the environment pr
  • by Anonymous Coward on Wednesday February 21, 2018 @11:31AM (#56163403)

    The biggest danger is secrecy, not technology. We should never grant the state any advantage. If we don't fight back, we are doomed... DOOMED!

  • throughout history. It's a matter of educating and letting people make up their own minds based on that.

    For myself, I'm pretty happy following Asimov's three laws with a heaping helping of the zeroth law on top. Yeah, your answers to them can get pretty abstract when you pursue them into special cases but on the whole? Sound, very sound reasoning.

  • Open Source (Score:5, Informative)

    by randomErr ( 172078 ) <.ervin.kosch. .at. .gmail.com.> on Wednesday February 21, 2018 @11:45AM (#56163473) Journal

    Isn't that what they said about OSS? To be honest, aren't the bad guys just going to use last generation's AI to crack the current generation and then make it available on the black market? Look at how long it took to crack DVD and Blu-ray keys. They were meant to be the most advanced of their time.

  • explore ways of restricting potentially dangerous information

    Yeah! Down with the antiquated notion that information wants to be free [wikipedia.org], and let us all welcome the concept of security through obscurity.

    "dual use" technologies with weapons potential is sometimes controlled.

    Right! And let's reimpose limits on exporting strong encryption [slashdot.org], while we are at it.

    • Hey! At least if they suppress it, there is absolutely no way that bad actors in other countries get the technology.

      Nuclear technology is equally dangerous and they've successfully kept a lid on that!

      Some of this stuff is going to be so cheap and easy to do in one more decade.

  • by DrTJ ( 4014489 ) on Wednesday February 21, 2018 @11:48AM (#56163497)
    How do you prevent people, good or bad, from evolving the technology or science? The only fool-proof way of keeping something secret is to not find it out in the first place. This sounds like an effort to stop the wind from blowing. A lot of people seem to be afraid of AI, but I fear the stupidity of people more than I do the intelligence of machines. Maybe the real threat is that we seem to be accelerating our own stupidity.
    • It's not the intelligence of machines directly we need to fear - there doesn't seem to be any immediate threat of true autonomy / free will any time soon. It's the fact that their limited intelligence has no ethical limitations, and can be harnessed to easily enable things that would be prohibitively expensive to do any other way.

      As one example: cameras on every street corner - several countries have done that already, and the results are a bit unsettling to anybody who has ever read 1984, but the actual po

  • Now billions of dollars of stocks are traded each day by AI. Businesses will use AI for competitive advantage. Governments will use AI for economic superiority. In the future, it will be AI against AI in competition. The poor person working for minimum wage is already feeling the effects of AI. Business is using AI to determine the most efficient use of labor, the optimal price of the product they are selling, and the logistics of manufacturing. How can the average person compete with the money and
    • Re: (Score:2, Insightful)

      None of that is "AI". If you nutters redefine "analytics" as "AI" then the term is completely meaningless.
      • None of that is "AI". If you nutters redefine "analytics" as "AI" then the term is completely meaningless.

        Analytics, models, and simulations will get more advanced and more efficient. The first lessons when learning machine learning are in analytics, like regression and gradient descent. Can you tell me a definitive line between practical analytics and machine learning?
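
        On that blurry line, a concrete sketch may help: the code below fits a line by gradient descent, which is plain least-squares "analytics", yet the very same update rule, scaled up to millions of parameters, is what trains neural networks. (Illustrative code with made-up data, not from the article or the report.)

```python
# Least-squares regression fit by gradient descent: the point where
# "analytics" and "machine learning" are literally the same algorithm.
# The data below is made up for illustration.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]            # exactly y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges toward w=2, b=1
```

        Swap the one-line model for a deep network and the hand-written gradients for autodiff, and this is "AI"; the optimization loop itself never changes.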

        • by Dog-Cow ( 21281 )

          Here's a definition of AI that should work for everyone: a system that can contrive a solution to a stated problem, using nothing more than its own programming and its own choice of inputs.

  • All science is neutral: a gallon of gas can be used to power a car, or used in a fire bomb. As AI gets better, some low-skilled jobs will be lost, and some high-skilled jobs created.
    • some low-skilled jobs will be lost, and some high-skilled jobs created

      The underlying problem is that 50% of the population is of below average intelligence. They will all be hungry, and, in America at least, have guns and 4x4s.

      In my view, poor people with guns and 4x4s are more dangerous than Nerds with access to Sourceforge and Github put together.

      In the view of the US government, access to Pornhub is more dangerous than idiots with guns.

      Maybe I should ask /. - which is more dangerous out of the above

      • While I can understand a very small bit about the guns, 4x4s? "The underlying problem is that 50% of the population is of below average intelligence" -- I am reminded that 47% of the time, all statistics are made up. In the '60s the same was said about people from Africa, and used to deny them the vote. If you can give me CREDIBLE studies, I will look at them.
        • People of below average intelligence using guns is how we get muggings and carjackings. But what's really dangerous and prevalent these days is crazy people wandering around loose and being able to use guns. This is of course why taking guns away from everybody will magically cause crazy people to become sane and stop annoying the rest of us in any way. San Francisco will no longer have to take its BART stations offline on a regular basis to clean human excrement out of the escalators.

      • by Lab Rat Jason ( 2495638 ) on Wednesday February 21, 2018 @01:46PM (#56164513)

        This is so eerie... I have guns, and a 4x4... AND I have access to sourceforge and git hub... I'm so conflicted right now!!!!!

  • by ceoyoyo ( 59147 ) on Wednesday February 21, 2018 @11:53AM (#56163523)

    When the US was restricting export of public key cryptography, geeks used to print the equation on t-shirts. The only technology that's ever been even kind of successfully restricted is nuclear, and that's mostly worked by restricting physical equipment rather than knowledge.
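
    The equation in question was RSA, and the t-shirt joke worked because the whole "munition" fits in a few lines of modular arithmetic. A toy sketch (the actual shirts used Perl; the tiny primes here are purely illustrative):

```python
# Textbook RSA in a handful of lines -- the kind of thing once printed
# on t-shirts to mock US crypto export controls. Toy primes only:
# real RSA uses primes hundreds of digits long.

p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

m = 42                     # a "message" encoded as a number < n
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers the message
print(n, e, d)             # 3233 17 2753
```

    The hard part was never the algorithm above but factoring n back into p and q once the primes are large, which is exactly why restricting publication of the math accomplished nothing.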

    • I remember, years ago, a Wright State University student whose senior project was a paper on how to build a nuclear bomb. The government came in and, after a court case, classified his paper. The student had used only non-classified information. It even inspired a movie loosely based on the case.
      • by ceoyoyo ( 59147 )

        People not in the know always think technological "secrets" are harder to figure out than they are. Once you know something is possible and have some hints about how it's done, the "secrets" usually don't stay secret very long.

        Now you can find all kinds of videos on YouTube illustrating how to build nuclear bombs. Complete with things like "the interstage material, FOGBANK, is classified, but based on available documents it is likely a type of aerogel...."

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Wednesday February 21, 2018 @11:55AM (#56163527)
    Comment removed based on user account deletion
  • Right-o (Score:4, Informative)

    by Artem S. Tashkinov ( 764309 ) on Wednesday February 21, 2018 @11:57AM (#56163541) Homepage

    Like the fact that there's no AI whatsoever. There are limited-purpose algorithms for very narrow tasks which work a lot like a calculator: i.e. they far surpass what the average human being is capable of (most people cannot compute in their heads), yet they cannot reason (which is why image recognition systems can be easily fooled), think (which is why proper translation is a pipe dream) or invent anything (which is why they cannot come up with new ideas). The hype about AI is so strong that people actually fear they will be enslaved by robot overlords soon, yet we are no closer to general AI than we were 50 years ago; we just have much better hardware with which to train those algorithms.

    Even image recognition AI, which is touted as a breakthrough, is largely incomplete, because animal intelligence doesn't require petabytes of data and petaflops of compute power to recognize objects in all their embodiments; maybe that's because we've deciphered only the outermost layer of the nervous system.

    In short, there's no AI to speak of. Up to this day we've just been automating the most primitive tasks, which don't require intelligence per se. They require statistics, lots of data, and lots of compute power.

    Oh, and these algorithms are almost completely opaque, so we cannot understand them, properly tune them or even expect proper answers from them all the time.

    • (most people cannot compute in their heads)

      And even those prodigy calculators who can do not stand a chance against a pocket calculator, much less a computer.

  • by petes_PoV ( 912422 ) on Wednesday February 21, 2018 @12:07PM (#56163621)

    ... so much so that some AI research may need to be kept secret.

    So a research group thinks there should be more research into AI. But they think the topics to be researched should be kept secret.

    I wonder if their AI driven grant application writer came up with that one.

  • Option 1:

    Make the premise of the current EU Data Protection Act a component of the US Constitution and the subject of a UN treaty that applies to all nations whether they sign it or not. The US is the only important nation in this, because it's the only nation where information and infrastructure only exist for the very rich. In other countries, either nobody has either or everybody has both.

    The US then needs a law that provides strong privacy to everybody, under all circumstances, where the individual has

    • by Anonymous Coward

      Pretty sure option 2 was used by many communist and fascist countries in the last 100 years or so. Not sure it was that much better.

  • by DNS-and-BIND ( 461968 ) on Wednesday February 21, 2018 @12:27PM (#56163763) Homepage
    This was written in 1853 by Charles Tomlinson, and is only an excerpt of the treatise, but it shows that people recognized that 'security' through obscurity was not really security at all, way before the digital age.

    A commercial, and in some respects a social, doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and already know much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lockpicking long before locksmiths discussed it among themselves, as they have lately done. If a lock -- let it have been made in whatever country, or by whatever maker -- is not so inviolable as it has hitherto been deemed to be, surely it is in the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged, that an acquaintance with real facts will, in the end, be better for all parties. Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear -- milkmen knew all about it before, whether they practiced it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased.

    ...The unscrupulous have the command of much of this kind of knowledge without our aid; and there is moral and commercial justice in placing on their guard those who might possibly suffer therefrom. We employ these stray expressions concerning adulteration, debasement, roguery, and so forth, simply as a mode of illustrating a principle -- the advantage of publicity. In respect to lock-making, there can scarcely be such a thing as dishonesty of intention: the inventor produces a lock which he honestly thinks will possess such and such qualities; and he declares his belief to the world. If others differ from him in opinion concerning those qualities, it is open to them to say so; and the discussion, truthfully conducted, must lead to public advantage: the discussion stimulates curiosity, and curiosity stimulates invention. Nothing but a partial and limited view of the question could lead to the opinion that harm can result: if there be harm, it will be much more than counterbalanced by good.
    • Thank you - a very interesting read. Unfortunately, too many will get bogged down in the archaic verbiage, but it will still provide some historical enlightenment to those who are willing to wade through the whole article.

  • Comment removed based on user account deletion
  • The only scenario in the attached article that would worry me are the assassin robots.
    - Spammers using AI: a smart marketing person can do this now.
    - AI to find holes in an OS or software: a university student can do this now.
    - The Big Brother stuff: we already have that right now, and it is a false narrative the way they put it.


  • Like the plans to bulldoze hypothetically failing nuclear reactors into the hills, we must be prepared to hide from all men what a pitiful state AI research is in.

    Cf. the Turing criteria
  • What other technologies are members of our society keeping quiet? Are we already in space?

  • You can't keep that a secret. We couldn't keep the atomic bomb a secret. The only reason every criminal doesn't have one in his garage is because you need massive industrial processing to refine the uranium and make a bomb. Secrecy has nothing to do with it. A paper on how to build the A-bomb was published by a high school student in the 80s, and that ruffled quite a few feathers. Experts said it would work; but of course the kid was missing the key ingredient and wasn't likely to ever get it.

    AI is not

    • by gweihir ( 88907 )

      And that is just it. You cannot keep it secret, but trying to do so will prevent being prepared.

    • The other reason criminals don't have nukes is that they're useless for most criminal activity. They're no good for taking out the guy on the block who's not paying protection money. It's hard to hold up someone with a nuke. They aren't good for defense in gunfights, because you can't take out someone in handgun or spray-and-pray range without harming yourself.

      Consider the 1982 Falklands/Malvinas war. The British had nukes and delivery systems. It didn't make a bit of difference in the war or the ne

  • You can keep a secret from a large part of the general public, but not anybody else who is motivated to obtain that "secret".

    • by godrik ( 1287354 )

      I don't see how you can keep these secret now. We have MS degrees in AI all over the country and the world. We are talking about thousands, maybe tens of thousands, of MS graduates in that particular topic a year. That does not count the hundreds or thousands of PhD graduates in AI a year. Or the millions of computer- and math-savvy graduates at all levels a year who can just pick up a 20-year-old book and train themselves.

      Too many people know already, and the barrier to entry is also not too high. It seems really

  • Hello Skynet.

  • by Anonymous Coward

    What if Hillary and the DNC had some (artificial) intelligence? Maybe they would have found something on Trump in two years.

    • ROFL - - - sorry, but THIS one really twigged my funny bone. I've been (more or less) a voting Democrat all my life, but I just HAVE to post this :

      Hillary and the DNC _NEED_ some (artificial) intelligence, since they don't seem to have any REAL intelligence!

  • Whatever AI is, it's the tool, not the goal. A search-and-rescue bot and a search-and-destroy bot will be 99% the same, and one can probably be converted into the other with an AK-47 and duct tape. It's a glorified version of trying to make a version of MS Project that won't let you plan the Holocaust. The software doesn't know what it's doing; it only knows estimates and dependencies and lead times and resource constraints. Same with supply chain planning, production planning, etc. The AlphaGo team is now work

  • AI is like a Hammer.
    Before AI, er, I mean hammers, we didn't have nails.
    Once we had hammers all the world looked like a nail.
    Later came screw drivers and WOW! power screw drivers.
    Now we use screws where once we used nails, before that pegs and before that vines to bind.
    Can you envision what AI will let us do?
    Maybe you have some limited ideas.
    But before nails and screws people didn't foresee all the things we do with them.
    Now they're standard tools of the trade.
    Before screwdrivers and hammers we didn't have

  • If you've discovered some new application or approach in AI, there's no reason to believe that the same work couldn't be done by someone else. Because of that, disclosure of what is possible is valuable to the public at large. This isn't nuclear proliferation, where the work required to develop a weapon is necessarily large scale. This is highly scalable inference, and it is inevitable.

    And if you uncork Skynet, well, publish and perish, I guess...

  • Not to oversimplify (which is what any "solution" that involved classifying AI would be), most of the sample problems posed, other than the Minority Report government, would be best addressed by more AI. In fact, a somewhat toned down version of the Minority Report scenario probably is the solution. Some years ago, Stephen Hawking asked how we could survive another century of our own technology. He suggested getting off planet, but the solution for the rest of us is that we can eventually expect to be watch
  • Codex Entry 471:

    Only Devs of the Circle, with proper Templar oversight, may use code. Else they may be claimed by the Fade, and become h@xOrs, spawning no end of bots and chaos. Though this will cause a certain amount of ill will in those who have given their life to code, the risk is simply too high. The King must reign, the Templars enforce, and the Devs obey.

  • It was also AI researchers (top ones, like Minsky) who confidently claimed that human-level intelligence was going to be here 30 years ago. These people seem to have forgotten their history, and are about to repeat the same mistakes which made them an academic laughingstock.
  • The real fun doesn't start till we start coding AIs in DNA. Now who's up for a game of CRISPR?
  • but hell no.

    A true* AI is an achievement level on par with the wheel, the discovery of fire, writing, mathematics, nano-technology and nuclear power.
    The moment a true AI is born, for better or worse, the world will never be the same. Future generations will note the past as Pre or Post-AI.

    Any and all research needs to be fully transparent and transcend all our petty disagreements we have with each other.

    *True AI = Fully sentient artificial life. Not the bullshit plethora of if / then statements we have to

    • by gweihir ( 88907 )

      Oh, this is about true AI? So no problem, that will not happen anytime soon and possibly not ever.

  • Very often in research, the time is ripe for specific developments and others are just a few years (or months) behind the leading ones. If the first to find it then keep their results secret, they rob everybody of time to prepare for misuse.

    This is an exceptionally stupid idea. Not that it is new.

  • I support the right of every American to download, print, keep, and bear kill-bots.
