AI Experts Say Some Advances Should Be Kept Secret (technologyreview.com) 114
AI could reboot industries and make the economy more productive; it's already infusing many of the products we use daily. But a new report [PDF] by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments -- so much so that some AI research may need to be kept secret. From a report: [...] The study is less sure of how to counter such threats. It recommends more research and debate on the risks of AI and suggests that AI researchers need a strong code of ethics. But it also says they should explore ways of restricting potentially dangerous information, in the way that research into other "dual use" technologies with weapons potential is sometimes controlled.
Thanks but no thanks. (Score:5, Insightful)
As if criminals, political operatives, and oppressive governments won't get hold of the required information regardless. Just publish everything and give the rest of us a fighting chance to figure out what's going on and defend ourselves.
Re: (Score:2)
All technology has its benefits and its problems, and they don't always balance out. But over the long haul, technology has benefited mankind. Otherwise, we'd have a lot more Luddites.
Re: (Score:2)
No, the Luddites mostly starved and died of exposure. At least the ones that were not killed by the army.
They were right -- they needed training on the new technology (training that was denied to them).
In the long run, things may be fine, but in the short run you could be dead of starvation and exposure, beaten by police and told to move on down the road (because it's not even profitable to arrest and jail you -- as commonly happened in the U.S. during the Great Depression), denied any food assistan
Re:Thanks but no thanks. (Score:4, Insightful)
All technology has its benefits and its problems, and they don't always balance out. But over the long haul, technology has benefited mankind. Otherwise, we'd have a lot more Luddites.
It doesn't really matter whether technology advancements are a net benefit or hindrance to society as a whole. If there is at least a small segment of humanity which can benefit, the technology will be developed. We might as well extend the benefits of technological advancements to everyone, since we will need to deal with the downsides anyway.
There are some technologies which require an immense financial investment that can be delayed for a while, such as nuclear weapons. But even that has its limits as we have seen in North Korea. AI on the other hand will not be expensive. If we couldn't keep nuclear weapons out of the hands of North Korea, what hope do we have of keeping AI advancements which fit on a phone away from bad faith actors?
Re: (Score:2)
All technology has its benefits and its problems, and they don't always balance out. But over the long haul, technology has benefited mankind.
If it's a benefit, it's not my problem.
(Now, where did I leave those memory engrams, I'm always forgetting them...)
Re: (Score:2)
It's sad to say, but the only solution to a governmental panopticon may not be restricting government to needing a warrant (which is failing miserably in the US, with Constitutional protections, to say nothing about 1984-like abuse well underway in Russia, China -- the boot stamping on a human face, forever, is already firmly lodged on half the world's population) but rather making the knowledge available to all, especially so The People can track their politicians who would abuse it to maintain their power
Fighting chance to defend ourselves? (Score:5, Insightful)
This seems pretty common sensy and it's also what we've determined to always be best in the computer world. So far.
But this approach seems flawed if we think about, say, nuclear weapons engineering. The more everyone (including you) knows about how to make a nuclear weapon detonate correctly, the more dangerous other people become, but you don't really get to apply any of that to your defense. It's not like your bomb shelter will get better because you finally figured out how to get the imploder timed right. It's not like your political efforts to limit nuclear proliferation benefit from proliferation of the engineering knowledge. It's not like your coping-with-horror-by-using-fatalistic-nihilism-and-humor will benefit from th-- wait, ok, so it does happen to help that one defense, but that's an unusual case.
For the most part, nuclear weapon engineering proliferation is bad for everyone, in a way that completely contrasts with, say, knowing that fingerd has an exploitable buffer overflow bug.
Are there some conditions where software tech crosses over into being more like nuclear weapons and less like other software tech? More to the point: what are the general conditions where tech knowledge proliferation is bad rather than good, such that buffer overflows get categorized one way and nukes the other? The condition isn't really "software good, hardware bad," no way.
That some people think some software tech is crossing over, or soon may, makes me wonder WTF they figured out how to do!
(BTW, for some reason I actually like that they used movie plot threats in the guise of latching onto Black Mirror trendiness. Let's face it, everyone: movie plot threats are fun to think about, and I don't care what The Almighty Bruce says!)
Re: (Score:3)
knowing that fingerd has an exploitable buffer overflow bug.
OH MY GOSH, time to stop using finger, and migrate to Facebook instead!
Re: (Score:2)
the more dangerous other people become
citation needed
You are trying to make your point based on your emotions and absolutely no rationale. You have a fundamental misunderstanding of reality.
It's like you're funneling gun control rhetoric and North Korea propaganda directly into this issue and coupling it with your hare-brained knee-jerk personality.
It's called DECENTRALIZATION OF POWER and it is the only way you have stability in any situation. Ups and downs are smoothed out as each body maneuvers independently. It really is infuriating to have
Re: (Score:3)
As if criminals, political operatives, and oppressive governments won't get hold of the required information regardless. Just publish everything and give the rest of us a fighting chance to figure out what's going on and defend ourselves.
Agreed. The idea that oppressive governments won't get hold of these tools, and that "only the good guys" keeping technology secret is good for society, is a dangerously naive notion. I mean, if you guys want to invite me to your good-guy cabal meetings and offer a good bonus, stock options, and profit sharing, then count me in.
More Authoritarian Bullshit (Score:4, Informative)
The biggest danger is secrecy, not technology. We should never grant the state any advantage. If we don't fight back, we are doomed... DOOMED!
Nukes: (Score:3)
The problem is... that with a few small hints as to how the technique works, a scientist can intuit the rest. It's not like someone gave Kim Jong-un the details on how to create a nuke... he started with what was publicly available, stole what he could, and worked out the rest (or his scientists did)... it's no different with AI. Keeping a certain method or technology secret is only useful if that tech can't be reverse engineered, intuited, etc... I personally think the cat's out of the bag on this one.
The ethics have been debated (Score:2)
throughout history. It's a matter of educating and letting people make up their own minds based on that.
For myself, I'm pretty happy following Asimov's three laws with a heaping helping of the zeroth law on top. Yeah, your answers to them can get pretty abstract when you pursue them into special cases but on the whole? Sound, very sound reasoning.
Open Source (Score:5, Informative)
Isn't that what they said about OSS? To be honest, aren't the bad guys just going to use last generation's AI to crack the current generation and then make it available on the black market? Look at how long it took to crack the DVD and Blu-ray keys. It was meant to be the most advanced of its time.
Information no longer wants to be free? (Score:2)
Yeah! Down with the antiquated notions of information seeking to be free [wikipedia.org], and let us all welcome the concept of security through obscurity.
Right! And let's reimpose limits on exporting strong encryption [slashdot.org], while we're at it.
Re: (Score:1)
Hey! At least if they suppress it, there is absolutely no way that bad actors in other countries get the technology.
Nuclear technology is equally dangerous and they've successfully kept a lid on that!
Some of this stuff is going to be so cheap and easy to do in one more decade.
The Genie and the Bottle (Score:4, Insightful)
Re: (Score:2)
It's not the intelligence of machines directly we need to fear - there doesn't seem to be any immediate threat of true autonomy / free will any time soon. It's the fact that their limited intelligence has no ethical limitations, and can be harnessed to easily enable things that would be prohibitively expensive to do any other way.
As one example: cameras on every street corner - several countries have done that already, and the results are a bit unsettling to anybody who has ever read 1984, but the actual po
Re: (Score:2)
>acquaintance with real facts will, in the end, be better for all parties.
Hogwash. The public's knowledge of the ease and methods of adulterating milk dealt a heavy blow to the profits of unscrupulous milkmen, it was certainly not better for them.
And unlike milk, the potential for abuse of AI is disproportionately focused on the most wealthy and powerful individuals who can afford the hardware necessary to leverage it most effectively. How dare you suggest that such upstanding individuals, the very bac
Re: (Score:3)
Sure, FPGA-based stuff might be faster, but, in the end, it's either intelligent, or it's not. And in 100% of the cases I have examined, it's not.
There is, in fact, no evidence at all of actual "intelligence" in any of the stuff sold as AI at present.
There is good evidence for the ability to solve certain specific problems (Playing GO or Chess), and a lot of evidence
Re: (Score:2)
"Actual" or "General" Intelligence is irrelevant. As is "understanding", "comprehension", "self awareness" and any other hurdles you put in the definition of "True AI".
What is relevant, is that we are making machines capable of displaying sophisticated "identify a square" levels of pseudo-"intellectual" achievement. We can now offload new and ever-growing classes of task that used to require human intelligence to a machine, where it can be accelerated and parallelized to a degree completely financially in
Re: (Score:1)
There is, in fact, no evidence at all of actually "intelligence" in any of the stuff sold as AI at present.
I'm still waiting for actual evidence of intelligence in humans.
People will be pawns to business AI (Score:1)
Re: (Score:2)
Artificial Intelligence is the not-yet-found solution to a problem that has not yet been solved. Once an algorithm exists, it is no longer considered AI, because it is obvious that there is nothing but a dumb routine doing easily explainable stuff, and therefore the magic is gone.
In the early days of AI research, we established chess as the criterion for true machine intelligence. When computer chess was cracked, it was "obviously" not true AI, so we got Go as the criterion. Now that a computer can play Go, we "know" that the heuristics and algorithms used are not AI.
Catch your breath, grab those goalposts once again, and let's move them still farther down the field.
Re: (Score:1)
None of that is "AI". If you nutters redefine "analytics" as "AI" then the term is completely meaningless.
Analytics, models, and simulations will keep getting more advanced and more efficient. The first lessons when learning machine learning are classical analytical methods, like regression and gradient descent. Can you tell me a definitive line between practical analytics and machine learning?
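The blurry line is easy to see concretely: ordinary least-squares regression fit by gradient descent is simultaneously textbook statistics and the canonical first machine-learning exercise. A minimal sketch in plain Python (the data, learning rate, and step count are illustrative choices, not anything prescribed):

```python
# Fitting y = w*x + b by gradient descent on mean squared error.
# This is "classical analytics" (least-squares regression) and also
# lesson one of any ML course -- there is no sharp line between them.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Noise-free toy data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_line(xs, ys)   # converges to w ~ 2, b ~ 1
```

Whether you call that "analytics" or "machine learning" is a matter of marketing, which is rather the point.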
Re: (Score:2)
Here's a definition of AI that should work for everyone: a system that can contrive a solution to a stated problem, using nothing more than its own programming and its own choice of inputs.
Science is neutral (Score:1)
Re: (Score:1)
The underlying problem is that 50% of the population is of below average intelligence. They will all be hungry, and, in America at least, have guns and 4x4s.
In my view, poor people with guns and 4x4s are more dangerous than Nerds with access to Sourceforge and Github put together.
In the view of the US government, access to Pornhub is more dangerous than idiots with guns.
Maybe I should ask /. - which is more dangerous out of the above
Re: (Score:2)
People of below average intelligence using guns is how we get muggings and carjackings. But what's really dangerous and prevalent these days is crazy people wandering around loose and being able to use guns. This is of course why taking guns away from everybody will magically cause crazy people to become sane and stop annoying the rest us in any way. San Francisco will no longer have to take its BART stations offline on a regular basis to clean human excrement out of the escalators.
Re:Science is neutral (Score:4, Funny)
This is so eerie... I have guns, and a 4x4... AND I have access to SourceForge and GitHub... I'm so conflicted right now!!!!!
Good luck with that (Score:5, Insightful)
When the US was restricting export of public key cryptography, geeks used to print the equation on t-shirts. The only technology that's ever been even kind of successfully restricted is nuclear, and that's mostly worked by restricting physical equipment rather than knowledge.
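The t-shirt joke worked because the restricted "munition" reduces to a few lines of arithmetic. A toy sketch of RSA in Python, using the deliberately tiny primes from the standard textbook example (trivially breakable, for illustration only -- real keys use primes hundreds of digits long):

```python
# Toy RSA key generation, encryption, and decryption.
# Everything here is modular exponentiation -- the "equation on the
# t-shirt". The primes are toy values; never use sizes like this.

p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

m = 65                    # the message, encoded as a number < n
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: m = c^d mod n
```

Restricting the export of a dozen lines like these was exactly as effective as the shirts predicted. (The three-argument `pow(e, -1, phi)` form needs Python 3.8+.)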
Re: (Score:2)
People not in the know always think technological "secrets" are harder to figure out than they are. Once you know something is possible and have some hints about how it's done, the "secrets" usually don't stay secret very long.
Now you can find all kinds of videos on YouTube illustrating how to build nuclear bombs. Complete with things like "the interstage material, FOGBANK, is classified, but based on available documents it is likely a type of aerogel...."
Comment removed (Score:5, Insightful)
Re:So only criminals will have the knowledge (Score:4, Insightful)
Actually, the only REAL way to keep a secret is to NEVER mention that you HAVE a secret !
Right-o (Score:4, Informative)
Like the fact that there's no AI whatsoever. There are limited-purpose algorithms for very narrow tasks which work a lot like the calculator: i.e. they far surpass what the average human being is capable of (most people cannot compute in their heads), yet they cannot reason (which is why image recognition systems can be easily fooled), think (which is why proper translation is a pipe dream) or invent anything (which is why they cannot come up with new ideas). The hype about AI is so strong that people actually fear they will be enslaved by robot overlords soon, yet we are no closer to general AI than we were 50 years ago; we just have much better hardware to use to train those algorithms.
Even image recognition AI, which is touted as a breakthrough, is largely incomplete, because animal intelligence doesn't require petabytes of data and petaflops of compute power to recognize objects in all their embodiments -- maybe because we've deciphered only the outermost layer of the nervous system.
In short there's no AI to speak of. Up to this day we've just been automating the most primitive tasks which don't require intelligence per se. They require statistics, lots of data and lots of compute power.
Oh, and these algorithms are almost completely opaque, so we cannot understand them, properly tune them or even expect proper answers from them all the time.
Re: (Score:2)
(most people cannot compute in their heads)
And even those prodigy calculators who can do not stand a chance against a pocket calculator, much less a computer.
Trust me, I'm a scientist (Score:5, Funny)
So a research group thinks there should be more research into AI. But they think the topics to be researched should be kept secret.
I wonder if their AI driven grant application writer came up with that one.
Better solutions. (Score:2)
Option 1:
Make the premise of the current EU Data Protection Act a component of the US Constitution and the subject of a UN treaty that applies to all nations whether they sign it or not. The US is the only important nation in this, because it's the only nation where information and infrastructure only exist for the very rich. In other countries, either nobody has either or everybody has both.
The US then needs a law that provides strong privacy to everybody, under all circumstances, where the individual has
Re: (Score:1)
Pretty sure option 2 was used by many communist and fascist countries in the last 100 years or so. Not sure it was that much better.
Rudimentary Treatise on the Construction of Locks (Score:5, Informative)
Re: (Score:2)
Thank you - a very interesting read. Unfortunately, too many will get bogged down in the archaic verbiage, but it will still provide some historical enlightenment to those who are willing to wade through the whole article.
The linked article is ill conceived (Score:1)
The only scenario in the attached article that would worry me are the assassin robots.
-spammers using AI, a smart marketing person can do this now.
-ai to find holes in OS or softtware, a university student can do this now
-the big brother stuff: we already have that right now and it is a false narrative the way they put it.
Rudimentary Treatise on the Construction of Locks (Score:2)
This was written in 1853 by Charles Tomlinson, and is only an excerpt of the treatise, but it shows that people recognized that 'security' through obscurity was not really security at all, way before the digital age. (sorry for the double post, screwed up the formatting and it was a wall of text)
A commercial, and in some respects a social, doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons
like what an embarrassment it is to CS (Score:1)
Cf. the Turing criteria
The Real Question (Score:1)
What other technologies are members of our society keeping quiet? Are we already in space?
Dream on (Score:2)
You can't keep that a secret. We couldn't keep the atomic bomb a secret. The only reason every criminal doesn't have one in his garage is because you need massive industrial processing to refine the uranium and make a bomb. Secrecy has nothing to do with it. A paper on how to build the A-bomb was published by a high school student in the 80s, and that ruffled quite a few feathers. Experts said it would work; but of course the kid was missing the key ingredient and wasn't likely to ever get it.
AI is not
Re: (Score:2)
And that is just it. You cannot keep it secret, but trying to do so will prevent being prepared.
Re: (Score:2)
The other reason criminals don't have nukes is that they're useless for most criminal activity. They're no good for taking out the guy on the block who's not paying protection money. It's hard to hold up someone with a nuke. They aren't good for defense in gunfights, because you can't take out someone in handgun or spray-and-pray range without harming yourself.
Consider the 1982 Falklands/Malvinas war. The British had nukes and delivery systems. It didn't make a bit of difference in the war or the ne
Secret from who, exactly? (Score:2)
You can keep a secret from a large part of the general public, but not anybody else who is motivated to obtain that "secret".
Re: (Score:2)
I don't see how you can keep these secret now. We have MS degrees in AI all over the country and the world. We are talking about thousands, maybe tens of thousands, of MS graduates in that particular topic a year. That does not count the hundreds or thousands of PhD graduates in AI a year. Or the millions of computer- and math-savvy graduates at all levels a year who can just pick up a 20-year-old book and train themselves.
Too many people know already, and the barrier to entry is also not too high. It seems really
Hello Secrecy (Score:2)
Hello Skynet.
What if (Score:1)
What if Hillary and the DNC had some (artificial) intelligence? Maybe they would have found something on Trump in two years.
Re: (Score:2)
ROFL - - - sorry, but THIS one really twigged my funny bone. I've been (more or less) a voting Democrat all my life, but I just HAVE to post this :
Hillary and the DNC _NEED_ some (artificial) intelligence, since they don't seem to have any REAL intelligence!
Good luck with that.... (Score:2)
Whatever AI is, it's the tool, not the goal. A search-and-rescue bot and a search-and-destroy bot will be 99% the same, and one can probably be converted into the other with an AK-47 and duct tape. It's a glorified version of trying to make a version of MS Project that won't let you plan the Holocaust. The software doesn't know what it's doing, it only knows estimates and dependencies and lead times and resource constraints. Same with supply chain planning, production planning etc. The AlphaGo team is now work
AI is like a Hammer (Score:2)
AI is like a Hammer.
Before AI, er, I mean hammers, we didn't have nails.
Once we had hammers all the world looked like a nail.
Later came screw drivers and WOW! power screw drivers.
Now we use screws where once we used nails, before that pegs and before that vines to bind.
Can you envision what AI will let us do?
Maybe you have some limited ideas.
But before nails and screws people didn't foresee all the things we do with them.
Now they're standard tools of the trade.
Before screwdrivers and hammers we didn't have
Re: (Score:2)
Once we had hammers all the world looked like a thumb.
FTFY.
And that's why they're not security experts... (Score:1)
If you've discovered some new application or approach in AI, there's no reason to believe that the same work couldn't be done by someone else. Because of that, disclosure of what is possible is valuable to the public at large. This isn't nuclear proliferation, where the work required to develop a weapon is necessarily large scale. This is highly scalable inference, and it is inevitable.
And if you uncork Skynet, well, publish and perish, I guess...
Fire with Fire (Score:1)
Only Devs of The Circle May Use Code (Score:2)
Codex Entry 471:
Only Devs of the Circle, with proper Templar oversight, may use code. Else they may be claimed by the Fade, and become h@xOrs, spawning no end of bots and chaos. Though this will cause a certain amount of ill will in those who have given their life to code, the risk is simply too high. The King must reign, the Templars enforce, and the Devs obey.
You don't say (Score:2)
Re: (Score:2)
To anybody with a clue, Minsky was at best an incompetent AI fanboi.
Meh.. (Score:1)
Not only no (Score:2)
but hell no.
A true* AI is an achievement level on par with the wheel, the discovery of fire, writing, mathematics, nano-technology and nuclear power.
The moment a true AI is born, for better or worse, the world will never be the same. Future generations will note the past as Pre or Post-AI.
Any and all research needs to be fully transparent and transcend all our petty disagreements we have with each other.
*True AI = Fully sentient artificial life. Not the bullshit plethora of if / then statements we have to
Re: (Score:2)
Oh, this is about true AI? So no problem, that will not happen anytime soon and possibly not ever.
Seems they are lacking in actual intelligence (Score:2)
Very often in research, the time is ripe for specific developments and others are just a few years (or months) behind the leading ones. If the first to find it then keep their results secret, they rob everybody of time to prepare for misuse.
This is an exceptionally stupid idea. Not that it is new.
Transparency + Originalism = (Score:1)
I support the right of every American to download, print, keep, and bear kill-bots.
Re: (Score:2)
like the world's most infamous pedo who was just captured
and Turing
this school pumps out deviants
why listen to them
Gee, we don't see many Taisho-era haiku these days. The perfect icebreaker for your next Nazi warlord gathering.