Comment: Re:The solution is simple (Score 1) 227

by jrincayc (#48828549) Attached to: An Open Letter To Everyone Tricked Into Fearing AI

That is harder than you might think. From Smarter than us ( https://drive.google.com/file/... ):

"Why aren’t they a solution at all? It’s because these empowered humans are part of a decision-making system (the AI proposes certain approaches, and the humans accept or reject them), and the humans are the slow and increasingly inefficient part of it. As AI power increases, it will quickly become evident that those organizations that wait for a human to give the green light are at a great disadvantage. Little by little (or blindingly quickly, depending on how the game plays out), humans will be compelled to turn more and more of their decision making over to the AI. Inevitably, the humans will be out of the loop for all but a few key decisions.

Moreover, humans may no longer be able to make sensible decisions, because they will no longer understand the forces at their disposal. Since their role is so reduced, they will no longer comprehend what their decisions really entail. This has already happened with automatic pilots and automated stock-trading algorithms: these programs occasionally encounter unexpected situations where humans must override, correct, or rewrite them. But these overseers, who haven’t been following the intricacies of the algorithm’s decision process and who don’t have hands-on experience of the situation, are often at a complete loss as to what to do—and the plane or the stock market crashes."

"Consider an AI that is tasked with enhancing shareholder value for a company, but whose every decision must be ratified by the (human) CEO. The AI naturally believes that its own plans are the most effective way of increasing the value of the company. (If it didn’t believe that, it would search for other plans.) Therefore, from its perspective, shareholder value is enhanced by the CEO agreeing to whatever the AI wants to do. Thus it will be compelled, by its own programming, to present its plans in such a way as to ensure maximum likelihood of CEO agreement. It will do all it can do to seduce, trick, or influence the CEO into agreement. Ensuring that it does not do so brings us right back to the problem of precisely constructing the right goals for the AI, so that it doesn’t simply find a loophole in whatever security mechanisms we’ve come up with."

Comment: Re:Fear (Score 1) 227

by jrincayc (#48828471) Attached to: An Open Letter To Everyone Tricked Into Fearing AI

>If you are nice to others they will generally be nice to you.
That only really matters if you and the others are roughly equal.
>Making other people happy makes you feel good too.
This is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
So?
>If you're smart it's better to uphold the law and not hurt others.
Why?

A lot of the reasons (such as most of the ones you listed) that people argue make it reasonable to be nice to others are only relevant if we have reasonably similar amounts of power. If you want me not to worry about AI, argue that it is reasonable to be kind to ants, because that will be the level of power difference.

Personally, I think it is more important that we concentrate on AIs being ethical in general than on them doing exactly what we want.

Comment: Smarter than us (Score 1) 227

by jrincayc (#48826763) Attached to: An Open Letter To Everyone Tricked Into Fearing AI

I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it at https://intelligence.org/smart... or, since it is CC BY-NC-SA 3.0, you can simply download it from https://drive.google.com/file/...

The book contains the following summary:

1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.
3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).
4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but it is very hard to do so.
5. The relevant experts do not seem poised to solve this problem.
6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.

The only one of those statements I have much doubt about is 4. Even if the AIs are safe, they still probably will not be under human control.

AI

An Open Letter To Everyone Tricked Into Fearing AI 227

Posted by timothy
from the robot-is-making-me-post-this dept.
malachiorion writes If you're into robots or AI, you've probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose and its message? I spoke to the organization that released the letter, and to one of the AI researchers who contributed to it. As is often the case with AI, tech reporters are getting this one wrong on purpose. Here's my analysis for Popular Science. Or, for the TL;DR crowd: "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."
The Military

How the Pentagon's Robots Would Automate War 117

Posted by Soulskill
from the peace-reigns-when-the-war-servers-are-down-for-scheduled-maintenance dept.
rossgneumann writes: Pentagon officials are worried that the U.S. military is losing its edge compared to competitors like China, and are willing to explore almost anything to stay on top—including creating robots capable of becoming fighting machines. A 72-page document throws detailed light on the far-reaching implications of the Pentagon's plan to monopolize imminent "transformational advances" in biotechnology, robotics and artificial intelligence, information technology, nanotechnology, and energy.
Graphics

NVIDIA Begins Requiring Signed GPU Firmware Images 192

Posted by Soulskill
from the always-looking-out-for-the-little-guy dept.
An anonymous reader writes: In a blow to those working on open-source drivers, soft-mods for enhancing graphics cards, and Chinese knock-offs of graphics cards, NVIDIA has begun signing and validating GPU firmware images. With the latest-generation Maxwell GPUs, not all engine functionality is exposed unless the hardware detects that the firmware image was signed by NVIDIA. This is a setback for the open-source Nouveau Linux graphics driver, but the project is working toward a solution in which NVIDIA provides signed, closed-source firmware images that the driver project can redistribute. Initially, the lack of a signed firmware image will only prevent some thermal-related bits from being programmed, but with future hardware the list of requirements is expected to grow.
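NVIDIA hasn't published the details of its scheme, but the general pattern of signed-firmware validation can be sketched in a few lines of Python. This is a toy illustration only: real GPU firmware signing uses asymmetric keys baked into the hardware, and here an HMAC with a shared secret stands in for that signature; all names and data are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key; real hardware would hold only a public
# verification key, with the private key kept by the vendor.
VENDOR_KEY = b"vendor-private-key"

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: produce a signature over the firmware image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def validate_firmware(image: bytes, signature: bytes) -> bool:
    """Hardware side: refuse full functionality unless this check passes."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(expected, signature)

firmware = b"\x00\x01fan-control-microcode"
sig = sign_firmware(firmware)

print(validate_firmware(firmware, sig))             # True: image accepted
print(validate_firmware(firmware + b"patch", sig))  # False: modified image rejected
```

The point of the mechanism is the second call: any modified image, such as a community soft-mod, fails validation, which is exactly why this locks out open-source reimplementations unless the vendor supplies signed blobs.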
Google

Google Testing Drone Delivery System: 'Project Wing' 52

Posted by Soulskill
from the ok-google-bring-me-a-pizza dept.
rtoz writes: Google's research division, Google X, is developing a fleet of drones to deliver goods. This drone delivery system is called "Project Wing," and Google X has been developing it in secret for the past two years. During a recent test in Australia, drones successfully delivered a first aid kit, candy bars, dog treats, and water to a couple of Australian farmers. The self-flying vehicle uses four electrically-driven propellers to get around, and it has a wingspan of about five feet. It weighs just under 19 pounds and can take off and land without a runway. Google's long-term goal is to develop drones that could be used for disaster relief by delivering aid to isolated areas.
Cloud

IBM Opens Up Its Watson Supercomputer To Researchers 28

Posted by samzenpus
from the try-it-out dept.
An anonymous reader writes IBM has announced the "Watson Discovery Advisor," a cloud-based tool that will let researchers comb through massive troves of data, looking for insights and connections. The company says it's a major expansion in capabilities for the Watson Group, which IBM seeded with a $1 billion investment. "Scientific discovery takes us to a different level as a learning system," said Steve Gold, vice president of the Watson Group. "Watson can provide insights into the information independent of the question. The ability to connect the dots opens up a new world of possibilities."
Security

Securing the US Electrical Grid 117

Posted by samzenpus
from the locking-things-down dept.
An anonymous reader writes The Center for the Study of the Presidency & Congress (CSPC) launched a project to bring together representatives from the Executive Branch, Congress, and the private sector to discuss how to better secure the U.S. electric grid from the threats of cyberattack, physical attack, electromagnetic pulse, and inclement weather. In this interview with Help Net Security, Dan Mahaffee, the Director of Policy at CSPC, discusses critical security challenges.
AI

Robo Brain Project Wants To Turn the Internet Into a Robotic Hivemind 108

Posted by samzenpus
from the watch-and-learn dept.
malachiorion writes Researchers are force-feeding the internet into a system called Robo Brain. The system has absorbed a billion images and 120,000 YouTube videos so far, and aims to digest 10 times that within a year, in order to create machine-readable commands for robots—how to pour coffee, for example. From the article: "The goal is as direct as the project’s name—to create a centralized, always-online brain for robots to tap into. The more Robo Brain learns from the internet, the more direct lessons it can share with connected machines. How do you turn on a toaster? Robo Brain knows, and can share 3D images of the appliance and the relevant components. It can tell a robot what a coffee mug looks like, and how to carry it by the handle without dumping the contents. It can recognize when a human is watching a television by gauging relative positions, and advise against wandering between the two. Robo Brain looks at a chair or a stool, and knows that these are things that people sit on. It’s a system that understands context, and turns complex associations into direct commands for physical robots."
Programming

Ask Slashdot: Future-Proof Jobs? 509

Posted by Soulskill
from the robot-overlord-exterminator dept.
An anonymous reader writes: My niece, who is graduating from high school, has asked me for some career advice. Since I work in data processing, my first thought was to recommend a degree course in computer science or computer engineering. However, after reading books by Jeremy Rifkin (The Third Industrial Revolution) and Ray Kurzweil (How to Create a Mind), I now wonder whether a career in information technology is actually better than, say, becoming a lawyer or a construction worker. While the two authors differ in their political persuasions (Rifkin is a Green leftist and Kurzweil is a Libertarian transhumanist), both foresee an increasingly automated future where most of humanity would become either jobless or underemployed by the middle of the century. While robots take over the production of consumer hardware, Big Data algorithms like the ones used by Google and IBM appear to be displacing even white collar tech workers. How long before the only ones left on the payroll are the few "rockstar" programmers and administrators needed to maintain the system? Besides politics and drug dealing, what jobs are really future-proof? Would it be better if my niece took a course in the Arts, since creativity is looking to be one of humanity's final frontiers against the inevitable Rise of the Machines?
IBM

IBM To Invest $3 Billion For Semiconductor Research 68

Posted by samzenpus
from the big-bucks dept.
Taco Cowboy points out that many news outlets are reporting that IBM plans to spend $3 billion on semiconductor research and development in the next five years. The first goal is to build chips whose electronic components, called transistors, have features measuring just 7 nanometers, the company announced Wednesday. For comparison, that distance is about a thousandth the width of a human hair, a tenth the width of a virus particle, or the width of 16 potassium atoms side by side. The second goal is to choose among a range of more radical departures from today's silicon chip technology -- a monumental engineering challenge necessary to sustain progress in the computing industry. Among the options are carbon nanotubes and graphene; silicon photonics; quantum computing; brainlike architectures; and silicon substitutes that could run faster even if components aren't smaller. "In the next 10 years, we believe there will be fundamentally new systems that are much more efficient at solving problems or solving problems that are unsolvable today," T.C. Chen, IBM Research's vice president of science and technology, told CNET.
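Two of those scale comparisons can be sanity-checked with rough reference figures (assumed here: a large virus is about 70 nm across, and a potassium atom about 0.45 nm in diameter):

```python
# Rough sanity check of the 7 nm scale comparisons.
# Assumed figures: large virus ~70 nm across; potassium atom ~0.45 nm diameter.
TARGET_NM = 7.0           # the 7-nanometer feature size

virus_nm = 70.0
potassium_atom_nm = 0.45

print(TARGET_NM / virus_nm)     # 0.1  -> about a tenth of a virus particle
print(16 * potassium_atom_nm)   # 7.2  -> roughly 16 potassium atoms side by side
```

Both figures land close to the quoted comparisons; the exact ratios depend on which virus and which measure of atomic diameter one assumes.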
AI

The Lovelace Test Is Better Than the Turing Test At Detecting AI 285

Posted by samzenpus
from the why-did-you-program-me-to-feel-pain? dept.
meghan elizabeth writes If the Turing Test can be fooled by common trickery, it's time to consider a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
Privacy

Coddled, Surveilled, and Monetized: How Modern Houses Can Watch You 150

Posted by timothy
from the eye-oh-tee dept.
Presto Vivace (882157) links to a critical look in Time Magazine at the creepy side of connected household technology. An excerpt: A modern surveillance state isn't so much being forced on us, as it is sold to us device by device, with the idea that it is for our benefit. ... ... Nest sucks up data on how warm your home is. As Mocana CEO James Isaacs explained to me in early May, a detailed footprint of your comings and goings can be inferred from this information. Nest just bought Dropcam, a company that markets itself as a security tool allowing you to put cameras in your home and view them remotely, but brings with it a raft of disquieting implications about surveillance. Automatic wants to monitor how far you drive and do things for you, like talking to your house when you're on your way home from work and turning on lights when you pull into your garage. Tied into the new SmartThings platform, a Jawbone UP band becomes a tool for remotely monitoring someone else's activity. The SmartThings hubs and sensors themselves put any switch or door in play. Companies like AT&T want to build a digital home that monitors your security and energy use. ... ... Withings Smart Body Analyzer monitors your weight and pulse. Teddy the Guardian is a soft toy for children that spies on their vital signs. Parrot Flower Power looks at the moisture in your home under the guise of helping you grow plants. The Beam Brush checks up on your teeth-brushing technique. Presto Vivace adds, "Enough to make the Stasi blush. What I cannot understand is how politicians fail to understand what a future Kenneth Starr is going to do with data like this."
