Robotics

Robots4Us: DARPA's Response To Mounting Robophobia 101

Posted by samzenpus
from the hug-your-robot dept.
malachiorion writes: DARPA knows that people are afraid of robots. Even Steve Wozniak has joined the growing chorus of household names (Musk, Hawking, Gates) who are terrified of bots and AI. And the agency's response--a video contest for kids--is equal parts silly and insightful. It's called Robots4Us, and it asks high schoolers to describe their hopes for a robot-assisted future. Five winners will be flown to the DARPA Robotics Challenge Finals this June, where they'll participate in a day-after discussion with experts in the field. But this isn't quite as useless as it sounds. As DRC program manager Gill Pratt points out, it's kids who will be affected by the major changes to come, more so than people his age.
AI

Do Robots Need Behavioral 'Laws' For Interacting With Other Robots? 129

Posted by Soulskill
from the don't-let-your-quake-3-bots-duel dept.
siddesu writes: Asimov's three laws of robotics don't say anything about how robots should treat each other. The common fear is that robots will turn against humans. But what happens if we don't build systems to keep them from conflicting with each other? The article argues, "Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering. National and international technological policies should introduce AIonAI concepts into current programs aimed at developing safe AIs."
Transportation

Ford's New Car Tech Prevents You From Accidentally Speeding 283

Posted by Soulskill
from the autonomy-by-parts dept.
An anonymous reader sends word of Ford's new "Intelligent Speed Limiter" technology, which they say will prevent drivers from unintentionally exceeding the speed limit. When the system is activated (voluntarily) by the driver, it asks for a current maximum speed. From then on, a camera mounted on the windshield will scan the road ahead for speed signs, and automatically adjust the maximum speed to match them. The system can also pull speed limit data from navigation systems. When the system detects the car exceeding the speed limit, it won't automatically apply the brakes — rather, it will deliver less fuel to the engine until the vehicle's speed drops below the limit. If the speed still doesn't drop, a warning noise will sound. The driver can override the speed limit by pressing "firmly" on the accelerator. The technology is being launched in Europe with the Ford S-MAX.
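As described, the limiter amounts to a simple feedback rule: cut fuel when over the limit, add a warning if speed still isn't falling, and yield to a firm press on the accelerator. A toy sketch of that logic (the class, method names, and the 0.9 pedal threshold are illustrative assumptions, not Ford's actual system):

```python
class SpeedLimiter:
    """Toy model of the behavior described above: reduce fuel when over
    the limit, warn if speed still isn't dropping, and let a firm press
    on the accelerator override. All thresholds are illustrative."""

    def __init__(self, limit_kph):
        self.limit = limit_kph
        self.prev_speed = None
        self.limiting = False

    def update_limit(self, sign_limit_kph):
        # A windshield camera or navigation data supplies new limits.
        self.limit = sign_limit_kph

    def step(self, speed_kph, pedal_fraction):
        if pedal_fraction >= 0.9:            # "firm" press overrides
            self.limiting = False
            action = "driver_override"
        elif speed_kph <= self.limit:
            self.limiting = False
            action = "normal"
        elif not self.limiting:
            self.limiting = True             # first response: cut fuel
            action = "reduce_fuel"
        elif speed_kph >= self.prev_speed:
            action = "reduce_fuel_and_warn"  # speed not dropping: warn
        else:
            action = "reduce_fuel"
        self.prev_speed = speed_kph
        return action
```

Note the system never returns a "brake" action; per the description above, it only modulates fuel and warns.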
AI

Steve Wozniak Now Afraid of AI Too, Just Like Elon Musk 292

Posted by timothy
from the I-can't-let-you-do-that-steve dept.
quax writes: Steve Wozniak maintained for a long time that true AI is relegated to the realm of science fiction. But recent advances in quantum computing have him reconsidering his stance. Just like Elon Musk, he is now worried about what this development will mean for humanity. Will this kind of anxiety actually engender the dangers that these titans of industry fear? Will Steve Wozniak draw the same conclusion and invest in quantum computing to keep an eye on the development? One of the bloggers in the field thinks that would be a logical step to take. If you can't beat 'em, and the quantum AI is coming, you should at least try to steer the outcome. Woz actually seems more ambivalent than afraid, though: in the interview linked, he says "I hope [AI-enabling quantum computing] does come, and we should pursue it because it is about scientific exploring." "But in the end we just may have created the species that is above us."
Privacy

Google: Our New System For Recognizing Faces Is the Best 90

Posted by timothy
from the sorry-not-yet-april-fool's dept.
schwit1 writes: Last week, a trio of Google researchers published a paper on a new artificial intelligence system dubbed FaceNet that they claim represents the most accurate approach yet to recognizing human faces. FaceNet achieved nearly 100-percent accuracy on a popular facial-recognition dataset called Labeled Faces in the Wild, which includes more than 13,000 pictures of faces from across the web. Trained on a massive 260-million-image dataset, FaceNet performed with better than 86 percent accuracy.

The approach Google's researchers took goes beyond simply verifying whether two faces are the same. Its system can also put a name to a face—classic facial recognition—and even present collections of faces that look the most similar or the most distinct.
Every advance in facial recognition makes me think of Paul Theroux's dystopian O-Zone.
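For the curious: FaceNet's core idea (per the paper) is to map each face image to an embedding vector such that distance encodes identity, so verification, naming, and clustering all reduce to distance comparisons. A minimal sketch with made-up two-dimensional embeddings and an assumed threshold (the real embeddings are 128-dimensional, and the real work is in the network that produces them):

```python
import numpy as np

def verify(emb_a, emb_b, threshold=1.1):
    """Same person iff the squared L2 distance between embeddings falls
    under a threshold. The threshold value is an illustrative assumption."""
    return float(np.sum((emb_a - emb_b) ** 2)) < threshold

def identify(emb, gallery):
    """Classic recognition: name the nearest enrolled embedding.
    `gallery` maps names to embedding vectors."""
    return min(gallery, key=lambda name: float(np.sum((emb - gallery[name]) ** 2)))
```

Grouping the most similar or most distinct faces, as the summary mentions, is the same trick again: cluster or sort by pairwise embedding distance.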
AI

Lyft CEO: Self-Driving Cars Aren't the Future 451

Posted by timothy
from the but-the-future-branches dept.
Nerval's Lobster writes: Google, Tesla, Mercedes and others are working hard to build the best self-driving car. But will anyone actually buy them? In a Q&A session at this year's South by Southwest, Lyft CEO Logan Green insisted the answer is "No." But does Green truly believe in this vision, or is he driven (so to speak) by other motivations? It's possible that Green's stance on self-driving cars has more to do with Uber's decision to aggressively fund research into that technology. Uber CEO Travis Kalanick's announcement that self-driving cars were the future greatly upset many Uber drivers, and Green may see that spasm of anger as an opportunity to differentiate Lyft in the hearts and minds of the drivers who work for his service. Whether or not Green's vision is genuine, we won't know the outcome for several more years, considering the probable timeframes before self-driving cars hit the road... if ever.
Transportation

Self-Driving Car Will Make Trip From San Francisco To New York City 132

Posted by samzenpus
from the no-hands dept.
An anonymous reader writes with news that Delphi Automotive is undertaking the longest test of a driverless car yet, from the Golden Gate Bridge to midtown Manhattan. "Lots of people decide, at one point or another, to drive across the US. College kids. Beat poets. Truckers. In American folklore, it doesn't get much more romantic than cruising down the highway, learning about life (or, you know, hauling shipping pallets). Now that trip is being taken on by a new kind of driver, one that won't appreciate natural beauty or the (temporary) joy that comes from a gas station chili dog: a robot. On March 22, an autonomous car will set out from the Golden Gate Bridge toward New York for a 3,500-mile drive that, if all goes according to plan, will push robo-cars much closer to reality. Audi's taken its self-driving car from Silicon Valley to Las Vegas, Google's racked up more than 700,000 autonomous miles, and Volvo's preparing to put regular people in its robot-controlled vehicles. But this will be one of the most ambitious tests yet for a technology that promises to change just about everything, and it's being done not by Google or Audi or Nissan, but by a company many people have never heard of: Delphi."
The Internet

Oldest Dot-com Domain Turning 30 48

Posted by Soulskill
from the counts-as-a-digital-antique dept.
netbuzz writes: On March 15, 1985, Symbolics, Inc., maker of Lisp computers, registered the Internet's first dot-com address: Symbolics.com. Sunday will mark the 30th anniversary of that registration. And while Symbolics has been out of business for years, the address was sold in 2009 for an undisclosed sum to a speculator who said: "For us to own the first domain is very special to our company, and we feel blessed for having the ability to obtain this unique property." Today there's not much there.
Robotics

Why It's Almost Impossible To Teach a Robot To Do Your Laundry 161

Posted by timothy
from the apparently-I-can't-do-it-either dept.
An anonymous reader writes with this selection from an article at Medium: "For a robot, doing laundry is a nightmare. A robot programmed to do laundry is faced with 14 distinct tasks, but most washbots right now can complete only about half of them in a sequence. But to even get to that point, there are an inestimable number of ways each task can vary or go wrong—infinite doors that may or may not open."
AI

42 Artificial Intelligences Are Going Head To Head In "Civilization V" 52

Posted by samzenpus
from the race-to-build-Himeji-Castle dept.
rossgneumann writes: The r/Civ subreddit is currently hosting a fascinating "Battle Royale" in the strategy game Civilization V, pitting 42 of the game's built-in, computer-controlled players against each other for world domination. The match is being played on the largest Earth-shaped map the game is capable of, with both civilizations that were included in the retail version of the game and custom, player-created civilizations that were modded into it after release.
AI

Machine Intelligence and Religion 531

Posted by Soulskill
from the i'm-sorry-dave,-god-can't-let-you-do-that dept.
itwbennett writes: Earlier this month, Reverend Dr. Christopher J. Benek raised eyebrows on the Internet by stating his belief that Christians should seek to convert Artificial Intelligences to Christianity if and when they become autonomous. Of course, that's assuming that robots are born atheists, not to mention that there's still a vast difference between what it means to be autonomous and what it means to be human. On the other hand, suppose someone did endow a strong AI with emotion – encoded, say, as a strong preference for one type of experience over another, coupled with the option to subordinate reasoning to that preference upon occasion or according to pattern. What ramifications could that have for algorithmic decision making?
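One way to read "a strong preference for one type of experience over another, coupled with the option to subordinate reasoning to that preference" is as a weighted blend of reasoned utility and an encoded preference. A purely illustrative sketch (every name here is hypothetical, not drawn from any real system):

```python
def choose(options, utility, preference, mood_weight):
    """Pick an option by blending reasoned utility with an encoded
    'emotional' preference. mood_weight=0 is pure reasoning; at
    mood_weight=1 reasoning is fully subordinated to the preference."""
    def score(opt):
        return (1 - mood_weight) * utility(opt) + mood_weight * preference(opt)
    return max(options, key=score)
```

The ramification for algorithmic decision making is visible in the signature: the same options and the same reasoning can yield different choices depending on a single internal weight.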
AI

The Believers: Behind the Rise of Neural Nets 45

Posted by samzenpus
from the back-in-the-day dept.
An anonymous reader writes: Deep learning is dominating the news these days, but it's quite possible the field could have died if not for a mysterious call that Geoff Hinton, now at Google, got one night in the 1980s: "You don't know me, but I know you," the mystery man said. "I work for the System Development Corporation. We want to fund long-range speculative research. We're particularly interested in research that either won't work or, if it does work, won't work for a long time. And I've been reading some of your papers." The Chronicle of Higher Ed has a readable profile of the minds behind neural nets, from Rosenblatt to Hassabis, told primarily through Hinton's career.
Businesses

5 White Collar Jobs Robots Already Have Taken 257

Posted by samzenpus
from the I-for-one-welcome-our-new-robot-coworkers dept.
bizwriter writes: University of Oxford researchers Carl Benedikt Frey and Michael Osborne estimated in 2013 that 47 percent of total U.S. jobs could be automated and taken over by computers by 2033. That now includes occupations once thought safe from automation, AI, and robotics. Positions such as journalist, lawyer, doctor, marketer, and financial analyst are already being invaded by our robot overlords. From the article: "Some experts say not to worry because technology has always created new jobs while eliminating old ones, displacing but not replacing workers. But lately, as technology has become more sophisticated, the drumbeat of worry has intensified. 'What's different now?' asked Leigh Watson Healy, chief analyst at market research firm Outsell. 'The pace of technology advancements plus the big data phenomenon lead to a whole new level of machines to perform higher level cognitive tasks.' Translated: the old formula of creating more demanding jobs that need advanced training may no longer hold true. The number of people needed to oversee the machines, and to create them, is limited. Where do the many whose occupations have become obsolete go?"
United States

US Govt and Private Sector Developing "Precrime" System Against Cyber-Attacks 55

Posted by samzenpus
from the knowing-is-half-the-battle dept.
An anonymous reader writes: A division of the U.S. government's Intelligence Advanced Research Projects Activity (IARPA) is inviting proposals from cybersecurity professionals and academics with a five-year view to creating a computer system capable of anticipating cyber-terrorist acts, based on publicly-available Big Data analysis. IBM is tentatively involved in the project, named CAUSE (Cyber-attack Automated Unconventional Sensor Environment), but many of its technologies are already part of the offerings from other interested organizations. Participants will not have access to NSA-intercepted data, but most of the bidding companies are already involved in analyses of public sources such as data on social networks. One company, Battelle, has included the offer to develop a technique for de-anonymizing Bitcoin transactions (pdf) as part of CAUSE's security-gathering activities.
AI

Facebook AI Director Discusses Deep Learning, Hype, and the Singularity 71

Posted by timothy
from the you-like-this dept.
An anonymous reader writes: In a wide-ranging interview with IEEE Spectrum, Yann LeCun talks about his work at the Facebook AI Research group and the applications and limitations of deep learning and other AI techniques. He also talks about hype, 'cargo cult science', and what he dislikes about the Singularity movement. The discussion also includes brain-inspired processors, supervised vs. unsupervised learning, humanism, morality, and strange airplanes.
AI

The Robots That Will Put Coders Out of Work 266

Posted by timothy
from the uber-drivers-will-be-replaced-by-robots-oh-wait dept.
snydeq writes: Researchers warn that a glut of code is coming that will depress wages and turn coders into Uber drivers, InfoWorld reports. "The researchers — Boston University's Seth Benzell, Laurence Kotlikoff, and Guillermo LaGarda, and Columbia University's Jeffrey Sachs — aren't predicting some silly, Terminator-like robot apocalypse. What they are saying is that our economy is entering a new type of boom-and-bust cycle that accelerates the production of new products and new code so rapidly that supply outstrips demand. The solution to that shortage will be to figure out how not to need those hard-to-find human experts. In fact, it's already happening in some areas."
AI

Breakthrough In Face Recognition Software 142

Posted by Soulskill
from the anonymity-takes-another-hit dept.
An anonymous reader writes: Face recognition software underwent a revolution in 2001 with the creation of the Viola-Jones algorithm. Now, the field looks set to dramatically improve once again: computer scientists from Stanford and Yahoo Labs have published a new, simple approach that can find faces turned at an angle and those that are partially blocked by something else. The researchers "capitalize on the advances made in recent years on a type of machine learning known as a deep convolutional neural network. The idea is to train a many-layered neural network using a vast database of annotated examples, in this case pictures of faces from many angles. To that end, Farfade and co created a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They then trained their neural net in batches of 128 images over 50,000 iterations. ... What's more, their algorithm is significantly better at spotting faces when upside down, something other approaches haven't perfected."
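Strip away the deep network and a detector of this kind reduces to a sliding-window scan: score every window of the image with a face/non-face classifier and keep the high scorers. A toy sketch of that outer loop, with `score_window` standing in for the trained convolutional net (which is where all the real work happens):

```python
def detect_faces(image, score_window, window=24, stride=8, threshold=0.5):
    """Slide a fixed-size window over a 2-D image (list of lists of
    brightness values) and return (row, col, score) for every window
    the classifier scores at or above the threshold.

    `score_window` stands in for the trained network; the window size,
    stride, and threshold are illustrative assumptions."""
    rows, cols = len(image), len(image[0])
    hits = []
    for r in range(0, rows - window + 1, stride):
        for c in range(0, cols - window + 1, stride):
            patch = [row[c:c + window] for row in image[r:r + window]]
            score = score_window(patch)
            if score >= threshold:
                hits.append((r, c, score))
    return hits
```

Handling turned and upside-down faces, as the article describes, comes from the training data (faces at many angles and orientations), not from this scan loop.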
AI

Replacing the Turing Test 129

Posted by timothy
from the thinking-is-hard-to-pin-down dept.
mikejuk writes: A plan is afoot to replace the Turing test as a measure of a computer's ability to think. The idea is for an annual or bi-annual Turing Championship consisting of three to five different challenging tasks. A recent workshop at the 2015 AAAI Conference on Artificial Intelligence was chaired by Gary Marcus, a professor of psychology at New York University. In his opinion, the Turing Test has reached its expiry date and become "an exercise in deception and evasion." Marcus points out that the real value of the Turing Test comes from the sense of competition it sparks amongst programmers and engineers, which has motivated the new initiative for a multi-task competition. One of the tasks is based on Winograd Schemas. These require participants to grasp the meaning of sentences that are easy for humans to understand through their knowledge of the world. One simple example is: "The trophy would not fit in the brown suitcase because it was too big. What was too big?" Another suggestion is for the program to answer questions about a TV program: no existing program — not Watson, not Goostman, not Siri — can currently come close to doing what any bright, real teenager can do: watch an episode of "The Simpsons" and tell us when to laugh. Another is called the "Ikea" challenge and asks robots to cooperate with humans to build flat-pack furniture. This involves interpreting written instructions, choosing the right piece, and holding it in just the right position for a human teammate. This at least is a useful skill that might encourage us to welcome machines into our homes.
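To make the Winograd Schema task concrete: each schema comes in two variants that differ by a single word, and that one word flips the correct referent, so a program that guesses from surface statistics can't reliably beat 50 percent. A sketch of how such a benchmark might be represented and scored (the field names and grading scheme are illustrative, not the official format):

```python
# The trophy/suitcase example from the summary, as data. Swapping the
# single word "big" for "small" flips which noun "it" refers to.
schema = {
    "sentence": "The trophy would not fit in the brown suitcase because it was too big.",
    "question": "What does 'it' refer to?",
    "candidates": ["the trophy", "the suitcase"],
    "special_word": "big",
    "alternate_word": "small",
    "answer": "the trophy",
    "alternate_answer": "the suitcase",
}

def variants(schema):
    """Yield (sentence, question, candidates, answer) for both halves of
    a schema; the one-word swap is what defeats statistical shortcuts."""
    yield (schema["sentence"], schema["question"],
           schema["candidates"], schema["answer"])
    flipped = schema["sentence"].replace(schema["special_word"],
                                         schema["alternate_word"])
    yield (flipped, schema["question"],
           schema["candidates"], schema["alternate_answer"])

def grade(predict, schemas):
    """Score a resolver on both variants of every schema."""
    trials = [v for s in schemas for v in variants(s)]
    correct = sum(predict(sentence, question, candidates) == answer
                  for sentence, question, candidates, answer in trials)
    return correct / len(trials)
```

A resolver that always picks the first candidate scores exactly 0.5 here; getting both variants right requires knowing that trophies go inside suitcases, which is the world knowledge the test is after.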
AI

Facebook Will Soon Be Able To ID You In Any Photo 153

Posted by timothy
from the we-shall-call-it-facebook dept.
sciencehabit writes: Appear in a photo taken at a protest march, a gay bar, or an abortion clinic, and your friends might recognize you. But a machine probably won't — at least for now. Unless a computer has been tasked to look for you, has trained on dozens of photos of your face, and has high-quality images to examine, your anonymity is safe. Nor is it yet possible for a computer to scour the Internet and find you in random, uncaptioned photos. But within the walled garden of Facebook, which contains by far the largest collection of personal photographs in the world, the technology for doing all that is beginning to blossom.
AI

Programming Safety Into Self-Driving Cars 124

Posted by timothy
from the but-just-not-plan-c dept.
aarondubrow writes: Automakers have presented a vision of the future where the driver can check his or her email, chat with friends or even sleep while shuttling between home and the office. However, to AI experts, it's not clear that this vision is a realistic one. In many areas, including driving, we'll go through a long period where humans act as co-pilots or supervisors before the technology reaches full autonomy (if it ever does). In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop. Researchers from the University of Massachusetts Amherst have developed 'fault-tolerant planning' algorithms that allow semi-autonomous machines to devise and enact a "Plan B."
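The handover scenario described above is essentially an escalation policy: keep driving autonomously while that's safe, hand over to a responsive driver, keep alerting an unresponsive one, then fall back to pulling over. A toy sketch of such a policy (the state names and the 8-second wait are illustrative assumptions, not the UMass Amherst algorithm itself):

```python
def copilot_step(autonomy_ok, driver_responsive, alert_age_s, max_wait_s=8.0):
    """Return the car's next action in the semi-autonomous handover
    scenario described above. `alert_age_s` is how long the driver has
    been alerted without responding; `max_wait_s` is an assumed cutoff."""
    if autonomy_ok:
        return "autonomous_drive"
    if driver_responsive:
        return "hand_over_control"
    if alert_age_s < max_wait_s:
        return "alert_driver"          # keep asking the driver to take over
    return "pull_over_and_stop"        # Plan B: safely stop at the roadside
```

The interesting research problem is everything this sketch hides: deciding when autonomy is no longer OK, detecting responsiveness, and planning the pull-over maneuver itself under faults.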