Comment So many level heads (Score 1) 227

I was expecting way, way more "but Skynet" comments here. The fact that so many commenters have a clear-headed perspective on AI, and on what AI safety actually means, is fantastic. Good to know the reporters I'm attacking are being read with the proper amount of skepticism. I really think the stubbornly fearful need to come to terms with their SF consumption, and with how Hollywood has every reason to present more apocalyptic AI scenarios than beneficial, or even neutral ones. And apart from SF, where are you getting your facts? What are your theories based on? If it's from stories and journalists who aren't putting in the work, and are clearly just focusing on the wacky end-times outcomes, then you're just plagiarizing from the long history of evil robot fiction. Also, remember that Musk is not a computer scientist, and does not work with AI. I'll post about this soon, but his claims that Vicarious is actively safeguarding against bootstrapped AI are false, based on statements from Vicarious' own founders. Even brilliant minds can be embarrassingly wrong.

Submission + - An Open Letter To Everyone Tricked Into Fearing AI (popsci.com)

malachiorion writes: If you're into robots or AI, you've probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose, and its message? I spoke to the organization that released the letter, and to one of the AI researchers who contributed to it. As is often the case with AI, tech reporters are getting this one wrong on purpose. Here's my analysis for Popular Science. Or, for the TL;DR crowd: "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."

Comment Re:This was already done (Score 1) 108

This system works very differently, though. In a way, Watson is aiming for a more intellectual goal, a kind of evidence-based cognition. And in Watson's most useful applications, it grinds through data, and spits out possible answers and conclusions for review by humans. Robo Brain doesn't care about creating human-digestible conclusions or advice. It's translating human-speak, basically, into robot action, machine-readable results that tell bots how to physically perform certain tasks.

Comment Re:The irony (Score 1) 108

Yeah, I've made it my mission to try to tamp down the general hysteria, when it comes to coverage of really interesting robotics projects. But I spent a solid hour writing and deleting stupid SF-fueled intros to this story. It feels like a movie—not a very good one—that's on the verge of writing itself. Like all you'd have to do is give it the wrong chunk of data culled from the internet, and it would mobilize a machine army that *only* knows how to commit atrocities.

Comment Re:May the force-feeding be with them! (Score 1) 108

Force-feeding was a bit of a silly choice, on my part, but something about the process felt similar. It's not like they unleash Robo Brain on the internet, and let it hoover up whatever it pleases (and bully for that, given what's on the internet). They also don't let the machine filter out topics that it doesn't care for. So if we're going to anthropomorphize this system—which, of course we are, since we're a narcissistic species—it seems more like the Gluttony victim from Se7en than a willing participant.

Submission + - Robo Brain Project Wants To Turn the Internet into a Robotic Hivemind (popsci.com) 1

malachiorion writes: Researchers are force-feeding the internet into a system called Robo Brain, to make the world's robots smarter. Weirder still: Every word in that sentence is true. Robo Brain has absorbed a billion images and 120,000 YouTube videos so far, and aims to digest 10 times that within a year, in order to create machine-readable commands for robots—how to pour coffee, for example. I spoke to one of the researchers about this ridiculously ambitious and pretty ingenious project, which could finally make household bots viable (in about 5 years...which is still pretty great). My story for Popular Science.

Submission + - Collaborative Algorithm Lets Autonomous Robots Team Up And Learn From Each Other (popsci.com)

malachiorion writes: Autonomous robots are about to get a lot more autonomous, thanks to an algorithm from MIT that turns teams of bots into collaborative learners. This was covered in other places, but I'm not sure why no one's digging into the real implications of this (admittedly somewhat obscure) breakthrough. The algorithm, called AMPS, lets autonomous systems quickly compare notes about what they've observed in their respective travels, and come up with a combined worldview. The goal, according to the algorithm's creators, is to achieve "semantic symmetry," which would allow for "lifelong learning" for robots—making them more self-sufficient, and less reliant on pestering humans whenever the unstructured world they operate in doesn't line up with what their programmers prepped them for. Here's my story for Popular Science.

Submission + - Surgical Snakebots Are Real, And Heading For Humanity's Orifices (popsci.com) 1

malachiorion writes: Last week marked the first use of a surgical snakebot—the Flex system, from MA-based Medrobotics—on living human beings. It wriggled down two patients' throats, to be specific, at a hospital in Belgium. That's neat, and could mean an interesting showdown-to-come between this snake-inspired robot (invented by a Carnegie Mellon roboticist) and the more widely used da Vinci bot. But this is bigger than a business story. The next era in general surgery, which involves making a single small incision after entering the anus or vagina, instead of multiple punctures in the abdomen, might finally be feasible with this kind of bot. This is my analysis for Popular Science about why instrument-bearing snakebots wriggling into our orifices is a technology worth rooting for.

Submission + - Lie Like a Lady: The Profoundly Weird, Gender-Specific Roots of the Turing Test (popsci.com)

malachiorion writes: Alan Turing never wrote about the Turing Test, that legendary measure of machine intelligence that was supposedly passed last weekend. He proposed something much stranger—a contest between men and machines, to see who was better at pretending to be a woman. The details of the Imitation Game aren't secret, or even hard to find, and yet no one seems to reference it. Here's my analysis for Popular Science about why they should, in part because it's so odd, but also because it might be a better test for "machines that think" than the chatbot-infested, seemingly useless Turing Test.

Submission + - Robots Are Evil: The Sci-Fi Myth of Killer Machines (popsci.com)

malachiorion writes: Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. As part of my series for Popular Science on the biggest sci-fi-inspired myths of robotics, this one focuses on R.U.R., Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems.
