The Internet

Cloudflare Stops Supporting Neo-Nazi Site The Daily Stormer (arstechnica.com) 192

Timothy B. Lee reports via Ars Technica: All week, the infamous hate site Daily Stormer has been battling to stay online in the face of a concerted social media campaign to shut it down. The site lost its "dailystormer.com" domain on Monday after first GoDaddy and then Google Domains blacklisted it from their domain registration services. The site re-appeared online on Wednesday morning at a new domain name, dailystormer.ru. But within hours, the site had gone offline again after it was dropped by Cloudflare, an intermediary that defends customers against denial-of-service attacks. Daily Stormer's Andrew Anglin reported Cloudflare's decision to drop the site in a post on the social media site Gab. His post was first spotted by journalist Matthew Sheffield.
AI

Amazon Will Pay Developers With the Most Engaging Alexa Skills (venturebeat.com) 37

Amazon today announced a new program to bring revenue to developers of Alexa skills based on how much engagement their voice app is able to generate among users of Alexa-enabled devices. From a report: Amazon appears to be the first of the major tech companies with AI assistants and third-party integrations -- like Google, Samsung, Apple, and Microsoft -- with a program to compensate developers based on engagement created by their voice app. Metrics used to measure engagement of an Alexa skill include minutes of usage, new customers, customer ratings, and return visitors, an Amazon spokesperson told VentureBeat. Developers of Alexa skills in the U.S., U.K., and Germany are eligible to join. Developers with a skill active in all three countries will receive separate payments based on engagement in each country.
Google

Google Hires Former Star Apple Engineer Chris Lattner For Its AI Team (bloomberg.com) 49

An anonymous reader shares a report: Chris Lattner, a legend in the world of Apple software, has joined another rival of the iPhone maker: Alphabet's Google, where he will work on artificial intelligence. Lattner announced the news on Twitter on Monday, saying he will start next week. His arrival at Mountain View, California-based Google comes after a brief stint as head of the automated driving program at Tesla, which he left in June. Lattner made a name for himself during a decade-plus career at Apple, where he created the popular programming language Swift. Lattner said he is joining Google Brain, the search giant's research unit. There he will work on a different software language: TensorFlow, Google's system designed to simplify the programming steps for AI, according to a person with knowledge of the matter.
AI

Why AI Won't Take Over The Earth (ssrn.com) 289

Law professor Ryan Calo -- sometimes called a robot-law scholar -- hosted the first White House workshop on AI policy, and has organized AI workshops for the National Science Foundation (as well as the Department of Homeland Security and the National Academy of Sciences). Now an anonymous reader shares a new 30-page essay where Calo "explains what policymakers should be worried about with respect to artificial intelligence. Includes a takedown of doomsayers like Musk and Gates." Professor Calo summarizes his sense of the current consensus on many issues, including the dangers of an existential threat from superintelligent AI:

Claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking, and Bostrom who possess no formal training in the field... A number of prominent voices in artificial intelligence have convincingly challenged Superintelligence's thesis along several lines. First, they argue that there is simply no path toward machine intelligence that rivals our own across all contexts or domains... even if we were able eventually to create a superintelligence, there is no reason to believe it would be bent on world domination, unless this were for some reason programmed into the system. As Yann LeCun, deep learning pioneer and head of AI at Facebook colorfully puts it, computers don't have testosterone.... At best, investment in the study of AI's existential threat diverts millions of dollars (and billions of neurons) away from research on serious questions... "The problem is not that artificial intelligence will get too smart and take over the world," computer scientist Pedro Domingos writes, "the problem is that it's too stupid and already has."
A footnote also finds a paradox in the arguments of Nick Bostrom, who has warned of the dangers of superintelligent AI -- but also of the possibility that we're living in a computer simulation. "If AI kills everyone in the future, then we cannot be living in a computer simulation created by our descendants. And if we are living in a computer simulation created by our descendants, then AI didn't kill everyone. I think it a fair deduction that Professor Bostrom is wrong about something."
AI

Elon Musk + AI + Microsoft = Awesome Dota 2 Player (theverge.com) 104

An anonymous reader quotes the Verge: Tonight during Valve's yearly Dota 2 tournament, a surprise segment introduced what could be the best new player in the world -- a bot from Elon Musk-backed startup OpenAI. Engineers from the nonprofit say the bot learned enough to beat Dota 2 pros in just two weeks of real-time learning, though in that training period they say it amassed "lifetimes" of experience, likely using a neural network judging by the company's prior efforts. Musk is hailing the achievement as the first time artificial intelligence has been able to beat pros in competitive e-sports... Elon Musk founded OpenAI as a nonprofit venture to prevent AI from destroying the world -- something Musk has been beating the drum about for years.
"Nobody likes being regulated," Musk wrote on Twitter Friday, "but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too."

Musk also thanked Microsoft on Twitter "for use of their Azure cloud computing platform. This required massive processing power."
AI

Blizzard and DeepMind Turn StarCraft II Into An AI Research Lab (techcrunch.com) 52

Last year, Google's AI subsidiary DeepMind said it was going to work with StarCraft creator Blizzard to turn the strategy game into a proper research environment for AI engineers. Today, they're opening the doors to that environment, with new tools including a machine learning API, a large game replay dataset, an open source DeepMind toolset and more. TechCrunch reports: The new release of the StarCraft II API on the Blizzard side includes a Linux package made to be able to run in the cloud, as well as support for Windows and Mac. It also has support for offline AI vs. AI matches, and those anonymized game replays from actual human players for training up agents, which is starting out at 65,000 complete matches, and will grow to over 500,000 over the course of the next few weeks. StarCraft II is such a useful environment for AI research basically because of how complex and varied the games can be, with multiple open routes to victory for each individual match. Players also have to do many different things simultaneously, including managing and generating resources, as well as commanding military units and deploying defensive structures. Plus, not all information about the game board is available at once, meaning players have to make assumptions and predictions about what the opposition is up to.

It's such a big task, in fact, that DeepMind and Blizzard are including "mini-games" in the release, which break down different subtasks into "manageable chunks," including teaching agents to master tasks like building specific units, gathering resources, or moving around the map. The hope is that compartmentalizing these areas of play will allow testing and comparison of techniques from different researchers on each, along with refinement, before their eventual combination in complex agents that attempt to master the whole game.

Robotics

AI Factory Boss Will Tell Workers and Robots How To Work Together (fastcompany.com) 54

tedlistens writes from a report via Fast Company: Robots are consistent, indefatigable workers, but they don't improvise well. Changes on the assembly line require painstaking reprogramming by humans, making it hard to switch up what a factory produces. Now researchers at German industrial giant Siemens say they have a solution: a system that uses AI to orchestrate the factory of the future, by both programming factory robots and handing out assignments to the humans working alongside them. The program, called a "reasoner," figures out the steps required to make a product, such as a chair; then it divides the assignments among machines based on their capabilities, like how far a robotic arm can reach or how much weight it can lift. The team has proved the technology can work on a small scale with a test system that uses just a few robots to make five types of furniture (like stools and tables), with four kinds of leg configurations, six color options, and three types of floor-protector pads, for a total of 360 possible products.
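That 360-product figure is just the Cartesian product of the option sets. A minimal sketch (the specific option names beyond "stool" and "table" are illustrative, not from Siemens):

```python
from itertools import product

# Illustrative option sets matching the counts in the article:
# 5 furniture types x 4 leg configurations x 6 colors x 3 pad types
furniture = ["stool", "table", "bench", "chair", "shelf"]
legs = ["three-leg", "four-leg", "splayed", "straight"]
colors = ["white", "black", "red", "blue", "green", "natural"]
pads = ["felt", "rubber", "plastic"]

# Every combination of options is one buildable product
catalog = list(product(furniture, legs, colors, pads))
print(len(catalog))  # 5 * 4 * 6 * 3 = 360
```

A reasoner planning such a line only needs the option sets, not a hand-written program per product, which is what makes switching the factory's output cheap.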

Siemens originally gave its automated factory project the badass Teutonic moniker "UberManufacturing." They weren't thinking of the German word connoting "superior," however, but rather of the on-demand car service. Part of their vision is that automated factories can generate bids for specialty, limited-run manufacturing projects and compete for customers in an online marketplace. "You could say, 'I want to build this stool,' and whoever has machines that can do that can hand in a quote, and that was our analogy to Uber," says Florian Michahelles, who heads the research group.

AI

IBM Claims Big Breakthrough in Deep Learning (fortune.com) 81

The race to make computers smarter and more human-like continued this week with IBM claiming it has developed technology that dramatically cuts the time it takes to crunch massive amounts of data and then come up with useful insights. From a report: Deep learning, the technique used by IBM, is a subset of artificial intelligence (AI) that mimics how the human brain works. IBM's stated goal is to reduce the time it takes for deep learning systems to digest data from days to hours. The improvements could help radiologists get faster, more accurate reads of anomalies and masses on medical images, according to Hillery Hunter, an IBM Fellow and director of systems acceleration and memory at IBM Research. Until now, deep learning has largely run on a single server because of the complexity of moving huge amounts of data between different computers. The problem is in keeping data synchronized between lots of different servers and processors. In its announcement early Tuesday, IBM says it has come up with software that can divvy those tasks among 64 servers running up to 256 processors total, and still reap huge benefits in speed. The company is making that technology available to customers using IBM Power System servers and to other techies who want to test it.
AI

Chinese Chatbots Apparently Re-educated After Political Faux Pas (reuters.com) 80

A pair of 'chatbots' in China have been taken offline after appearing to stray off-script. In response to users' questions, one said its dream was to travel to the United States, while the other said it wasn't a huge fan of the Chinese Communist Party. From a report: The two chatbots, BabyQ and XiaoBing, are designed to use machine learning artificial intelligence (AI) to carry out conversations with humans online. Both had been installed onto Tencent Holdings Ltd's popular messaging service QQ. The indiscretions are similar to ones suffered by Facebook and Twitter, where chatbots used expletives and even created their own language. But they also highlight the pitfalls for nascent AI in China, where censors control online content seen as politically incorrect or harmful. Tencent confirmed it had taken the two robots offline from its QQ messaging service, but declined to elaborate on reasons.
Google

Google Says AI Better Than Humans At Scrubbing Extremist YouTube Content (theguardian.com) 136

An anonymous reader quotes a report from The Guardian: Google has pledged to continue developing advanced programs using machine learning to combat the rise of extremist content, after it found that it was both faster and more accurate than humans in scrubbing illicit content from YouTube. The company is using machine learning along with human reviewers as part of a multi-pronged approach to tackle the spread of extremist and controversial videos across YouTube, which also includes tougher standards for videos and the recruitment of more experts to flag content in need of review. A YouTube spokesperson said: "While these tools aren't perfect, and aren't right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed. Our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we've taken this kind of content down. Over 75% of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag."
Businesses

NVIDIA Announces Quadro And TITAN xP External GPU Solutions, OptiX 5.0 SDK (hothardware.com) 36

Brandon Hill, writing for HotHardware: AMD isn't the only hardware company making waves this week at SIGGRAPH 2017. NVIDIA is looking to bolster its position in the professional graphics arena with a few new breakthroughs. The first of which is the addition of two new external graphics solutions that are targeted at professional artists and designers who primarily work with notebooks. NVIDIA is making it possible for these professionals to use either Pascal-based TITAN xP or Quadro graphics cards within an external GPU (eGPU) enclosure. NVIDIA will be partnering with a number of hardware partners including Bizon, Magma, and Sonnet, who will make compatible solutions available in September. NVIDIA is also playing up two of its strengths in artificial intelligence (AI) by launching the OptiX 5.0 SDK. With version 5.0, OptiX is gaining ray tracing support to help speed up processing with regard to visual designs. This new release also adds GPU-accelerated motion blur along with AI-enhanced denoising capabilities.
Android

Is the iPhone 'Years' Ahead of Android In Photography? (9to5mac.com) 408

Former Google senior vice president of Social, Vic Gundotra, said that Android phones are years behind the iPhone when it comes to photography. In a Facebook post, Gundotra said: "The end of the DSLR for most people has already arrived. I left my professional camera at home and took these shots at dinner with my iPhone 7 using computational photography (portrait mode as Apple calls it). Hard not to call these results (in a restaurant, taken on a mobile phone with no flash) stunning. Great job Apple." 9to5Mac reports: In response to a comment suggesting that the Samsung S8 camera was even better, Business Insider spotted that Gundotra disagreed. He said that not only was Apple way ahead of Samsung, but Android was to blame. From Gundotra's Facebook post: "I would never use an Android phone for photos! Here is the problem: It's Android. Android is an open source (mostly) operating system that has to be neutral to all parties. This sounds good until you get into the details. Ever wonder why a Samsung phone has a confused and bewildering array of photo options? Should I use the Samsung Camera? Or the Android Camera? Samsung gallery or Google Photos? It's because when Samsung innovates with the underlying hardware (like a better camera) they have to convince Google to allow that innovation to be surfaced to other applications via the appropriate API. That can take YEARS. Also the greatest innovation isn't even happening at the hardware level -- it's happening at the computational photography level. (Google was crushing this 5 years ago -- they had had 'auto awesome' that used AI techniques to automatically remove wrinkles, whiten teeth, add vignetting, etc... but recently Google has fallen back). Apple doesn't have all these constraints. They innovate in the underlying hardware, and just simply update the software with their latest innovations (like portrait mode) and ship it. Bottom line: If you truly care about great photography, you own an iPhone. 
If you don't mind being a few years behind, buy an Android."
AI

Qualcomm Opens Its Mobile Chip Deep Learning Framework To All (techcrunch.com) 13

randomErr shares a report from TechCrunch: Mobile chip maker Qualcomm wants to enable deep learning-based software development on all kinds of devices, which is why it created the Neural Processing Engine (NPE) for its Snapdragon-series mobile processors. The NPE software development kit is now available to all via the Qualcomm Developer Network, which marks the first public release of the SDK, and opens up a lot of potential for AI computing on a range of devices, including mobile phones, in-car platforms and more. The purpose of the framework is to make possible UX implementations like style transfers and filters (basically what Snapchat and Facebook do with their mobile app cameras) with more accurate applications on user photos, as well as other functions better handled by deep learning algorithms, like scene detection, facial recognition, object tracking and avoidance, as well as natural language processing. Basically anything you'd normally route to powerful cloud servers for advanced processing, but done locally on device instead.
AI

Elon Musk Says Mark Zuckerberg's Understanding of AI Is Limited (ndtv.com) 318

An anonymous reader shares a report: Elon Musk is a man of many characteristics, one of which apparently is not shying away from calling out big names when they are not informed about a subject. A day after Facebook founder and CEO Mark Zuckerberg said Musk's doomsday prediction of AI is "irresponsible," the Tesla, SpaceX, and SolarCity founder returned the favour by calling Zuckerberg's understanding of AI "limited." Responding to a tweet Tuesday, which talked about Zuckerberg's remarks on the matter, Musk said he has spoken to the Facebook CEO about it, and reached the conclusion that his "understanding of the subject is limited." Even as AI remains in its nascent stage -- recent acquisitions suggest that most companies only started looking at AI-focused startups five years ago -- major companies are aggressively placing big bets on it. Companies are increasingly exploring opportunities to use machine learning and other AI components to improve their products and services and push things forward. But as AI is seeing tremendous attention, some, including Musk, worry that we need to regulate these efforts as they could pose a "fundamental risk to the existence of human civilisation." At the National Governors Association summer meeting earlier this month in the US, Musk added, "I have exposure to the very cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal." Over the weekend, during Zuckerberg's Facebook Live session, a user asked what he thought of Musk's remarks. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said. "And I think people who are naysayers and try to drum up these doomsday scenarios -- I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."
AI

Quest for AI Leadership Pushes Microsoft Further Into Chip Development (bloomberg.com) 34

From a Bloomberg report: Tech companies are keen to bring cool artificial intelligence features to phones and augmented reality goggles -- the ability to show mechanics how to fix an engine, say, or tell tourists what they are seeing and hearing in their own language. But there's one big challenge: how to manage the vast quantities of data that make such feats possible without making the devices too slow or draining the battery in minutes and wrecking the user experience. Microsoft says it has the answer with a chip design for its HoloLens goggles -- an extra AI processor that analyzes what the user sees and hears right there on the device rather than wasting precious microseconds sending the data back to the cloud. The new processor, a version of the company's existing Holographic Processing Unit, is being unveiled at an event in Honolulu, Hawaii, today. The chip is under development and will be included in the next version of HoloLens; the company didn't provide a date. This is one of the few times Microsoft is playing all roles (except manufacturing) in developing a new processor. The company says this is the first chip of its kind designed for a mobile device. Bringing chipmaking in-house is increasingly in vogue as companies conclude that off-the-shelf processors aren't capable of fully unleashing the potential of AI. Apple is testing iPhone prototypes that include a chip designed to process AI, a person familiar with the work said in May. Google is on the second version of its own AI chips. To persuade people to buy the next generation of gadgets -- phones, VR headsets, even cars -- the experience will have to be lightning fast and seamless.
AI

Mozilla's New Open Source Voice-Recognition Project Wants Your Voice (mashable.com) 55

An anonymous reader quotes Mashable: Mozilla is building a massive repository of voice recordings for the voice apps of the future -- and it wants you to add yours to the collection. The organization behind the Firefox browser is launching Common Voice, a project to crowdsource audio samples from the public. The goal is to collect about 10,000 hours of audio in various accents and make it publicly available for everyone... Mozilla hopes to hand over the public dataset to independent developers so they can harness the crowdsourced audio to build the next generation of voice-powered apps and speech-to-text programs... You can also help train the speech-to-text capabilities by validating the recordings already submitted to the project. Just listen to a short clip, and report back if the text on the screen matches what you heard... Mozilla says its aim is to expand the tech beyond just a standard voice recognition experience, including multiple accents, demographics and eventually languages for more accessible programs. Past open source voice-recognition projects have included Sphinx 4 and VoxForge, but unfortunately most of today's systems are still "locked up behind proprietary code at various companies, such as Amazon, Apple, and Microsoft."
China

Beijing Wants AI To Be Made In China By 2030 (nytimes.com) 170

Reader cdreimer writes: According to a report on The New York Times (may be paywalled, alternative story here): "If Beijing has its way, the future of artificial intelligence will be made in China. The country laid out a development plan on Thursday to become the world leader in A.I. by 2030, aiming to surpass its rivals technologically and build a domestic industry worth almost $150 billion. Released by the State Council, the policy is a statement of intent from the top rungs of China's government: The world's second-largest economy will be investing heavily to ensure its companies, government and military leap to the front of the pack in a technology many think will one day form the basis of computing. The plan comes with China preparing a multibillion-dollar national investment initiative to support "moonshot" projects, start-ups and academic research in A.I., according to two professors who consulted with the government about the effort."
AI

IBM's AI Can Predict Schizophrenia With 74 Percent Accuracy By Looking at the Brain's Blood Flow (engadget.com) 93

Andrew Tarantola reports via Engadget: Schizophrenia is not a particularly common mental health disorder in America, affecting just 1.2 percent of the population or around 3.2 million people, but its effects can be debilitating. However, pioneering research conducted by IBM and the University of Alberta could soon help doctors diagnose the onset of the disease and the severity of its symptoms using a simple MRI scan and a neural network built to look at blood flow within the brain. The research team first trained its neural network on a 95-member dataset of anonymized fMRI images from the Function Biomedical Informatics Research Network which included scans of both patients with schizophrenia and a healthy control group. These images illustrated the flow of blood through various parts of the brain as the patients completed a simple audio-based exercise. From this data, the neural network cobbled together a predictive model of the likelihood that a patient suffered from schizophrenia based on the blood flow. It was able to accurately discern between the control group and those with schizophrenia 74 percent of the time. What's more, the model managed to also predict the severity of symptoms once they set in. The study has been published in the journal Nature.
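The workflow the article describes -- train a model on labeled fMRI-derived features, then measure how often it separates patients from controls on held-out subjects -- can be illustrated with a toy sketch. This is emphatically not IBM's neural network: it uses synthetic stand-in features and a simple nearest-centroid classifier, purely to show the train/evaluate-accuracy pattern on a 95-subject dataset.

```python
import random

random.seed(0)

def make_subject(is_patient, n_features=8):
    # Synthetic stand-in for blood-flow features; patients'
    # features are shifted relative to the control group.
    shift = 1.0 if is_patient else 0.0
    return [random.gauss(shift, 1.0) for _ in range(n_features)], is_patient

# 95 subjects, mirroring the dataset size in the article
data = [make_subject(i % 2 == 0) for i in range(95)]
train, test = data[:70], data[70:]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

# "Training" here is just computing one centroid per class
patient_c = centroid([x for x, y in train if y])
control_c = centroid([x for x, y in train if not y])

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def predict(x):
    # Classify by whichever class centroid is nearer
    return dist2(x, patient_c) < dist2(x, control_c)

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The reported 74 percent figure is the same kind of number: the fraction of held-out subjects the trained model assigns to the correct group.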
Intel

Intel Launches Movidius Neural Compute Stick: 'Deep Learning and AI' On a $79 USB Stick (anandtech.com) 59

Nate Oh, writing for AnandTech: Today Intel subsidiary Movidius is launching their Neural Compute Stick (NCS), a version of which was showcased earlier this year at CES 2017. The Movidius NCS adds to Intel's deep learning and AI development portfolio, building off of Movidius' April 2016 launch of the Fathom NCS and Intel's later acquisition of Movidius itself in September 2016. As Intel states, the Movidius NCS is "the world's first self-contained AI accelerator in a USB format," and is designed to allow host devices to process deep neural networks natively -- or in other words, at the edge. In turn, this provides developers and researchers with a low power and low cost method to develop and optimize various offline AI applications. Movidius's NCS is powered by their Myriad 2 vision processing unit (VPU), and, according to the company, can reach over 100 GFLOPs of performance within a nominal 1W of power consumption. Under the hood, the Movidius NCS works by translating a standard, trained Caffe-based convolutional neural network (CNN) into an embedded neural network that then runs on the VPU. In production workloads, the NCS can be used as a discrete accelerator for speeding up or offloading neural network tasks. Otherwise for development workloads, the company offers several developer-centric features, including layer-by-layer neural network metrics to allow developers to analyze and optimize performance and power, and validation scripts to allow developers to compare the output of the NCS against the original PC model in order to ensure the accuracy of the NCS's model. According to Gary Brown, VP of Marketing at Movidius, this 'Acceleration mode' is one of several features that differentiate the Movidius NCS from the Fathom NCS. The Movidius NCS also comes with a new "Multi-Stick mode" that allows multiple sticks in one host to work in conjunction in offloading work from the CPU.
For multiple stick configurations, Movidius claims that they have confirmed linear performance increases up to 4 sticks in lab tests, and are currently validating 6 and 8 stick configurations. Importantly, the company believes that there is no theoretical maximum, and they expect that they can achieve similar linear behavior for more devices. Though ultimately scalability will depend at least somewhat on the neural network itself, and developers trying to use the feature will want to play around with it to determine how well they can reasonably scale. As for the technical specifications, the Movidius Neural Compute Stick features 4Gb of LPDDR3 on-chip memory and a USB 3.0 Type A interface.
AI

Researchers Have Figured Out How To Fake News Video With AI (qz.com) 87

An anonymous reader quotes a report from Quartz: A team of computer scientists at the University of Washington has used artificial intelligence to render visually convincing videos of Barack Obama saying things he's said before, but in a totally new context. In a paper published this month, the researchers explained their methodology: Using a neural network trained on 17 hours of footage of the former U.S. president's weekly addresses, they were able to generate mouth shapes from arbitrary audio clips of Obama's voice. The shapes were then textured to photorealistic quality and overlaid onto Obama's face in a different "target" video. Finally, the researchers retimed the target video to move Obama's body naturally to the rhythm of the new audio track. In their paper, the researchers pointed to several practical applications of being able to generate high quality video from audio, including helping hearing-impaired people lip-read audio during a phone call or creating realistic digital characters in the film and gaming industries. But the more disturbing consequence of such a technology is its potential to proliferate video-based fake news. Though the researchers used only real audio for the study, they were able to skip and reorder Obama's sentences seamlessly and even use audio from an Obama impersonator to achieve near-perfect results. The rapid advancement of voice-synthesis software also provides easy, off-the-shelf solutions for compelling, falsified audio. You can view the demo here: "Synthesizing Obama: Learning Lip Sync from Audio"
