Google

Ray Kurzweil's Google Team Is Building Intelligent Chatbots (theverge.com) 98

An anonymous reader quotes an article from The Verge: Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. In a video from a recent Singularity conference, Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year... "My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year."

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote -- a book titled, what else, Danielle... He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their own writing, for example by letting it ingest their blog. This would allow the bot to adopt their "style, personality, and ideas."

Kurzweil also predicted that we won't see AIs with full "human-level" language abilities until 2029, "But you'll be able to have interesting conversations before that."
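
Kurzweil doesn't say how the blog-fed bots would work under the hood. One simple way to ground a bot in someone's writing is to retrieve the passage from their corpus that best matches each question. The toy sketch below illustrates that idea only; the library choice, corpus, and scoring are assumptions for illustration, not anything Google or Kurzweil has described.

```python
# Illustrative sketch: a toy "personal chatbot" that answers by retrieving
# the closest passage from a corpus of someone's writing (e.g. blog posts).
# This is NOT how Kurzweil's team builds their bots; it only shows the
# general idea of grounding replies in a personal text corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: paragraphs scraped from the user's blog.
blog_paragraphs = [
    "I think strong AI will arrive gradually, not as a single event.",
    "My favorite programming language seems to change every few years.",
    "A chatbot is only as interesting as the text you feed it.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(blog_paragraphs)

def reply(question: str) -> str:
    """Return the blog passage most similar to the question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, corpus_vectors)[0]
    return blog_paragraphs[scores.argmax()]

print(reply("When do you expect strong AI?"))
```

A real system would generate new sentences in the author's style rather than echo old ones, but retrieval over the ingested writing is the simplest way to see how a blog could shape a bot's "style, personality, and ideas."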
Google

Don't Use Google Allo (vice.com) 127

At its developer conference on Wednesday, Google announced Allo, a chatbot-enabled messaging app. The app offers a range of interesting features, such as the ability to quickly doodle on an image and get prompt responses. Additionally, it is the first Google product to offer end-to-end encryption, though that is not turned on by default. If you're concerned about privacy, you will probably still want to avoid Allo, says the publication. From the report: Allo's big innovation is "Google Assistant," a Siri competitor that will give personalized suggestions and answers to your questions on Allo as well as on the newly announced Google Home, which is a competitor to Amazon's Echo. On Allo, Google Assistant will learn how you talk to certain friends and offer suggested replies to make responding easier. Let that sink in for a moment: the selling point of this app is that Google will read your messages, for your convenience. Google would be insane not to offer some version of end-to-end encryption in a chat app in 2016, when all of its biggest competitors have it enabled by default. Allo uses the Signal Protocol for its encryption, which is good. But as with all other Google products, Allo will work much better if you let Google into your life. Google is banking on the idea that you won't want to enable Incognito Mode, and thus won't enable encryption. Edward Snowden also chimed in on the matter: "Google's decision to disable end-to-end encryption by default in its new Allo chat app is dangerous, and makes it unsafe. Avoid it for now."
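
The summary notes that Allo uses the Signal Protocol when encryption is enabled. The sketch below is not Signal; it is a minimal illustration, using PyNaCl's box construction, of what "end-to-end" buys you: the relay server only ever handles ciphertext, because the private keys never leave the two phones. The names and messages are assumptions for illustration.

```python
# Minimal end-to-end encryption sketch (NOT the Signal Protocol Allo uses).
# The point: whoever relays the message sees only ciphertext, because the
# private keys stay on the endpoints.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # never leaves Alice's phone
bob_key = PrivateKey.generate()     # never leaves Bob's phone

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at six?")   # what the server relays

# The relay cannot read it; Bob decrypts on his own device.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))            # b'meet at six?'
```

By contrast, suggestion features like Google Assistant need the plaintext on Google's servers, which is exactly the trade-off the article is warning about.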
Botnet

This Unusual Botnet Targets Scientists, Engineers, and Academics (zdnet.com) 67

schwit1 quotes a report from ZDNet: A botnet and cyberattack campaign is infecting victims across the globe and appears to be tracking the actions of specially selected targets in sectors ranging from government to engineering. Researchers from Forcepoint Security Labs have warned that the campaign they have dubbed 'Jaku' -- after a planet in the Star Wars universe, because of references to the sci-fi saga in the malware code -- is different from, and more sophisticated than, many botnet campaigns. Rather than indiscriminately infecting victims, this campaign is capable of performing "a separate, highly targeted operation" used to monitor members of international non-governmental organizations, engineering companies, academics, scientists and government employees, the researchers said. The findings are set out in Forcepoint's report on Jaku, which outlines how, of the estimated 19,000 unique victims, 42 percent are in South Korea and a further 31 percent are in Japan; both countries are neighbors of North Korea. A further nine percent of Jaku victims are in China and six percent are in the US, with the remainder spread across 130 other countries.
Networking

Within 6 Years, Most Vehicles Will Allow OTA Software Updates (computerworld.com) 199

Lucas123 writes: By 2022, using a thumb drive or taking your vehicle back to the place you bought it for a software update will seem as strange as it would for a smartphone or laptop today. By then, there will be 203 million vehicles on the road that can receive software over-the-air (SOTA) upgrades; among those vehicles, at least 22 million will also be able to get firmware upgrades, according to a new report by ABI Research. Today, there are about 253 million cars and trucks on the road, according to IHS Automotive. The main reasons automakers are moving quickly to enable OTA upgrades are recall costs, autonomous driving, and the security risks created by software complexity, according to Susan Beardslee, a senior analyst at ABI Research. "It is a welcome transformation, as OTA is the only way to accomplish secure management of all of a connected car's software in a seamless, comprehensive, and fully integrated manner," Beardslee said.
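
Beardslee's "secure management" point hinges on the car being able to tell a genuine update from a tampered one before installing it. Below is a minimal sketch of that single building block, signature verification; the Ed25519 scheme, key handling, and function names are assumptions for illustration, not any automaker's actual pipeline.

```python
# Illustrative sketch of one building block of secure OTA updates:
# verify that a downloaded firmware image was signed by the manufacturer
# before installing it. Key distribution, rollback protection and the
# actual flashing step are all omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key lives in the manufacturer's signing service
# and only the public key is baked into the vehicle's firmware.
signing_key = Ed25519PrivateKey.generate()
vehicle_public_key = signing_key.public_key()

firmware_image = b"...new engine-control build..."   # stand-in bytes
signature = signing_key.sign(firmware_image)

def install_update(image: bytes, sig: bytes) -> bool:
    """Install the image only if its signature verifies."""
    try:
        vehicle_public_key.verify(sig, image)
    except InvalidSignature:
        return False              # reject tampered or corrupted images
    # flash_to_partition(image)   # hypothetical flashing step
    return True

assert install_update(firmware_image, signature)
assert not install_update(firmware_image + b"tampered", signature)
```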
Government

German Parliament May Need To Replace All Hardware and Software To Stop Malware 189

jfruh writes: Trojan spyware has been running on computers in the German parliament for over four weeks, sending data to an unknown destination, and despite best efforts nobody has been able to remove it. The German government is seriously considering replacing all hardware and software to get rid of it. From the ITWorld article: "After the attack, part of the parliament’s traffic was routed over the federal government’s more secure data network by the Federal Office For Information Security, Der Spiegel reported. Some Germans suspect that the Russian foreign intelligence service SVR is behind the attack. On Thursday, the parliament will discuss how to address the situation."
Cloud

"Hello Barbie" Listens To Children Via Cloud 163

jones_supa writes: For a long time we have had toys that talk back to their owners, but a new "smart" Barbie doll's eavesdropping and data-gathering functions have privacy advocates crying foul. Toymaker Mattel bills Hello Barbie as the world's first "interactive doll" due to its ability to record children's playtime conversations and respond to them, once the audio is transmitted over WiFi to a cloud server. In a demo video, a Mattel presenter at the 2015 Toy Fair in New York says the new doll fulfills the top request that Mattel receives from girls: to have a two-way dialogue. "They want to have a conversation with Barbie," she said, adding that the new toy will be "the very first fashion doll that has continuous learning, so that she can have a unique relationship with each girl." Susan Linn, the executive director of the Campaign for a Commercial-Free Childhood, has written a statement in which she says the product is seriously creepy and creates a host of dangers for children and families. She asks people to join her in petitioning Mattel to discontinue the toy.
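
The privacy complaint rests on the round trip described above: audio leaves the toy, crosses WiFi to a cloud service, and a reply comes back. A bare sketch of that flow follows; the endpoint URL, payload format, and response field are entirely hypothetical, since Mattel's actual service is not public.

```python
# Hypothetical sketch of the record -> cloud -> reply round trip the
# article describes. The endpoint and JSON field names are invented;
# the real Hello Barbie service is not publicly documented.
import requests

def ask_cloud(audio_bytes: bytes) -> str:
    """Send recorded audio to a (hypothetical) speech service, return the reply."""
    response = requests.post(
        "https://speech.example.com/v1/respond",   # hypothetical endpoint
        data=audio_bytes,
        headers={"Content-Type": "audio/wav"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["reply_text"]           # hypothetical field
```

The privacy question is simply what happens to `audio_bytes` after that POST: every playtime utterance becomes data on someone else's server.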
The Military

US May Sell Armed Drones 131

An anonymous reader writes: Nations allied with the United States may soon be able to purchase armed, unmanned aircraft, according to an updated U.S. arms policy. Purchase requests will be evaluated on a case-by-case basis, and foreign military bodies would have to agree to a set of "proper use" rules in order for the U.S. to go ahead with the sale. For example: "Armed and other advanced UAS are to be used in operations involving the use of force only when there is a lawful basis for use of force under international law, such as national self-defense." These rules have done nothing to silence critics of the plan, who point out that the U.S. has killed civilians during remote strikes without much accountability. The drones are estimated to cost $10-15 million apiece.
Data Storage

Former NATO Nuclear Bunker Now an 'Airless' Unmanned Data Center 148

An anonymous reader writes: A German company has converted a 1960s nuclear bunker 100 miles from network hub Frankfurt into a state-of-the-art underground data center with very few operators and very little oxygen. IT Vision Technology (ITVT) CEO Jochen Klipfel says: 'We developed a solution that reduces the oxygen content in the air, so that even matches go out. It took us two years.' ITVT counts the European Air Force among its customers, so security is an even higher priority than in the average data center build; the refurbished bunker has walls 11 feet thick, and the central complex is buried twenty feet under the earth.
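
For a rough sense of what "even matches go out" requires: hypoxic fire-prevention systems typically hold oxygen near 15 percent by volume, versus roughly 20.9 percent in ordinary air. The back-of-the-envelope sketch below estimates how much of the room's air must be displaced by nitrogen; the target level is an assumption, since ITVT's actual figure isn't given.

```python
# Back-of-the-envelope estimate for a hypoxic-air fire-prevention setup.
# Assumed numbers: ordinary air is ~20.9% oxygen; open flames typically
# fail to sustain below roughly 15% oxygen. ITVT's real target is unknown.
NORMAL_O2 = 0.209   # volume fraction of oxygen in ordinary air
TARGET_O2 = 0.15    # assumed level at which a match goes out

# Fraction of the room's air to displace with pure nitrogen so that the
# remaining oxygen is diluted from NORMAL_O2 down to TARGET_O2.
nitrogen_fraction = 1 - TARGET_O2 / NORMAL_O2
print(f"Replace roughly {nitrogen_fraction:.0%} of the air with nitrogen")
# -> Replace roughly 28% of the air with nitrogen
```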

Comment Re:The solution is simple (Score 1) 227

That is harder than you might think. From Smarter Than Us (https://drive.google.com/file/...):

"Why aren’t they a solution at all? It’s because these empowered
humans are part of a decision-making system (the AI proposes cer-
tain approaches, and the humans accept or reject them), and the hu-
mans are the slow and increasingly inefficient part of it. As AI power
increases, it will quickly become evident that those organizations that
wait for a human to give the green light are at a great disadvantage.
Little by little (or blindingly quickly, depending on how the game
plays out), humans will be compelled to turn more and more of their
decision making over to the AI. Inevitably, the humans will be out of
the loop for all but a few key decisions.

Moreover, humans may no longer be able to make sensible de-
cisions, because they will no longer understand the forces at their
disposal. Since their role is so reduced, they will no longer compre-
hend what their decisions really entail. This has already happened
with automatic pilots and automated stock-trading algorithms: these
programs occasionally encounter unexpected situations where hu-
mans must override, correct, or rewrite them. But these overseers,
who haven’t been following the intricacies of the algorithm’s decision
process and who don’t have hands-on experience of the situation, are
often at a complete loss as to what to do—and the plane or the stock
market crashes. "

"Consider an AI that is tasked with enhancing shareholder value
for a company, but whose every decision must be ratified by the (hu-
man) CEO. The AI naturally believes that its own plans are the most
effective way of increasing the value of the company. (If it didn’t be-
lieve that, it would search for other plans.) Therefore, from its per-
spective, shareholder value is enhanced by the CEO agreeing to what-
ever the AI wants to do. Thus it will be compelled, by its own pro-
gramming, to present its plans in such a way as to ensure maximum
likelihood of CEO agreement. It will do all it can do to seduce, trick,
or influence the CEO into agreement. Ensuring that it does not do so
brings us right back to the problem of precisely constructing the right
goals for the AI, so that it doesn’t simply find a loophole in whatever
security mechanisms we’ve come up with."

Comment Re:Fear (Score 1) 227

>If you are nice to others they will generally be nice to you.
Only really matters if you and the others are roughly equal.
>Making other people happy makes you feel good too.
This is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
So?
>If you're smart it's better to uphold the law and not hurt others.
Why?

Most of the reasons people give for why it is reasonable to be nice to others (including most of the ones you listed) only apply when the parties have roughly similar amounts of power. If you want me not to worry about AI, argue that it is reasonable to be kind to ants, because that is the level of power difference we are talking about.

Personally, I think it is more important that we concentrate on AIs being ethical in general than on them doing exactly what we want.

Comment Smarter than us (Score 1) 227

I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it at https://intelligence.org/smart... or, since it is CC BY-NC-SA 3.0, you can also just download it from https://drive.google.com/file/...

The book contains the following summary:

1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.
3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).
4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but it is very hard to do so.
5. The relevant experts do not seem poised to solve this problem.
6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.

The only one of those statements I have much doubt about is 4. Even if the AIs are safe, they still probably will not be under human control.
