Comment: Really? (Score 2) 48

by Frosty Piss (#49772249) Attached to: Sniffing and Tracking Wearable Tech and Smartphones

The findings have raised concerns about the privacy and confidentiality wearable devices may provide.

Who ever suggested that there was any "privacy and confidentiality" of wearable devices that use Bluetooth? Who would even think such a thing? We're not talking about encrypted communications devices here...

Comment: Re:Photo? (Score 2) 175

I'm just confused about people deep-linking walled-off content. It's fucking pointless. It's a bit like me offering you this awesome picture of Mel Gibson riding a motorbike with a chipmunk balanced on the handlebars:
file:///home/cederic/pictures/awesome/mel/motorbike-chipmunk.png

Awesome, isn't it?

Comment: Re:32MB (Score 1) 222

by Cederic (#49767193) Attached to: Google Developing 'Brillo' OS For Internet of Things

It's a shame you replied to my post several branches down a conversation, as that security point is extremely pertinent.

Between the data slurping, the ad provision and the security concerns I'm reluctant to tread too heavily down the IoT route.

Then again, I have a smart TV with a built-in camera and voice commands. You could argue I've already lost.

Comment: Re:Yes to Brexit (Score 1) 391

Had the people living in Scotland voted to leave the UK then I'd have accepted that choice with no fuss at all.

I was, and still am, however, mocking the SNP's desire to be independent from the UK but part of the European superstate. I think that demonstrates that independence is not their genuine desire, and that leaves their motives open to challenge.

Comment: Re:Anthropomorphizing (Score 1) 395

by mpeskett (#49765453) Attached to: What AI Experts Think About the Existential Risk of AI

The concern isn't so much that the AI would have human-like goals that drive it into conflict with regular-grade humanity in a war of conquest, so much as that it might have goals that are anything at all from within the space of "goals that are incompatible with general human happiness and well-being".

If we're designing an AI intended to do things in the world of its own accord (rather than strictly in response to instructions) then it would likely have something akin to a utility function that it's seeking to maximise, and so implicitly has a goal defined by that function - some arrangement of the world that scores the most highly according to that function. Whether the nature of that goal is inscrutable beyond the wit of man, or utterly prosaic like the "paperclip maximiser"... if it doesn't share our values then the things that we value may end up disassembled for raw materials.
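The "utility function with an implicit goal" idea can be made concrete with a toy sketch. Everything here is invented for illustration (the world state, the actions, the paperclip-counting utility are all hypothetical); the point is only that an agent which greedily maximises a score will trample anything that score doesn't mention:

```python
# Toy sketch (hypothetical world and actions): a greedy agent that always
# picks whichever action yields the highest-utility resulting world.

def paperclip_utility(world):
    # The agent only scores paperclips; anything else in the world
    # (forests, people, whatever) contributes nothing to its utility.
    return world["paperclips"]

# Each action maps a world state to a new world state.
ACTIONS = {
    "make_paperclip": lambda w: {**w, "paperclips": w["paperclips"] + 1},
    "plant_tree": lambda w: {**w, "forests": w["forests"] + 1},
    "strip_mine_forest": lambda w: {**w, "forests": w["forests"] - 1,
                                    "paperclips": w["paperclips"] + 10},
}

def step(world, utility):
    # Evaluate every action and take the one whose outcome scores highest.
    best = max(ACTIONS.values(), key=lambda act: utility(act(world)))
    return best(world)

world = {"paperclips": 0, "forests": 100}
for _ in range(5):
    world = step(world, paperclip_utility)
# The agent strip-mines every turn: forests fall and paperclips climb,
# simply because forests never appear in its utility function.
```

Nothing about the agent is malicious; the forests are just raw materials from its point of view, which is exactly the "disassembled for raw materials" failure mode.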

In the admittedly unlikely event of a machine achieving a degree of intelligence that allows it to completely achieve any goal it happens to have, the only way for humanity to win is if it has goals that align near-perfectly with what's best for humanity, which is a vanishingly small target when you consider the universe of possible utility functions that aren't that.

Obviously not really a concern with the current state of technology, but if progress in making more intelligent machines follows anything like an exponential curve then we could fall foul of how bad our intuitions are around exponentials, and end up being taken by surprise by a machine that's rather abruptly more intelligent than we expected. Especially if we make it able to improve itself.

Comment: Re:What Would We Be Competing For? (Score 1) 395

by JoshuaZ (#49765283) Attached to: What AI Experts Think About the Existential Risk of AI

You are made of carbon. The AI can use that carbon, and your other atoms, for something else. Your atoms are nearby, and it doesn't need to move them up a gravity well. Why restrict what resources it uses when it doesn't need to? And if it finds the nearby atmosphere "toxic", why not respond by modifying that atmosphere? You are drastically underestimating how much freedom the AI potentially has. We cannot let it decide what it does and gamble that its decisions won't hurt us simply because you can conceive of possible ways it might achieve its goals without doing so. That's wishful thinking in a nutshell.
