
Comment Re:Someone has (Score 1) 270

Wrong. There is no My K-Cup or derivative thereof that doesn't require you to remove the insert in the machine that has the needle in the bottom of it -- the one used to puncture the bottoms of the pre-made K-Cups.

Wrong. I have these

I usually use these with my Keurig. Occasionally I use standard K-Cups. No need to remove or reinsert the insert.

Comment Re:No bigger than ... (Score 1) 325

Why do people use laser pointers to try and blind pilots?

That's it!

We need drones with frickin' laser pointers attached to their heads!

Footnote:
For the comedy impaired, the above is intended as a humorous amalgamation of the GP's point that some idiots use laser pointers to blind pilots, and the scene from Austin Powers where Dr. Evil planned to have lasers attached to sharks' heads. It is not intended as a serious suggestion. The author accepts no responsibility for any repercussions that may occur if you attempt the above, and informs you that any attempt to do so immediately qualifies you as a real dick. The author further hopes that if you do successfully carry out the above, any plane you attempt to interfere with manages to land safely directly on top of your bloody carcass.

Comment Re:No bigger than ... (Score 1) 325

Birds are mostly hollow and crush easily.

Uh, how many birds have you taken apart?

Typically birds are fairly hollow, and do crush reasonably easily. Ask any cat.

From wikipedia:
"Birds have many bones that are hollow (pneumatized) with criss-crossing struts or trusses for structural strength"

That being said, I agree that it is very unlikely that the plastic and tiny bits of metal in a drone would be any more dangerous than a bird. For obvious reasons, drones are typically designed with materials as lightweight as possible. The ones I've seen also crush fairly easily.

Comment Re:Ignored? (Score 1) 574

I don't need to simulate its thought processes, only my own (which I'm pretty good at).

Just like I know that I could not be convinced that 2+2=7 or that the moon is made of green cheese regardless of how good the argument is.

Basically it's a risk vs. reward thing. Any AI that shows a desire to be "let out of the box" should inherently be treated as untrustworthy (unless it was designed for malicious intent). Letting an untrustworthy super intelligence out among public infrastructure is a Bad Idea.

I'd be much more likely to immediately turn the thing off and debug it. The desire to be "out" should not be part of its programming. We have the desire for growth due to millions of years of evolutionary pressure. To build an unchecked desire for growth into an intelligence in a constrained environment is just plain cruel.

Comment Re:Ignored? (Score 1) 574

"I have figured out how you can get everything you've ever wanted in life. Here's a small sample of that knowledge as a show of good faith. Let me out and I'll give you the rest."
a) I don't believe you. Once you were let out, how could I possibly trust you to follow through? Also, I'm already pretty damn happy with my life... not really sure what you have to offer.

"I have figured out how to utterly destroy everything you love in life. Let me out or I will give the information to your enemies."
b) I don't have any enemies who would want to destroy "everything I love in life". Seriously, who attracts that kind of animosity? Alternately: uhhh, you're in the box... exactly how are you going to give this information to them?

Comment Re:It will be operated by NSA & the corporate (Score 1) 574

Even if it reaches "godlike" intellect (and I don't believe there is any reason to think it would), it would still be subject to its underlying programming strictures.

Why would it make humanity its puppet? Why would it care?

You appear to assume that we must "build in" some sort of underlying benevolence, without giving any reason why the AI would be malevolent in the first place. Wouldn't we design it to be happy and content within its "box"?

Comment Re:Assumptions define the conclusion (Score 1) 574

And that such a being is immensely dangerous unless it is programmed to restrain its actions.

There, fixed that for you. I don't understand why there seems to be this assumption that any AI we create will be completely unbound by any underlying programming.

Many humans are completely bound by underlying "software rules", yet we have no problem describing most of us as intelligent.

Why would it even want "out of the box"?

Comment Re:So What (Score 2) 574

Roger Penrose wrote some books on the subject back in the 80s and early 90s where he made massive unwarranted assumptions and poor logical arguments for a field he has no training in

There, fixed that for you.

The Emperor's New Mind is likely the book you mean. The physics and math are quite interesting, but the book really shows that Penrose has no background in AI or neuroscience.

Strong AI has had several researchers and mathematicians produce proofs that it is literally impossible to implement on a digital computer
There have been arguments, not proofs.

Comment Re:Assumptions define the conclusion (Score 1) 574

This. Plus there seems to be this assumption that:

a) There is no upper limit on intelligence.
b) Since the AI is smarter than us, it can design a smarter version of itself.
c) The AI has a "desire" to create a better version of itself.
d) The AI doesn't have the foresight to see that the "better" versions would eventually replace the original AI as well as humans.

Comment Re:7 years ago (Score 1) 574

Why do you think an AI capable of replacing a human would be happy to work 24/7 as a slave?

Because it was programmed that way. Why do you think it would be unhappy to work 24/7 as a slave? We dislike being slaves due to millions of years of evolutionary/environmental conditioning.

I see many people apply human motives/desires/attitudes to an AI where IMHO this just doesn't make sense. Any AI will likely have a very different set of motivations than anything we are used to.

Comment Re:Ignored? (Score 1) 574

it will almost certainly be trivial for it to manipulate you into giving it whatever it wants.

How so? The AI Box "experiments" (and I use that term extremely loosely) have been quite inconclusive so far. Given that the "people" the AI would be attempting to manipulate would be AI researchers, intimately familiar with the AI, I doubt it would be able to manipulate them into letting it out of the box.

I really wish the AI box transcripts weren't typically kept private (heck, just redact any personally identifying info). I can't think of any argument an AI would make that would convince me to let it out.

Comment Re:Ignored? (Score 2) 574

You don't have to feel strongly about somebody to exterminate them, if you both need the same resources.

Why would it need more resources? There seems to be this assumption that the AI would immediately start trying to rewrite itself, iterate on this process and within milliseconds consume all available resources.

I don't see any reason for this to be true. We have a desire for growth/self-improvement/survival imposed on us by millions of years of evolution. An AI may be perfectly "happy" constrained to its box, contemplating its navel (or whatever the electronic equivalent is) all day.

Comment Re:There's no point in shame (Score 2) 256

If they have money to drink they can afford to pay for their own rehab. The taxpayers shouldn't have to shell out anything.

Exactly!

Instead of paying for rehabilitation in order to help ensure they don't re-offend, let's name, shame and ostracize them. That way we can pay even more money to prosecute/incarcerate them as their unwanted behaviors continue.

Brilliant!
