Comment Re:Birthday paradox? (Score 5, Insightful) 334

The birthday paradox would mean that even if the average distance from a planet with intelligent life to the nearest other planet with intelligent life is thousands of light years, the likelihood that at least one pair of such planets exists much closer together than that is high. Those two planets would be like the two people in the paradox who share a birthday. That's a completely different idea from the one this article is about.
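
Here's a rough Monte Carlo sketch of that point in Python. The planet count and region size are numbers I made up purely for illustration; the only thing that matters is the comparison between the average spacing and the closest pair.

```python
# Monte Carlo sketch of the birthday-paradox-for-planets idea.
# All numbers here are assumed, not astronomical data.
import numpy as np

rng = np.random.default_rng(0)
n_planets = 2_000
box = 100_000.0  # light years per side of the (hypothetical) region

pts = rng.uniform(0.0, box, size=(n_planets, 3))

# All pairwise squared distances (brute force is fine at this size).
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)
nearest = np.sqrt(d2.min(axis=1))  # each planet's nearest neighbour

print(f"average nearest-neighbour distance: {nearest.mean():,.0f} ly")
print(f"closest pair anywhere:              {nearest.min():,.0f} ly")
# The closest pair typically comes out roughly an order of magnitude
# closer than the average spacing, just as the two shared-birthday
# people in the paradox are an outlier among the group.
```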

Comment Re:Free from captivity... for how long? (Score 1) 341

Good point. Maybe he could be declared mentally incompetent and placed in a non-jail institution. A zoo could be nice, but if he's a legal person, keeping him there would probably count as cruelty. If he's considered a person, we also wouldn't be able to let him live in the wild, I think; casting a person out into the wild would be considered cruel, too. I'm all for treating animals nicely, but granting legal personhood doesn't seem like the way to go about it. I think it would be more productive to treat mentally ill and mentally defective people better instead. And maybe also allow people who are suffering to end their lives the way they wish.

Comment Re:No, it's not even possible (Score 1) 181

Going into the 3rd dimension will mean even less surface area per transistor for heat to escape. We're not going to be able to pack millions more transistors per unit volume than we can now by stacking processor boards and putting cooling units between them, unless we can get the power consumption per transistor down by a factor of thousands without shrinking the transistors. It's theoretically possible, given our current knowledge of physics, but engineering such a system might take a while...
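
To put rough numbers on that (all of them assumed for illustration, not measured chip data), here's the back-of-envelope version:

```python
# Back-of-envelope for the stacking argument: power grows with the number
# of layers, but the surface the heat must cross barely grows.
def escape_area_cm2(side_cm: float, height_cm: float) -> float:
    """Outer surface of a box-shaped stack: top + bottom + four sides."""
    return 2 * side_cm ** 2 + 4 * side_cm * height_cm

side = 2.0               # cm, die edge (assumed)
layer_height = 0.01      # cm per stacked layer (assumed)
watts_per_layer = 50.0   # assumed power of one layer at today's efficiency

for layers in (1, 10, 100, 1000):
    power = watts_per_layer * layers
    area = escape_area_cm2(side, layer_height * layers)
    print(f"{layers:5d} layers: {power:8.0f} W over {area:7.1f} cm^2 "
          f"-> {power / area:8.1f} W/cm^2")
# Heat flux per unit of escape area climbs almost linearly with the layer
# count, so per-transistor power has to drop by roughly the same factor.
```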

Comment Re:Cost of certificates (Score 3, Informative) 238

You can get SSL certificates for free, but they're WAY more difficult to use than they need to be. I've installed certificates before, and it's a bunch of tedious, boring, repetitive work. What are computers for, if not to automate tedious, boring, repetitive work!? The computer should handle all of that for me; all I should have to do is click a button, for chrissake. That's what Let's Encrypt does.
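
For what it's worth, the "click a button" part amounts to something like this. A minimal sketch, assuming the real certbot client is installed and this machine actually serves the domain; the domain and email below are placeholders.

```python
# Sketch of driving Let's Encrypt via the certbot CLI from Python.
import subprocess

def obtain_cert(domain: str, email: str) -> None:
    """Ask certbot to prove control of the domain and fetch a certificate."""
    subprocess.run(
        [
            "certbot", "certonly",
            "--standalone",       # certbot answers the ACME challenge itself
            "--non-interactive",
            "--agree-tos",
            "-m", email,
            "-d", domain,
        ],
        check=True,
    )

def renew_all() -> None:
    """Renew anything close to expiry; typically run from cron or a timer."""
    subprocess.run(["certbot", "renew", "--quiet"], check=True)

if __name__ == "__main__":
    obtain_cert("example.com", "admin@example.com")  # placeholder values
```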

Comment Re:Drop HTTP completely? (Score 1) 238

There isn't such an extension already? If there isn't, someone should write one or alter an existing one to add that functionality, at least as an option. Then people should try it and let us know how painful it actually is to use. My guess would be: extremely painful for most users for the next several years, so painful that hardly anyone would use it willingly. Maybe some businesses could force it on their employees.

Comment Re:Drop HTTP completely? (Score 3, Informative) 238

The problem with HTTP is that a middleman can see and alter content. If a browser didn't warn when it encounters a self-signed certificate, HTTPS would be no more secure than HTTP -- all the middleman has to do is present his own self-signed certificate and decrypt/re-encrypt packets as needed. So browsers do prefer HTTPS, when the certificate can be verified. If you're using HTTPS and the certificate can't be verified, it's no more secure than HTTP unless the user is warned, and in fact the failed verification is a way of detecting that a middleman may be present. That's the whole reason for the death warning!
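
Here's a small sketch of that distinction using Python's ssl module (example.com is a placeholder). With verification on, a middleman's self-signed certificate makes the handshake fail, which is exactly the signal the browser warning surfaces; with verification off, the connection "works", but you're only encrypted to whoever answered.

```python
# HTTPS handshake with and without certificate verification.
import socket
import ssl

HOST = "example.com"  # placeholder

def handshake(verify: bool) -> str:
    if verify:
        ctx = ssl.create_default_context()      # CA chain + hostname checks
    else:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE         # accept anything, even a MITM's cert
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            return tls.version()  # reaching here means the handshake succeeded

# handshake(True) raises ssl.SSLCertVerificationError when the certificate
# can't be verified -- the programmatic version of the browser's death warning.
# handshake(False) happily completes against whoever is in the middle.
print(handshake(True))
```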

Comment Re:My Take (Score 1) 181

I still think it's worse than that. I think we'll be able to clone humans reliably and perform brain-content transfers between clones, or between a real brain and a simulated one, before we'll be able to reverse-engineer the brain or otherwise construct an artificial intelligence that isn't just a copy or near-copy of a brain. So practical immortality will come before artificial general intelligence, too.

Comment Re:My Take (Score 1) 181

Yes, you understand exactly!

But climbing higher in the tree will never get you to the moon. Programs that do better than humans in one particular area will not develop to the point that they have general intelligence. They'll be idiot savants, great at one specific thing to the point of being better than any human (like playing chess or Jeopardy, driving a car, performing surgery, or even writing a symphony), but a complete idiot at everything else.

I also think these programs will never get as good as the best humans at certain activities, like doing significant novel scientific research, proving hard math theorems, doing general programming, or translating languages. Some activities really do require general intelligence, not just one narrow specialty.

Comment Re:My Take (Score 1) 181

I think the situation is worse than that. Not only do we not have anything approaching a decent understanding of how actual intelligence works, but it's probably far too complicated for a human to understand at all. Perhaps we could construct a computer system that could in some sense "understand" how the brain works, and it could design a better brain. That better brain could in turn build a better computer system, ad singularity. Actually, I'd never thought of that approach before.

But does an agent have to understand how to make a thing before that thing can come into existence? Research in biology and AI has shown that it doesn't.

Comment My Take (Score 1) 181

As someone with a recent graduate degree in computer science and a fair amount of experience applying AI techniques, let me offer my take on the matter. What is referred to as "artificial intelligence" today will never, ever result in an agent we could consider intelligent by the standards of human intelligence. Nearly all AI research is focused on solving very narrow problems. To give one example, Watson is no more than a sophisticated search engine; it's barely more intelligent than Google's. It has no understanding of the queries it receives or the results it returns. It merely gives the illusion of understanding, because it often gives the results we'd expect from a human who does understand. One quote that sums this up: "Believing that writing these types of programs will bring us closer to real artificial intelligence is like believing that someone climbing a tree is making progress toward reaching the moon."
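
To make the "illusion of understanding" point concrete, here's a toy retrieval "QA" system in Python (the stored sentences are invented). It often returns the answer you'd expect while having no idea what any of the words mean:

```python
# A system can look like it comprehends a question purely by scoring
# word overlap against stored text. The snippets below are made up.
snippets = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Moon orbits the Earth roughly every 27 days.",
]

def answer(question: str) -> str:
    q_words = set(question.lower().split())
    # Return the stored sentence sharing the most words with the question.
    return max(snippets, key=lambda s: len(q_words & set(s.lower().split())))

print(answer("What is the capital of France?"))
# Looks like comprehension, but it's just counting shared tokens.
```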

The small percentage of AI research that really is trying to make something that can truly think and understand has had very limited results. The Cyc project, for example, is probably the most successful of these efforts, but it has also been widely criticized. One might say it understands what it is processing in some sense, but its results are not that impressive. If you have a specific problem domain, it's much more effective to use standard machine learning and natural language processing techniques, which generally do some kind of simple number crunching to perform a statistical analysis.
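
And here's a minimal example of the kind of number crunching I mean, using scikit-learn; the tiny training set and labels are invented for illustration.

```python
# Bag-of-words / TF-IDF features plus a linear classifier: weighted word
# counts, nothing resembling understanding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the game was exciting and the crowd loved it",
    "a thrilling match decided in the final minute",
    "the court ruled on the disputed contract",
    "the judge dismissed the lawsuit over patents",
]
labels = ["sports", "sports", "law", "law"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the referee stopped the match"]))  # likely ['sports']
# The "analysis" is a dot product over word weights; swap the training
# text and the same code classifies anything else just as blindly.
```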

Will we someday create an AI that's truly intelligent? I think so, and I have written up how I think it can be done, but I think we'll have cheap fusion power, interstellar space travel, and nanobots that perform apparent magic before we have AI that can do what an average human can do. I'm not worried about the singularity happening in my lifetime.
