Comment Re:Boy who cried wolf (Score 3, Informative) 163

Where do you get this "end of the world" thing? As for the claim of "alarmism", do you not remember the flu strain several years ago that tended to kill healthy people in the prime of their life, rather than "immunocompromised hosts"?

It's not that the reports are "alarmist". It's that (1) you don't understand the actual risk, and (2) you're pretending that the reports predict the end of the world.

Comment Re:Birthday paradox? (Score 5, Insightful) 334

The birthday paradox would mean that even if planets with intelligent life are an average of thousands of light years from the nearest alien planet with intelligent life, the likelihood of one pair of planets with intelligent life existing much closer together than that is high. Those two planets would be like the two people who share a birthday in the paradox. That's a completely different idea than this article is about.
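The same calculation behind the paradox is easy to sketch. This is just the standard birthday computation (a toy illustration; the planetary version would substitute pair distances for shared birthdays):

```python
from math import prod

def p_shared_birthday(n, days=365):
    # probability that at least two of n people share a birthday:
    # 1 minus the probability that all n birthdays are distinct
    p_all_distinct = prod((days - k) / days for k in range(n))
    return 1 - p_all_distinct

# with only 23 people the odds of one coincident pair already pass 50%
print(round(p_shared_birthday(23), 3))  # -> 0.507
```

The surprise is that a *specific* pair matching is rare, but with many pairs to choose from, *some* pair matching is likely -- exactly the "one unusually close pair of inhabited planets" point.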

Comment Re:Free from captivity... for how long? (Score 1) 341

Good point. Maybe he could be considered mentally incompetent and placed in a non-jail institution. I think a zoo could be nice, but if he's considered a legal person, that's probably considered cruelty. If he's considered a person, we also wouldn't be able to let him live in the wild, I think. Casting a person out into the wild would be considered cruel, too. I'm all for treating animals nicely, but granting legal personhood doesn't seem like the way to go about it. I think it would be more productive to treat mentally ill and mentally disabled people better instead. And maybe also allow people who are suffering to end their lives the way they wish.

Comment Re:No, it's not even possible (Score 1) 181

Going into the 3rd dimension will mean even less surface area per transistor for heat to escape. We're not going to be able to pack millions more transistors per unit volume than we can now by stacking processor boards and putting cooling units between them, unless we can get the power consumption per transistor down by a factor of thousands without shrinking the transistors. It's theoretically possible, given our current knowledge of physics, but engineering such a system might take a while...

Comment Re:Cost of certificates (Score 3, Informative) 238

You can get SSL certificates for free, but they're WAY more difficult to use than they need to be. I've installed certificates before, and it's a bunch of tedious, boring, repetitive work. What are computers for but to automate tedious, boring, repetitive work!? The computer should handle all of that work for me, and all I should have to do is click a button, for chrissake! That's what Let's Encrypt does.

Comment Re:Drop HTTP completely? (Score 1) 238

There isn't such an extension already? If there isn't, someone should write one or alter an existing one to add that functionality, at least as an option. Then people should try it and let us know how painful it actually is to use. My guess would be: extremely painful for most users for the next several years, so painful that hardly anyone would use it willingly. Maybe some businesses could force it on their employees.

Comment Re:Drop HTTP completely? (Score 3, Informative) 238

The problem with HTTP is that a middleman can see and alter content. If a browser doesn't warn when it encounters a self-signed certificate, then HTTPS would be no more secure than HTTP -- all the middleman has to do is use a self-signed certificate to decrypt/encrypt packets as needed. So browsers do prefer HTTPS, when the certificate can be verified. If you're using HTTPS and the certificate can't be verified, it's no more secure than HTTP unless the user is warned, and in fact it's a way of detecting that a middleman may be present. That's the whole reason for the death warning!
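The verified-vs-unverified distinction is visible directly in Python's standard `ssl` module (a minimal sketch of the two modes; no network connection involved):

```python
import ssl

# a default context refuses unverifiable (e.g. self-signed) certificates
# and checks the hostname -- this is what stops a middleman from silently
# substituting his own key
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True

# turning verification off gives encryption without authentication:
# the session is "encrypted", but possibly to the attacker
noverify = ssl._create_unverified_context()
print(noverify.verify_mode == ssl.CERT_NONE)  # True
```

An unverified context is essentially the browser-without-warnings scenario above: the handshake succeeds against any certificate, including one minted on the spot by a man in the middle.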

Comment Re:My Take (Score 1) 181

I still think it's worse than that. I think we'll be able to clone humans reliably and transfer brain contents between clones, or between a real brain and a simulated one, before we'll be able to reverse-engineer the brain or otherwise construct an artificial intelligence that isn't just a copy or near-copy of a brain. So practical immortality will come before artificial general intelligence, too.

Comment Re:My Take (Score 1) 181

Yes, you understand exactly!

But climbing higher in the tree will never get you to the moon. Programs that do better than humans in one particular area will not develop to the point that they have general intelligence. They'll be idiot savants, great at one specific thing to the point of being better than any human (like playing chess or Jeopardy, driving a car, performing surgery, or even writing a symphony), but a complete idiot at everything else.

I also think these programs will never get as good as the best humans at certain activities, like doing significant novel scientific research, proving hard math theorems, doing general programming, or translating languages. Certain activities do require general intelligence, not just one narrow specialty.
