Comment Re:Fuck Me (Score 2) 553

Christ almighty, this beast is a fucking monster. What's next, a shell and a userland?

According to the slashdot editors, the next thing is clearly debiand!

Apparently it is to be the systemd module that applies the Debian logo/filter to front-page /. articles, to clearly indicate a story about generic Linux software, made by a guy at Red Hat, that emulates behavior from Microsoft Windows...

After that they will install the new shutupd module, that does nothing but write "Woah slow down there cowboy, you last posted 140*10^12 minutes ago, try again later to give others a chance" to stdout - before repeatedly restarting itself for no good reason, as every proper init service boot manager network shell app should do

Comment Re: what about spectrums rights? (Score 1) 104

I am all for legalizing cannabis. I have no interest in legalizing heroin, crack, meth, PCP, etc.

I'd rather abusers spend time in jail than around me and mine.

We have to draw a line somewhere, and crossing that line is how the black market makes money. Taking away that line altogether is akin to anarchy.

Stealing it without paying for it would still be a crime. Fraud would still be a crime. It wouldn't be anarchy, no, not nearly.

Comment Re:I no longer think this is an issue (Score 2, Insightful) 258

But why would a machine have any goal if it is not motivated in the first place?

Same reason kids get sent to soccer lessons or swimming lessons or piano lessons the kid didn't want to take.

In the above example, it is the parents "programming" the kid's behavior (even if that programming results in the child acting out later in life, as such actions can cause).

In the AI example, the essence is the same. An AI would have a goal because we programmed such a goal into it.

That isn't to say an AI must be programmed with a goal, it fully depends on how we go about constructing a given AI.

If the AI is intelligent because we are simulating a brain, nervous system, and hormonal systems, along with simulated inputs and outputs, that AI is likely to have goals (assuming it isn't driven insane by gaps in our knowledge in said simulation, of course).

If the AI was brought forth in a brute-force manner, or comes about from emergent properties, it is impossible to guess at, or even relate to, its thinking.
It may have goals similar to how we do. It may have goals brought about by completely different emergent properties. It may have no goals but what we program, or even no goals at all.
It's impossible to say without some knowledge of the process creating the AI, and at this point in time no such thing exists to have knowledge about.

But we know we humans have goals (or at least some of us do), so if an AI is a strict simulation of a human, it will have goals just like we do. So we know for a fact it is possible for a thinking, conscious being to have goals (humans being the evidence).

We don't know for sure if it's possible to not have goals in such a situation, but so far there is no evidence it isn't possible, so it is quite premature to rule it out at our current stage of understanding.

Comment Re:Conclusion goes too far? (Score 1) 159

The /8 part may be a stretch, but it would not surprise me if they run the nation on the 10.x.x.x range (or at least the public facing stuff).

They can still nest the B and C ranges inside that, and you would have to know your stuff to reach the outside world via smuggled-in equipment. And such attempts probably stick out like a sore thumb to the uniforms operating the national firewall.
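As a sketch of that nesting (the 10.x.x.x deployment itself is pure speculation from the comment above), Python's standard ipaddress module shows how "class B"- and "class C"-sized blocks fit inside a single private /8:

```python
import ipaddress

# RFC 1918 private /8 - the range speculated about above
nat = ipaddress.ip_network("10.0.0.0/8")

# Carve the /8 into "class B"-sized /16 subnets
b_blocks = list(nat.subnets(new_prefix=16))
print(len(b_blocks))           # 256 distinct /16 networks

# Each /16 in turn holds 256 "class C"-sized /24 subnets
c_blocks = list(b_blocks[0].subnets(new_prefix=24))
print(len(c_blocks))           # 256

# Any 10.x.x.x address falls inside the outer /8
print(ipaddress.ip_address("10.123.45.67") in nat)   # True
```

So a whole country's worth of internal networks fits comfortably: 256 × 256 /24s, roughly 16.7 million addresses in total, all invisible from the public internet behind a national NAT/firewall.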
