
Comment Re:I just don't get that. (Score 2) 117

I agree the justice system has gone haywire.

I agree the justice system has no business going haywire.

I agree the justice system has no business treating one person differently from another.

I agree that what was done was completely wrong, not just in this case but in many others.

I've said as much, repeatedly, on The Guardian's website on relevant topics. This isn't a new opinion for me.

There is a difference between having no sympathy for the guy (IMHO he deserved it) and agreeing with the justice system. I agree, and always have, with Tolkien's phrasing of it: "Deserved death? I daresay he did. I daresay there are many who live who deserve to die. I daresay there are many who've died who deserve life. Can you give them that also?" Whilst I admit that I'm "quick to judge" on occasion, I heed Tolkien's words and do not believe that "deserving" is sufficient to warrant inflicting what is "deserved". I do not believe retribution is a functional way to go about things. Trashing a hard drive with a sledgehammer might stop a software bug from affecting you, but it doesn't actually fix anything. To do that, you have to apply not retribution but therapy, fixing the defects.

The same is true of people. Fixing defects of character is harder, but certainly achievable in most cases. That heeds Tolkien/Gandalf's advice, leaves the world a richer place, and is generally a Good Thing. It's also cheaper than inflicting punishment. A lot cheaper, if the world is a lot richer for it.

He has smarts and he has savvy. With a little examination of why he chose the path he was on, and some tests, it would not be hard to figure out how he could either offer the same service in essentially the same way but in a protected manner, or (if he preferred) do something different that makes use of his skills and knowledge.

Bankrupting him has left the world poorer, because there's no way on Earth anyone will convince him to be more charitable and considerate now, and that's the only way the world would ever benefit from his skills and know-how.

To me, this is simple economics. At vast expense, the US has turned a person who was merely dysfunctional, but nonetheless a potential asset to society if he could be persuaded, into a dysfunctional wreck with a chip on his shoulder the size of the Empire State Building who is never going to let the world see the positive in his abilities. In short, by clocking up a huge liability, the US has achieved the dubious distinction of turning an asset into an additional liability.

I hold that there is always a solution that is both economically sound and ethically sound over the long term and across society as a whole, and that on closer examination such solutions will always prove superior to those that appear ethically sound but are economically unsound. Most of what is truly ethical also boosts some key aspect - a person, society or the planet - in the long term by more than it costs, and is thus automatically economically sensible as well. Everything that is truly unethical may produce some short-term benefit of some kind to some person, but is invariably expensive to everyone and everything in the long run. In consequence, even the ethical things with no obvious benefits will be cheaper than the great burdens created by the unethical.

I would not do well in a Star Trek universe.

Comment I've a really hard time sympathizing. (Score 2) 117

A parasite (he didn't get a fleet of flashy cars by donating disk space to anyone) gets sucked dry by a bigger, nastier parasite.

Sorry, but if you live by a dog-eat-dog creed, don't expect tears when your pet poodle becomes a predator's dessert.

I'm sympathetic towards isoHunt, who got crippled by the UK government, as I'm willing to bet that the people after illegal ISOs simply searched elsewhere. isoHunt is, though, a major source of information on ISOs for F/L/OS software, which is entirely legal. They got a raw deal there, because of the bad name the *AA have given torrents. Blocking the others won't do the UK any good, but that's not the point. Nor is it the point that these services index, not host. The point is that it doesn't matter whether the links point to legitimate or illegitimate content; such sites are tarnished not by what they index but by the mode of transport used.

Kim DotCom is another matter. He raked in an awful lot of money by doing very, very little. He'd make a great bank CEO or politician, such is his level of verminicity. Had he done essentially the same with far less profit (it's OK for him to live, just not OK for him to own half the cars in New Zealand), far less arrogance (like I said, a bank CEO or politician), and far less swagger (maybe, just maybe, a touch of humility), I might pity him more. The humble earn at least some respect for being humble. It's rare enough.

If he'd presented his service as a "common carrier", then that too would be worth respect. That's legal, and it's all about NOT looking at what's there and NOT being shot in the process. DotCom's approach was to be a braggart. Sorry, but that kills any respect.

As judges are renowned for disliking the arrogant, swaggering braggart type, that might well have cost him every court case contested. Even on the rare occasion that justice is blind, it still has a sense of smell, and arrogant, swaggering braggarts stink.

Comment Re:AI researcher here (Score 1) 455

As I've said, that's the field known as Genetic Algorithms. It's a fun area and highly promising in some fields of work, but the contexts are too simple and the algorithms are too naive. A good example of a naive Genetic Algorithm is the kind used by stock brokers to game the system. It "works", but only if the system is well-behaved. But, by operating en masse, such algorithms cause the system to stop being well-behaved. Because they're naive, they're incapable of evolving to deal with this.
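
To make the naivety concrete, here's a toy sketch (Python; the market model, the trading rule and all names are my own invention, not anyone's production system). The fitness function scores candidates against a frozen price series, which is exactly the "well-behaved system" assumption:

```python
import random

random.seed(42)
# A frozen, "well-behaved" market: gentle upward drift plus noise.
PRICES = [100 + i + random.gauss(0, 3) for i in range(200)]

def fitness(threshold):
    """Profit of a trivial rule: buy dips, sell rallies, sized by threshold."""
    cash, held = 0.0, False
    for prev, cur in zip(PRICES, PRICES[1:]):
        if not held and cur < prev - threshold:
            cash -= cur; held = True      # buy the dip
        elif held and cur > prev + threshold:
            cash += cur; held = False     # sell the rally
    return cash

def evolve(generations=50, pop_size=30):
    pop = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                                # selection
        pop = parents + [p + random.gauss(0, 0.5) for p in parents]  # mutation
    return max(pop, key=fitness)

print("best threshold:", evolve())
```

The flaw is right there in the design: PRICES never reacts to the evolved strategy. Deploy thousands of these en masse and the real price series stops resembling the training series, but nothing in the algorithm lets it evolve against the system it is itself perturbing.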

Comment Re:AI researcher here (Score 1) 455

No, I don't. I do not subscribe to Searle's Chinese Room argument. You do not understand my argument, and that's perfectly obvious. The more you shout, the deafer you show yourself to be.

No, it's not "completely false". It's standard AI thought. Your examples show nothing because you do not comprehend the thought. You'd probably do better to ASK once in a while than to argue with someone older and wiser. Now get off my lawn!

Comment Re:Emergent Intelligence? (Score 1) 455

That's an argument I can buy. Absolutely: with a NN, the topology is static. Unless every node is connected to every other node, bi-directionally, you cannot emulate a dynamic topology. And that's assuming a fixed number of neurons. We know that, in the brain, the number of neurons varies according to usage. So even a fully-connected NN would not be sufficient unless it started off at the maximum potential size.
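
A minimal sketch of that point (Python/NumPy; the sizes and names are illustrative only): the only way a fixed-topology net can mimic growth is to allocate the maximum-size, fully-connected weight matrix up front and mask the links that don't "exist" yet.

```python
import numpy as np

N_MAX = 16                                 # maximum potential number of neurons
rng = np.random.default_rng(0)
weights = rng.normal(size=(N_MAX, N_MAX))  # every node to every node, both directions
mask = np.zeros((N_MAX, N_MAX))            # which connections currently exist
mask[:8, :8] = 1                           # start with a small active sub-network

def step(state):
    """One update of the recurrent net; masked-out weights are dead links."""
    return np.tanh((weights * mask) @ state)

# "Growing" a neuron or synapse is just unmasking, which only works because
# the full-size matrix was allocated at the maximum potential size up front:
mask[8, :8] = 1                            # new neuron 8 now listens to the others
state = step(rng.normal(size=N_MAX))
```

Anything smaller than the fully-connected maximum leaves some topologies permanently out of reach.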

I agree that to evolve, you've got to have an environment to evolve in, a means to evolve and a pressure to evolve. The AI field that looks at this sort of thing is "Genetic Algorithms", and there are a few systems in that area which look promising.

It's my thesis, though, that Strong AI must be more complex than even that. All higher life-forms have not only an external environment but an internal one as well. There is a simulation of the local "world" in the brain that is updated by the senses, and this is the "reality" we perceive. The consciousness is not directly connected to any sense, which is why you can induce synaesthesia. The mind, therefore, evolves according to this simplified internal model, and not the external reality.

The idea of Emergent Intelligence is therefore very appealing. It is possible to construct a virtual world for the Artificial Life, and a second virtual world maintained by the Artificial Life. This doesn't require knowing how to develop intelligence or how to define it. They're just virtual worlds, nothing more. All you need then is an initial condition and a set of rules. These would be more sophisticated than a conventional genetic algorithm, but based on the same idea. If you don't know what something will be, but know how to determine how close you are, heuristics are sufficient for you to close the gap as much as you like.
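
That last claim is just hill climbing, and a toy sketch shows it (Python; the hidden target and step sizes are invented for illustration). The searcher never sees the target, only a distance score, yet it closes the gap arbitrarily well:

```python
import random

random.seed(1)
TARGET = [random.random() for _ in range(8)]    # unknown to the searcher

def distance(candidate):
    """The only knowledge allowed: how close we currently are."""
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

candidate = [0.5] * 8
for _ in range(10_000):
    trial = [c + random.gauss(0, 0.05) for c in candidate]
    if distance(trial) < distance(candidate):   # keep any move that closes the gap
        candidate = trial

print("remaining gap:", distance(candidate))    # shrinks as far as you care to run it
```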

This would not be "Artificial Intelligence" in the usual sense, in that the intelligence emerged with no human intervention past the initial state. It was not made; it's not an artifact; it's perfectly natural, but in an artificial world running on an artificial computer. It is possible to determine whether this universe is a simulation running on a computer in a universe of the same size, but it is not possible if this universe is a simulation running in a larger universe. The decision on whether something is artificial cannot, then, be governed by the platform, because we've no idea whether this is the top level or not, and we cannot know. Nonetheless, we're indistinguishable from a natural lifeform, so it has to be that property that decides whether something is natural.

An imitation of the whole human brain is planned in Europe: the EU is building a massive supercomputer that will run a neuron-for-neuron (and presumably complete-connectome) simulation of the brain, for the purpose of understanding how it works internally. I think that's an excellent project for what it is designed for, but I don't think it'll be Strong AI.

Let's say, however, you built the following (a structural sketch in code follows below):

- a virtual world at a reasonably fine grain (it doesn't have to be too fine, just good enough);
- a second virtual world at a much coarser grain, using lossy encoding in a way that preserves some information from all prior states;
- a crude set of genetic algorithms that maps the outer virtual world onto the inner virtual world;
- an independent set of genetic algorithms that decides what to do (but not how);
- a set for examining the internal virtual world for past examples of how;
- a set for generating an alternative method for how, without recourse to memory;
- a set for picking the method that sounds best and implementing it;
- and, finally, an extensive set that initially starts off reconciling differences between what was expected and what happened.

That should be sufficient for Emergent Intelligence of some sort to evolve.
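
Here's that architecture reduced to a structural sketch (Python; every class and name is invented, and each "set of genetic algorithms" is a stub so only the shape is visible - this is nowhere near a working mind):

```python
from dataclasses import dataclass, field

@dataclass
class OuterWorld:
    """The reasonably fine-grained simulated environment."""
    state: dict = field(default_factory=dict)

@dataclass
class InnerWorld:
    """The coarse, lossy model the organism keeps; retains traces of all past states."""
    history: list = field(default_factory=list)

    def absorb(self, observation):
        # Stand-in for lossy encoding: keep a degraded trace of every state.
        self.history.append(hash(str(observation)) % 1000)

class GASet:
    """Stand-in for one independently-evolving set of genetic algorithms."""
    def __init__(self, role):
        self.role = role
    def run(self, *args):
        return f"<{self.role}>"

perceive  = GASet("map the outer virtual world onto the inner one")
decide    = GASet("decide WHAT to do, but not how")
recall    = GASet("search the inner world for past examples of HOW")
invent    = GASet("generate a fresh HOW, without recourse to memory")
choose    = GASet("pick the best-sounding method and implement it")
reconcile = GASet("reconcile what was expected against what actually happened")
```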

Comment Re:AI researcher here (Score 1) 455

"the human brain is ultimately nothing more than a gigantic conglomerate of gates itself"

Which is sufficient evidence, as far as I'm concerned, that you didn't read my post and instead replied to what you thought I should have written, according to what you think I should believe.

Guess what. You're wrong.

Comment Re:Philosophy -- graveyard of fact (Score 3, Interesting) 455

Not true. The Scientific Method is itself a philosophy, as is mathematics. (Mathematics is not a science; it is a humanity, and specifically a philosophy.) Mathematics is the core of all science.

Your understanding of philosophy clearly needs some refreshing. I suggest you start with Bertrand Russell's formalization of logic and progress to John Patrick Day's excellent textbook on mathematical philosophy. It's clear you do not know what serious (as opposed to populist) philosophers are concerned with. This is no better than judging physics by Fleischmann and Pons' cold fusion work, or judging biology by examining 1960s American perversions of brain surgery.

You've got to look at the real work. And the odds are that more of what's in your computer was developed by a philosopher than ever came close to a "non-philosophical" scientist (whatever one of those might be).

Comment Re:AI researcher here (Score 2) 455

Expert systems are not intelligent. They're nothing more than a fancy version of the old Animals guessing game. If/then/else isn't even weak AI, and a binary search of an index is just a search. It doesn't mimic an expert, because experts only start with simple diagnostic tools like that. That's the beginning, not the end. Experts know when answers are off and know how to recover - when it's unimportant and when it's absolutely critical. Experts also know how to handle cases never encountered before, because they don't just know a bunch of checklist questions; they know how information relates, and they know the patterns that are generic across all cases, known and unknown. You can't program an Expert System Shell with Category Theory maps, and Prolog isn't going to know what to do with meta-abstraction.
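
To see how little is going on, here's a tiny illustration (Python; the rules are toy examples of my own). A classic expert-system shell is a walk down a question tree - if/then/else plus lookup, exactly like Animals:

```python
# Each node is (question, yes-branch, no-branch); leaves are answers.
TREE = ("Does it have feathers?",
        ("Can it fly?", "sparrow", "penguin"),
        ("Does it purr?", "cat", "dog"))

def diagnose(node, answer):
    """One if/then/else per question; no understanding anywhere."""
    question, yes, no = node
    branch = yes if answer(question) else no
    return branch if isinstance(branch, str) else diagnose(branch, answer)

print(diagnose(TREE, lambda q: input(q + " [y/n] ").strip().lower() == "y"))
```

A human expert who gets "penguin" for an ostrich knows something is off and can recover; the shell cannot, because the checklist is all it has.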

Neural Networks are debatable. Fundamentally, a Neural Network is a very large set of multi-input gates, nothing more. If it's trained, then all you've done is simplify the derivation of the gates; you've not added any intelligence. Self-organizing networks are another beast entirely. These can be argued to be "intelligent", since the human brain is ultimately nothing more than a gigantic conglomerate of gates itself. The only reason you have the illusion of intelligence is that there's self-organization involved. However, no self-organizing neural net on any computer yet built is powerful enough to simulate the functioning of a nematode's brain. Strong AI, which is what most non-CS people think of as AI, cannot yet even be described. We have no comprehension of what it is, and therefore cannot build it.
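
The "multi-input gates" point is easy to make literal (Python; the weights are chosen by hand for illustration). A single neuron with a threshold IS a logic gate, and training merely derives the thresholds instead of hand-writing them:

```python
def neuron(inputs, weights, bias):
    """Threshold unit: fires iff the weighted sum clears the bias."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

AND = lambda a, b: neuron([a, b], [1, 1], -1.5)  # fires only if both inputs fire
OR  = lambda a, b: neuron([a, b], [1, 1], -0.5)  # fires if either input fires
NOT = lambda a:    neuron([a],    [-1],    0.5)  # inverts its single input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NOT(a))
```

Wiring millions of such gates into a fixed network adds no intelligence by itself; whatever is interesting lives in the self-organization of the wiring.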

What the professor is really talking about, though, as indicated by the reference to cellular biology, is not AI but ALife. Nothing currently in existence can be called true artificial life, although the Bugs program from Scientific American is a good start. Artificial Life is many orders of magnitude harder than Strong AI. It's not enough to emulate the properties of intelligence; you have to emulate the reason for there needing to be intelligence in the first place. Even those working on Strong AI aren't tackling such self-consistency issues; they're far too complex for them.

(It's clear that most AI work is incompatible with a self-consistent Strong AI, so I'm inclined to believe the Singularity isn't going to be here for a while. Progress is, as others have noted, somewhere between non-linear and exponential, but even if we assume exponential, it'll be 75-150 years before Strong Artificial Life is within reach, where Strong ALife is Strong AI plus Artificial Life plus self-consistency.)

Comment Re:Why bother? (Score 2) 50

There are lots of pressing problems.

Cyphers, as opposed to codes, have well-defined functions (be it an algorithm or a lookup table) which map the input to the output. The same functions are applied in the same way across the entire input. Unless the functions are such that the output is truly indistinguishable from a random oracle (or, indeed, any other Oracle product), information is exposed - both information about the message and information about the method for producing the cyphertext. Since randomness can tell you nothing, by definition, the amount of information exposed cannot exceed the information limit proposed by Shannon for a channel whose bandwidth is equal to the non-randomness of the output.

(A channel is a channel is a channel. The rules don't care.)
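
A rough sketch of how you'd measure that bound (Python; the sample cyphertext is a stand-in, not real cypher output). Estimate the byte-level entropy; any shortfall from 8 bits/byte is non-randomness, i.e. an upper bound on the width of the leaked-information channel:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of the byte distribution."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

cyphertext = bytes.fromhex("deadbeef" * 64)   # stand-in; substitute real output
h = entropy_bits_per_byte(cyphertext)
leak_bound = (8.0 - h) * len(cyphertext)      # bits exposed, at most
print(f"entropy: {h:.3f} bits/byte; leakage bound: {leak_bound:.1f} bits")
```

A good cypher should measure close to 8 bits/byte (sample-size effects aside); anything materially less is structure an analyst can, in principle, exploit.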

So, obviously, the questions are: how do you get at the greatest amount of the unencrypted data that's encoded in that non-randomness, and how do you then actually extract the contents?

In other words, is there a general purpose function that can do basic, naive cryptanalysis? And what, exactly, can such a function achieve given a channel of N bits and a message of M bits?

In other words, how much non-randomness can a cypher have before you know for certain that there's enough information leakage in some arbitrary cypher for the most naive cryptanalysis possible (excluding brute force, which isn't analytical, and isn't naive anyway, since you have to know the cypher) to be able to break that cypher in finite time? (Even if that's longer than the universe is expected to last.)

Is there some function which can take the information leakage rate, plus the type and complexity of the cypher, and produce a half-life for that class of cyphers - the point at which you can expect half of a random selection of cyphers (out of all cyphers with the same characteristics) to have been broken?

If you can do that, then you know how complex you can make your cypher for a competition page, and how simple you can afford to make it when building a TrueCrypt replacement.
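
Purely as a toy model of that half-life idea (Python; the formula and all numbers are invented for illustration, not established cryptanalysis): assume analysis recovers a fixed number of key bits per observed block, and that a cypher in the class "dies" once accumulated leakage reaches its key size. An exponential survival model then gives a half-life directly:

```python
import math

def cypher_half_life(key_bits: float, leak_bits_per_block: float,
                     blocks_per_year: float) -> float:
    """Years until half of such cyphers are expected to be broken,
    under an exponential survival model with rate = leakage / key size."""
    rate = leak_bits_per_block * blocks_per_year / key_bits   # expected breaks/year
    return math.log(2) / rate

# e.g. a 256-bit cypher leaking a trillionth of a bit per block,
# with a billion blocks observed per year:
print(f"{cypher_half_life(256, 1e-12, 1e9):.2e} years")
```

Real leakage certainly wouldn't accumulate this linearly, but a model of this shape is what the question above is asking for.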

Comment Re:Clock -- Time is running out! (Score 1) 50

Damn. I was hoping he was going to say that the solution was written down but the piece of acid-free archival paper had been cut into segments, placed in acid-free envelopes, in turn placed in argon-filled boxes, which in turn were buried at secret locations, with the GPS coordinates for each segment written in encrypted format in the will.

Comment OneCore? (Score 2) 171

*Freddie Mercury impression*

One Core, One System!
The bright neon looks oh-so tacky.
They've screwed it up, it's now worse than wacky!
Oh oh oh, give them some vision!

No true, no false, the GUI will only do a slow waltz
No blood, no vein, MS zombies wanna munch on your brain
No specs, no mission, the code's just some fried chicken!

*Switches to Gandalf*

Nine cores for mortal tasks, doomed to die()
Seven for the Intel lords, in their halls of silicon
Three for the MIPS under the NSA
One for the Dark Horde on their Dark Campus.
One Core to rule them all, One Core to crash them,
One Core to freeze them all and in the darkness mash them!
In the land of Redmond, where the dotnet lies!
