
Comment Re:Emergent Intelligence? (Score 1) 455

That's an argument I can buy. Absolutely, with a NN, the topology is static. Unless every node is connected to every other node, bi-directionally, you cannot emulate a dynamic topology. And that's assuming a fixed number of neurons. We know that, in the brain, the number of neurons varies according to usage. So even a fully-connected NN would not be sufficient unless it started off at the maximum potential size.
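As a minimal sketch of that last point (my own toy code, assuming nothing beyond NumPy; every name and size is a placeholder): a net that starts fully connected at its maximum potential size can at least fake a dynamic topology by masking connections on and off.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 16                                  # maximum potential number of neurons, fixed up front
    W = rng.normal(0.0, 0.1, size=(N, N))   # fully-connected, bi-directional weights
    mask = np.ones((N, N))                  # 1 = connection "exists", 0 = pruned

    def step(state):
        """One update of every neuron; pruned connections contribute nothing."""
        return np.tanh((W * mask) @ state)

    # "Rewiring" the topology is just editing the mask:
    mask[3, 7] = 0.0    # prune one connection
    mask[7, 3] = 1.0    # keep (or regrow) the reverse direction

    state = rng.normal(size=N)
    state = step(state)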

I agree that to evolve, you've got to have an environment to evolve in, a means to evolve and a pressure to evolve. The AI field that looks at this sort of thing is "Genetic Algorithms", and there are a few systems in that area which look promising.

It's my thesis, though, that Strong AI must be more complex than even that. All higher life-forms have not only an external environment but an internal one as well. There is a simulation of the local "world" in the brain that is updated by the senses, and this is the "reality" we perceive. The consciousness is not directly connected to any sense, which is why you can induce synaesthesia. The mind, therefore, evolves according to this simplified internal model, and not the external reality.

The idea of Emergent Intelligence is therefore very appealing. It is possible to construct a virtual world for the Artificial Life and a second virtual world maintained by the Artificial Life. This doesn't require knowing how to develop intelligence or how to define it. They're just virtual worlds, nothing more. All you need then is an initial condition and a set of rules. These would be more sophisticated than a conventional genetic algorithm, but based on the same idea. If you don't know what something will be, but know how to determine how close you are, heuristics are sufficient to close the gap as much as you like.
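As a toy illustration (my own sketch, not any particular GA package): the search below never sees the target directly, only a fitness heuristic saying how close a candidate is, and that is enough to close the gap.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]       # stands in for "what something will be"

    def fitness(genome):
        # The heuristic: how close are we? (Lower is better.)
        return sum(g != t for g, t in zip(genome, TARGET))

    def evolve(pop_size=30, genome_len=8, generations=200, mutation_rate=0.05):
        pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)           # selection pressure
            if fitness(pop[0]) == 0:        # close enough (here: exact)
                break
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(genome_len)
                child = a[:cut] + b[cut:]   # crossover
                child = [1 - g if random.random() < mutation_rate else g for g in child]
                children.append(child)
            pop = children
        return min(pop, key=fitness)

    print(evolve())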

This would not be "Artificial Intelligence" in the sense that the intelligence emerged with no human intervention past the initial state. It was not made, it's not an artifact; it's perfectly natural, just living in an artificial world running on an artificial computer. It is possible to determine whether this universe is a simulation running on a computer in a universe of the same size, but it is not possible if this universe is a simulation running in a larger universe. The decision on whether something is artificial cannot, then, be governed by the platform, because we've no idea whether this is the top level and no way to find out. Nonetheless, we're indistinguishable from a natural lifeform, so it must be that indistinguishability, rather than the platform, that decides whether something is natural.

An imitation of the whole human brain is planned in Europe. The EU is building a massive supercomputer that will run a neuron-for-neuron (and presumably complete connectome) simulation of the brain for the purpose of understanding how it works internally. I think that's an excellent project for what it is designed for, but I don't think it'll be Strong AI.

Let's say, however, you built the following:

- a virtual world at a reasonably fine grain (it doesn't have to be too fine, just good enough);
- a second virtual world that is much coarser-grained and uses lossy encoding in a way that preserves some information from all prior states;
- a crude set of genetic algorithms that maps the outer virtual world onto the inner one;
- an independent set of genetic algorithms that decides what to do (but not how);
- a set for examining the internal virtual world for past examples of how;
- a set for generating an alternative method for how, without recourse to memory;
- a set for picking whichever method sounds best and implementing it;
- and an extensive set that initially starts off reconciling differences between what was expected and what happened.

That should be sufficient for Emergent Intelligence of some sort to evolve.
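A very rough structural sketch of that arrangement, shrunk to toy size (every name and number below is my own placeholder, not a real system): an outer fine-grained world, a coarser lossy inner record of it, and a loop that recalls a "how" from memory, invents one without memory, picks one, and reconciles expectation against outcome.

    import random

    class OuterWorld:
        """Fine-grained world: a hidden value the agent tries to match."""
        def __init__(self):
            self.hidden = random.randint(0, 1023)
        def observe(self, guess):
            return abs(self.hidden - guess)          # only a coarse error signal leaks out

    class InnerWorld:
        """Coarser, lossy model: keeps some information from every prior state."""
        def __init__(self):
            self.history = []
        def update(self, guess, error):
            self.history.append((guess, error))
        def recall_best(self):
            return min(self.history, key=lambda h: h[1]) if self.history else None

    def agent_step(outer, inner):
        remembered = inner.recall_best()             # a "how" from past examples
        invented = random.randint(0, 1023)           # a "how" without recourse to memory
        guess = invented if remembered is None else random.choice([remembered[0], invented])
        expected = 0 if remembered is None else remembered[1]
        actual = outer.observe(guess)
        inner.update(guess, actual)                  # reconcile expectation with what happened
        return expected, actual

    outer, inner = OuterWorld(), InnerWorld()
    for _ in range(100):
        agent_step(outer, inner)
    print("best so far:", inner.recall_best())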

Comment Re:AI researcher here (Score 1) 455

"the human brain is ultimately nothing more than a gigantic conglomerate of gates itself"

Which is sufficient evidence, as far as I'm concerned, that you didn't read my post and replied to what you thought I should have written according to what you think I should believe.

Guess what. You're wrong.

Comment Re:Philosophy -- graveyard of fact (Score 3, Interesting) 455

Not true. The Scientific Method is itself a philosophy, as is mathematics. (Mathematics is not a science, it is a humanity and specifically a philosophy.) Mathematics is the core of all science.

Your understanding of philosophy clearly needs some refreshing. I suggest you start with Bertrand Russell's formalization of logic and progress to John Patrick Day's excellent textbook on mathematical philosophy. It's clear you do not know what serious (as opposed to populist) philosophers are concerned with. This is no better than judging physics by Fleischmann and Pons' cold fusion work, or judging biology by examining 1960s American perversions of brain surgery.

You've got to look at the real work. And the odds are that more of what's in your computer was developed by a philosopher than by anyone who ever came close to being a "non-philosophical" scientist (whatever that might be).

Comment Re:AI researcher here (Score 2) 455

Expert systems are not intelligent. They're nothing more than a fancy version of Animals. If/then/else isn't even weak AI, and a binary search of an index is just a search. It doesn't mimic an expert, because experts only start with simple diagnostic tools like that. That's the beginning, not the end. Experts know when answers are off and know how to recover - when it's unimportant and when it's absolutely critical. Experts also know how to handle cases never encountered before, because they don't just know a bunch of checklist questions; they know how information relates, and they know the patterns that are generic across all cases, known and unknown. You can't program an Expert System Shell with Category Theory maps, and Prolog isn't going to know what to do with meta-abstraction.
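For anyone who never met it, "Animals" is the old guess-the-animal game: a binary tree of yes/no questions. A hypothetical sketch of why that kind of if/then/else checklist is a search, not expertise:

    def diagnose(answers):
        """answers: yes/no responses to a fixed checklist; this is the whole 'expert'."""
        if answers["has_fever"]:
            if answers["has_rash"]:
                return "possible measles"       # a leaf of the decision tree
            return "possible flu"
        if answers["has_cough"]:
            return "possible cold"
        return "no rule matched"                # off the checklist, the shell is mute

    print(diagnose({"has_fever": True, "has_rash": False, "has_cough": True}))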

Neural Networks are debatable. Fundamentally, a Neural Network is a very large set of multi-input gates. Nothing more. If it's trained, then all you've done is simplified the derivation of the gates. You've not added any intelligence. Self-organizing networks are another beast entirely. These can be argued to be "intelligent", since the human brain is ultimately nothing more than a gigantic conglomerate of gates itself. The only reason you have the illusion of intelligence is that there's self-organizing involved. However, no self-organizing neural net on any computer yet built is so powerful that it can simulate the functioning of a nematode's brain. Strong AI, which is what most non-CS people think of as AI, cannot yet even be described. We have no comprehension of what it is, therefore cannot build it.
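To make the "multi-input gates" point concrete, a toy sketch (mine, not from any toolkit): a single trained neuron is just a gate whose truth table was derived by training rather than written by hand.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])                      # the truth table of an AND gate

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(20):                             # classic perceptron training loop
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += lr * (yi - pred) * xi              # nudge the weights toward the gate
            b += lr * (yi - pred)

    print([int(w @ xi + b > 0) for xi in X])        # [0, 0, 0, 1]: we trained an AND gate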

What the professor is really talking about though, as indicated by the reference to cellular biology, is not AI but ALife. Nothing currently in existence can be called true artificial life, although the Bugs program from Scientific American is a good start. Artificial Life is many orders of magnitude harder than Strong AI. It's not enough to emulate the properties of intelligence, you have to emulate the reason for there needing to be intelligence in the first place. Even those working on Strong AI aren't tackling such self-consistency issues, far too complex for them.

(It's clear that most AI work is incompatible with a self-consistent Strong AI, so I'm inclined to believe the Singularity isn't going to be here for a while. Progress is, as others have noted, somewhere between non-linear and exponential, but even if we assume exponential, it'll be 75-150 years before Strong Artificial Life is within reach, where Strong ALife means Strong AI plus Artificial Life plus self-consistency.)

Comment Re:Why bother? (Score 2) 50

There are lots of pressing problems.

Cyphers, as opposed to codes, have well-defined functions (be it an algorithm or a lookup table) which map the input to the output. The same functions are applied in the same way across the entire input. Unless the functions are such that the output is truly indistinguishable from a random oracle (or, indeed, any other Oracle product), information is exposed, both information about the message and information about the method used to produce the cyphertext. Since randomness can tell you nothing, by definition, the amount of information exposed cannot exceed the information limit proposed by Shannon for a channel whose bandwidth is equal to the non-randomness of the output.
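A back-of-the-envelope sketch of that framing (my own toy code; the estimator is crude and biased on short samples): treat the ciphertext's deviation from uniform randomness as the bandwidth of a side channel, estimated here from byte frequencies.

    from collections import Counter
    from math import log2

    def leak_estimate_bits(ciphertext: bytes) -> float:
        """Rough structure estimate: (8 - empirical entropy per byte) * length."""
        counts = Counter(ciphertext)
        n = len(ciphertext)
        entropy = -sum((c / n) * log2(c / n) for c in counts.values())
        return (8.0 - entropy) * n          # bits of non-randomness, i.e. potential leakage

    # A toy "cypher" that only XORs with a one-byte key leaves plenty of structure:
    plaintext = b"attack at dawn, attack at dawn, attack at dawn"
    ciphertext = bytes(p ^ 0x4B for p in plaintext)
    print(leak_estimate_bits(ciphertext))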

(A channel is a channel is a channel. The rules don't care.)

So, obviously, you want to know how to get at the greatest amount of the unencrypted data that's encoded in that non-randomness, and how you would then actually extract it.

In other words, is there a general purpose function that can do basic, naive cryptanalysis? And what, exactly, can such a function achieve given a channel of N bits and a message of M bits?

In other words, how much non-randomness can a cypher have before you know for certain that there's enough information leakage in some arbitrary cypher for the most naive cryptanalysis possible (excluding brute force, which isn't analytical, and isn't naive since you have to know the cypher) to break it in finite time? (Even if that's longer than the universe is expected to last.)

Is there some function which can take the information leakage rate and the type and complexity of the cypher to produce a half-life of that class of cyphers, where you can expect half of a random selection of cyphers (out of all cyphers with the same characteristics) to be broken at around that estimated half-life point?

If you can do that, then you know how complex you can make your cypher for a competition page, and how simple you can afford to make it when building a TrueCrypt replacement.

Comment Re:Clock -- Time is running out! (Score 1) 50

Damn. I was hoping he was going to say that the solution was written down but the piece of acid-free archival paper had been cut into segments, placed in acid-free envelopes, in turn placed in argon-filled boxes, which in turn were buried at secret locations, with the GPS coordinates for each segment written in encrypted format in the will.

Comment OneCore? (Score 2) 171

*Freddie Mercury impression*

One Core, One System!
The bright neon looks oh-so tacky.
They've screwed it up, it's now worse than wacky!
Oh oh oh, give them some vision!

No true, no false, the GUI will only do a slow waltz
No blood, no vein, MS zombies wanna munch on your brain
No specs, no mission, the code's just some fried chicken!

*Switches to Gandalf*

Nine cores for mortal tasks, doomed to die()
Seven for the Intel lords, in their halls of silicon
Three for the MIPS under the NSA
One for the Dark Horde on their Dark Campus.
One Core to rule them all, One Core to crash them,
One Core to freeze them all and in the darkness mash them!
In the land of Redmond, where the dotnet lies!

Comment Re: Nuclear Power has Dangers (Score 1) 523

They're probably no different from regular battery terminals. Minor metallic taste, nothing special. The taste when wire-cutting with your front teeth is more interesting as you get the plastic overtones. Sniffing molten leaded solder (produces a thick smoke) is also fun. Reminds me a bit of slightly burned cinnamon toast.

I'm not normal, am I?

Comment Americium is preferred to Plutonium (Score 1) 523

It's cheaper, the shielding is lighter, it gives about the same results, and the press doesn't hate it so much.

However, it doesn't much matter which you'd use; you'd get superior results either way, provided things didn't break in the bounce. That was a particularly nasty prang. The yellow flags are out for sure. I wonder if Murray Walker would have predicted it would go smoothly.

The way I would have done it would be to have a radioisotope battery that could run the computers and heaters (if any) but not the instruments or radio. Those should be on a separate power system, running off a battery, although I see no reason why the computer couldn't have an idle mode which consumed minimal power specifically so the surplus could top off that battery.

The reason? The instruments take a lot of power over a relatively short timeframe. Same with the transmitter. That's a very different characteristic from the computers, which probably have a very flat profile. No significant change in power at different times. The computers can also be digesting data between science runs.

Well, that's one reason. The other is you don't want single points of failure. If one power system barfs, say due to a kilometre-long vault and crunch, the other has to be sufficiently useful to get work done. The problem is weight constraints. It's hard to build gas jets that can steer a fridge-freezer through space, but much harder if there's a kitchen sink bolted on. That means less-than-ideal for both power sources, which means if both function properly, you want to match power draw profiles to power deliverable. That reduces sensitivity to demand, which means you can remove a lot of protection needed for mismatched systems.
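A toy power-budget sketch of that matching argument (every number below is invented, purely illustrative): a constant radioisotope source carries the flat computer load directly, while a separate battery absorbs the bursty instrument and radio draw.

    RTG_OUTPUT_W = 30.0     # constant output, flat profile
    COMPUTER_W   = 20.0     # flat draw, fed straight from the RTG
    BATTERY_WH   = 100.0    # separate store for the bursty loads
    BURST_WH     = 30.0     # one short, heavy instrument/radio run

    battery = BATTERY_WH
    for hour in range(24):
        surplus = RTG_OUTPUT_W - COMPUTER_W          # idle-mode surplus tops off the battery
        burst = BURST_WH if hour % 6 == 0 else 0.0   # a science run every six hours
        battery = min(BATTERY_WH, battery + surplus) - burst
        print(f"hour {hour:2d}: battery {battery:6.1f} Wh")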

What we really need is a collaboration with ESA and NASA to produce an "educational game" where you design a probe and lander (ignoring the initial rocket stage) by plugging components into a frame, then dropping the lander on a comet or asteroid with typical (ie: high) component failure rates. Then instead of abstract discussions, we can get an approximation to "build it and see", which is the correct way to engineer.

Comment Seems obvious to me. (Score 1) 213

The Knights Hospitallers (I think; it could have been the Templars) had a fortress that was never conquered. Attackers would be bottlenecked relative to defenders, forever harassed on the flanks, and faced with numerous blind corners.

Simply build a reproduction of this fortress around the White House. They can build a moat around it, if they like. Ringed by an electric fence. Oh, the moat needs sharks with lasers. Any suggestions for shark species?

The great thing about this is that the White House can remain a tourist attraction. Everyone loves castles, and taking blindfolded and handcuffed tourists through the maze of twisty little passages (all alike) would surely be a massive draw. BDSM is big business these days.
