Comment Re:It's all a simulation (Score 1) 167

As a concrete example: suppose P != NP. Then no simulation can solve NP-hard problems in polynomial outside time, so most simulations will share that constraint with the parent universe (where we reside) and also be unable to solve NP-hard problems in polynomial inside time. (A simulation that did have P = NP inside would cost exponential outside time per polynomial inside step.)
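
To make the inside/outside cost gap concrete, here's a toy sketch (the SAT encoding and the "oracle" framing are my own illustration, not anything rigorous): a simulated universe whose physics includes a SAT oracle that answers in one inside-time step, which the simulator, stuck with P != NP, has to brute-force.

    from itertools import product

    # Toy model: one inside-time step (the oracle answering) costs the
    # simulator up to 2^n outside-time steps of brute force.
    def sat_oracle(clauses, n_vars):
        outside_steps = 0
        for bits in product((False, True), repeat=n_vars):
            outside_steps += 1
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True, outside_steps
        return False, outside_steps

    # (x1 or not x2) and (x2 or x3), encoded DIMACS-style as signed ints.
    sat, cost = sat_oracle([(1, -2), (2, 3)], n_vars=3)
    print(f"inside-time steps: 1, outside-time steps: {cost} (worst case 2^3)")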

The "hey, this universe looks like a simulation too!" argument would be: every simulation we make has P!=NP. Our universe has P!=NP... sure looks like what a simulation would be like. Obviously, every simulation's rules is heavily correlated with the rules of the parent universe. But that doesn't mean that the parent universe is also a simulation.

To take this example with respect to time: it might be that time is fuzzy in our universe. If so, that puts a constraint on how "sharp" time in a simulation can be without excessive runtime penalties (like the polynomial-inside-time = exponential-outside-time example above, for any simulation with P = NP inside). But if time is fuzzy inside and fuzzy outside, that doesn't mean the outside is also a simulation, at least not unless you've controlled for the fact that every simulation in this universe will be heavily biased towards resembling this universe.

Now, the outside (our universe) might seem to be like the inside (a simulation) in surprising ways, e.g. quantum-mechanical weirdness looking like lazy evaluation. But mathematical patterns can persist across seemingly disparate structures. So it may be, say, that it's easier to lazy-evaluate small systems (like simulations in RAM) because it's hard to change large amounts of information at once, and it's hard to change large amounts of information at once because that's how our universe works.

In a way, not taking this correlation into account when speculating about whether the universe is a simulation begs the question.

Comment Re:Rich are winning class war [Re: Bull] (Score 1) 644

Higher up in the thread, someone mentioned the Butlerian Jihad. In light of the quote:

Once we get there, what reason is there to have most of us sweaty, non-machine-owning meatbags around?

we might as well ask what reason there would be for the completely automated AIs to have even the machine-owning meatbags around. What purpose do the Titans, err, the .1% serve once one of them invents an Omnius?

Comment Re:Arrow of time (Score 1) 119

I'd imagine the most likely explanation is that the statistical probability of event x happening in the future given that y has happened in the past is not the same as the probability of y happening in the future given that x has happened in the past; but at very small scales, the difference is too small to see at any small time delta.

If so, try speeding up the video or, yes, looking at a higher level.
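
A quick toy illustration of that (the drift value and scales are invented purely to show the effect): a random walk with a tiny per-step bias looks time-symmetric frame by frame, but "speeding up the video" makes the arrow obvious once the drift (~2*DRIFT*n) outgrows the noise (~sqrt(n)).

    import random

    DRIFT = 0.01  # Tiny per-step time asymmetry; made up for illustration.

    def displacement(n_steps):
        # Random walk with a slight forward bias: the per-step "arrow".
        return sum(1 if random.random() < 0.5 + DRIFT else -1
                   for _ in range(n_steps))

    # At one step the bias is invisible next to the noise; at many steps
    # the drift dominates.
    for n in (1, 100, 10_000, 100_000):
        mean = sum(displacement(n) for _ in range(100)) / 100
        print(f"{n:>7} steps: mean {mean:8.1f} vs noise ~ {n ** 0.5:7.0f}")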

Comment Re:He does have a point... (Score 1) 251

I think, and my thoughts cross the barrier into the synapses of the machine, just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams, the sensibility of the machine invades the periphery of my consciousness: dark, rigid, cold, alien. Evolution is at work here, but just what is evolving remains to be seen.

—Commissioner Pravin Lal,
“Man and Machine”

Comment Re:I agree - AI's strength is with details (Score 1) 110

The way it seems to go for game AI is generally:

- Minimax-type AI has spectacularly good micro but sucks at macro. E.g. chess AIs, or see minimax used on RTSes - quote: "RTMM plays perfect short term micro-scale game, but plays a very bad high-level (long term) strategy ..." (There's a minimal minimax sketch after this list.)
- UCT-type AI plays somewhere between consistently poor and consistently average on both macro and micro; see e.g. Go programs prior to AlphaGo.
- Neural-net AI has good macro but sucks at micro. See AlphaGo: as long as it could play a death-of-a-thousand-cuts game against Sedol, it won; but when Sedol forced it into a tactical trap in game 4, it failed badly. And from the TD-Gammon article on Wikipedia: "TD-Gammon's strengths and weaknesses were the opposite of symbolic artificial intelligence programs and most computer software in general: it was good at matters that require an intuitive "feel", but bad at systematic analysis." A problem with neural-net AI is that it has to be designed to fit the problem: AlphaGo used a UCT hybrid with convolutional neural networks, while TD-Gammon used temporal difference learning.
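
The minimal minimax sketch promised above - negamax with memoization on the toy game of Nim (the game choice and code are my own illustration). It shows the micro/macro trade-off: exhaustive search plays the tactics perfectly, but the cost blows up with depth, which is exactly the long-horizon weakness.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def negamax(piles):
        # Returns +1 if the player to move can force a win (last stone
        # taken wins), -1 otherwise. Perfect "micro" by exhaustive search;
        # the state space, and hence the cost, explodes with pile sizes.
        moves = [(i, take) for i, n in enumerate(piles)
                 for take in range(1, n + 1)]
        if not moves:
            return -1  # No stones left: the previous player took the last one.
        best = -1
        for i, take in moves:
            child = list(piles)
            child[i] -= take
            best = max(best, -negamax(tuple(sorted(child))))
        return best

    # Nim with piles (1, 3, 5) has a nonzero nim-sum, so the first player
    # should win; the search confirms it.
    print(negamax((1, 3, 5)))  # -> 1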

Humans are also much better at figuring out a game without being told how it works, or at constructing models for situations where trial and error is out of the question; this probably contributes to why we're not seeing fire-and-forget AI for, say, governance or management. There's no rulebook for that kind of "game", and a neural-net AI can't train itself by self-play unless it knows what the rules are to begin with.

Comment Re:Natrual Selection for people and AIs (Score 1) 74

This is an important point. In game-theory terms, behaving ethically might not be the Nash equilibrium, but it's very much an evolutionarily stable strategy. Since it's an ESS that provides greater utility than the Homo Economicus model, people are honest and ethical: the Homo Economicuses either get detected and ostracized (in small numbers) or outcompeted (as separate populations).
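
A toy version of that dynamic (all payoff numbers and the detection probability are invented for illustration): replicator dynamics where defectors get caught and shunned.

    # "Ethical" players cooperate; Homo Economicus defects, but is detected
    # with probability P_DETECT and then ostracized (earns nothing).
    P_DETECT = 0.6
    COOP, SUCKER, TEMPTATION, MUTUAL_DEFECT = 3.0, 0.0, 5.0, 1.0

    def fitness(ethical_share):
        f_ethical = ethical_share * COOP + (1 - ethical_share) * SUCKER
        raw = ethical_share * TEMPTATION + (1 - ethical_share) * MUTUAL_DEFECT
        f_economicus = (1 - P_DETECT) * raw  # Detected defectors get zero.
        return f_ethical, f_economicus

    x = 0.5  # Initial share of ethical players.
    for _ in range(200):
        f_e, f_d = fitness(x)
        x += 0.5 * x * (1 - x) * (f_e - f_d)  # Discrete replicator step.
    print(f"ethical share after 200 generations: {x:.2f}")  # -> near 1.00

With these made-up numbers, honesty takes over from any starting share above roughly 0.29; below that, the defectors win. That's the ESS picture: stable against invasion once established, not necessarily the only equilibrium.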
