Comment Re:Luddites? (Score 1) 1052

we should work on halting population growth first

Overpopulation is not a problem. I'm mystified that anyone still thinks it is. Every first-world country has negative population growth, and they grow only via immigration. By 2050 at the latest, global population will actually start to decline, which might pose an even greater threat.

Comment Re: He proves again... (Score 1) 830

Second, point 1) is completely irrelevant. We're not going to be more or less likely a simulation if 1) is true or false.

Wrong. You clearly don't even understand what Bostrom means by "posthuman", or you wouldn't make such a ridiculous objection.

This also conflates probabilities of events inside our universe with things that are not in our universe. It's another not even wrong thing.

Your objections about inside/outside are complete nonsense.

And third, note that the author conflates "ancestor simulations" with "living in a simulation".

There is no difference.

It seems, heh, "likely" that anything with the computing power to simulate ancestors would also have the computing power to simulate alien universes with alien physics and alien lifeforms.

Yes, and your point is? The point of the proof was to argue how likely it is that we, in our universe, are a simulation. Alien universe simulations aren't relevant.

Comment Re:Actually that paper makes a god argument... (Score 1) 830

Your only objection with any bearing at all is that the human brain may need quantum mechanics, but a) that is pure speculation, and thus not an objection; and b) even if it were true, simulating the quantum mechanics needed only for brains is already acknowledged in Bostrom's argument. Like I said, everything you mentioned is utterly irrelevant.

Comment Re: He proves again... (Score 1) 830

I can already tell you it's not a convincing proof because it's not a proof. A huge gaping flaw is the absence of a measure by which we can compute likelihood.

Irrelevant. Quantifying the precise probability isn't relevant to this type of proof. The proof is a case analysis of an equation with unknown parameters. The result is undeniable: you must accept one of the three cases Bostrom lays out.
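For reference, the case analysis rests on a single expression from Bostrom's paper (reproduced here from memory, with $f_p$ the fraction of civilizations that reach a posthuman stage, $f_I$ the fraction of those that run ancestor-simulations, and $\bar{N}_I$ the average number of simulations each interested civilization runs):

```latex
% Fraction of all observers with human-type experiences who are simulated:
f_{\mathrm{sim}} = \frac{f_p \, f_I \, \bar{N}_I}{f_p \, f_I \, \bar{N}_I + 1}
% The trichotomy: f_sim can only be near 0 if f_p is near 0 (case 1)
% or f_I is near 0 (case 2); otherwise the numerator is enormous
% and f_sim is near 1 (case 3).
```

No value is ever assigned to the parameters; the argument only needs the observation that each case corresponds to one of the three propositions.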

It may not only be missing, it may be mathematically impossible to compute likelihood (say if number of universes with sentient life exceeds the cardinality of the real numbers).

Irrelevant. Plenty of proofs rely on incomputable quantities, like Kolmogorov complexity. For instance, this is why we utilize oracles to explore super-Turing computation.

No, it doesn't follow. You still don't know how many universes have humans naturally appear in them in the first place.

Irrelevant. Only the universes in which they do appear need to be considered.

You also implicitly assume that the only means for sentience to create new universes is via a particular form of simulation.

What does that even mean? A simulation belongs to an equivalence class. No assumptions are made about the equivalence class of any simulation.

Comment Re:It is literally a god argument (Score 1) 830

From a science standpoint arguing that we are living in a computer simulation is no different than arguing god created the universe.

That's a ridiculous claim. The simulation argument is based on some very reasonable assumptions and simple math that no one disputes. Comparable proofs of a deity's existence are nowhere near as convincing.

Comment Re: He proves again... (Score 2) 830

Remind me, what does "proof" mean again in philosophy?

The same thing it means in every other discipline: a logical argument proceeding from assumptions to conclusions. Bostrom's simulation argument is a very convincing proof, and I highly recommend reading it. Abstract:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

Basically, if you accept the proposition that humans will continue to exist long into the future, and you accept that future humans have just as much interest in simulating their ancestors as we have in simulating our ancestors, then we are almost certainly living in a simulation.
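Plugging in numbers makes the trichotomy vivid. A minimal Python sketch (the parameter values are made up for illustration; only the structure of the formula comes from Bostrom's paper):

```python
# Illustrative arithmetic for Bostrom's trichotomy.
# f_p: fraction of civilizations that reach a posthuman stage
# f_i: fraction of posthuman civilizations that run ancestor-simulations
# n:   average number of ancestor-simulations such a civilization runs

def fraction_simulated(f_p: float, f_i: float, n: float) -> float:
    """Fraction of all human-type observers who live in simulations."""
    sims = f_p * f_i * n
    return sims / (sims + 1)

# Even if only 1% of civilizations become posthuman and only 1% of those
# bother to simulate, a million simulations each swamps the one "real" history:
print(fraction_simulated(0.01, 0.01, 10**6))  # ~0.99

# The only escapes are f_p ~ 0 (case 1: extinction before posthumanity)
# or f_i ~ 0 (case 2: posthumans don't simulate); otherwise case 3 follows:
print(fraction_simulated(0.0, 0.01, 10**6))   # 0.0
```

The exact values don't matter; the point is that any non-negligible choice of all three parameters drives the fraction toward 1.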

Comment Algorithms aren't biased (Score 1) 571

The results of those interactions also demonstrate male favoritism. It took Apple more than four years to fix Siri's responses to questions about abortion services, and yet the company didn't seem to have any problem programming Siri to search for prostitutes and Viagra.

This demonstrates a clear and fundamental misunderstanding of how algorithms work. Siri's responses are likely conditioned based on what people talk about most when "abortion" comes up, and surprise surprise, rabid pro-life people love talking about the evils of abortion far more than pro-choice people. Your disappointment should be directed at your fellow non-programmer humans, not programmers. "Fixing" Siri would basically amount to adding more contextual discernment, but this is hardly easy.

Comment Re:Strange flamebait article (Score 1) 354

L4 and QNX are nice, but do you have an example of their use outside of the embedded space?

L4Linux, i.e. Linux running on L4 as a guest, came out before Xen and paravirtualization were even a thing, and its overhead was demonstrably lower than Xen's. So you could run L4 on a desktop with Linux as a guest. Maybe you still can, though that Linux kernel is probably quite outdated by now.

Context-switching overhead has always been the argument against microkernels

It's substantially lower than that of hypervisors, which are now everywhere.

So basically, despite their fancy message passing design, to get performance they have to lump everything together into gigantic monolithic applications, albeit running in userspace. Doesn't sound like a great proof-of-principle design to me.

QNX was and is typically deployed in embedded systems where resource constraints dominate. These are domains where you'd use something like the lwIP library embedded directly in your application to get a networking stack. These certainly aren't representative of desktop or server systems, which is presumably what you're asking about.

Furthermore, there's no question that achieving high performance in a decomposed design with lots of isolation boundaries is harder, particularly if you also want security or other properties, which is where researchers mostly focus; but that problem was solved at least 12 years ago. If a final release needs to squeeze out an extra 2-5% of throughput, you can flip a compile-time option to link everything monolithically, but that doesn't mean you should design it monolithically by default.

Microkernel "performance issues" are largely a myth. The very first microkernels in the 80s had real problems stemming from their design, and simple profiling identified IPC as the bottleneck. Liedtke then built the L3 microkernel, which solved that problem, and there has been no informed performance complaint against microkernels since. The myth persists because of that initial impression, and because developers look at the structure of such systems and simply say, "well, obviously this will be much slower." Not very scientific. That early research is also why microkernel papers focus so heavily on IPC; it's just science in action.

Finally, the KeyKOS operating system was a high-security microkernel design that was widely deployed in commercial timesharing systems, and even early ATMs, back in the 70s and 80s. It was proprietary and unpublished until later, and included hard disk drivers in the kernel because its core design included orthogonal persistence and the verifiable security properties depended on an audited disk driver. Other than that it was a legit microkernel and hosted an optional full POSIX guest.

Comment Re:Strange flamebait article (Score 3, Informative) 354

Show me a working performant microkernel (no, XNU is not a microkernel) and I'll concede you have a point. Until then, it is just useless chatter.

L4 and QNX are working, performant microkernels that have seen plenty of real-world deployment. Not on the desktop, granted, but performant microkernels aren't some mythical unicorn; they're everywhere and have been for over a decade.

Comment Re:To be fair, the Feds seemed to be pretty thorou (Score 2) 67

This is the very definition of circumstantial. It's enough to justify further investigation, at best.

You haven't presented evidence that he made those posts with the phone he was seen purchasing. For all you know, it could have been lost or stolen right after he left Walmart, or lent to someone, or it might not even have been his phone at all and he's just unlucky. This is why circumstantial evidence isn't nearly sufficient for conviction. Coincidences happen all the time.

If the phones that made those posts were not in his possession when they searched his things, all the evidence you've presented so far means nothing.
