Comment Re:This is not a photon drive (Score 1) 480

Those conjectures are based on the author's explanations of the mechanism, which we already know to be largely bunk.

Compared to the momentum actually imparted by 1 kW worth of photons, which current physics suggests would be the only source of thrust, the measured force is far larger. Hence, fairly efficient by comparison.

Finally, note that the one experiment that got close to one newton per kilowatt was not done in a vacuum.

The NASA tests measured the same force in both vacuum and non-vacuum environments. Any results from China are suspect since falsification is much more rampant there.
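For scale: a perfectly collimated photon beam's thrust is just its power divided by the speed of light, so 1 kW of photons buys only a few micronewtons. A quick back-of-the-envelope check:

```python
# Thrust from pure photon emission: F = P / c.
c = 299_792_458.0          # speed of light, m/s
P = 1_000.0                # beam power, W (the 1 kW figure above)
F = P / c                  # thrust in newtons
print(f"{F * 1e6:.2f} uN per kW")   # about 3.34 uN
```

Any measured force well above a few micronewtons per kilowatt is therefore far beyond what photon emission alone can explain.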

Comment Re:This is not a photon drive (Score 1) 480

They put in metric ton-loads of energy and measure a very small effect. They say they will need to increase the efficiency by many orders of magnitude to create a practical device. I say they probably made a mistake somewhere and the tiny effect they measured is either noise or due to something else they haven't yet accounted for.

They didn't actually put in that much energy compared to the thrust they measured. If the tiny effect is what you're worried about, then a proper metric ton-load of energy would immediately point out any error.

Comment Re:You do not discharge anger from engaging in it (Score 1) 58

The experiment excluded a certain class of catharsis theories. Fortunately, it's hardly the only such experiment. Catharsis theory has basically no evidence supporting it at this point, and enough evidence against it that it's pretty safe to say that indulging your anger does not "expend it" in any meaningful sense.

Comment Re:ignorant hypocrites (Score 2) 347

You now have min, max, and average times for a maze of that size/complexity

Estimating the size/complexity of a software project is exactly where all the error lies. The original quote was correct that it's like solving a maze; you just assumed they were talking about mazes on paper whose size and complexity you can estimate at a glance. No, this is a full-sized maze that you walk through with no idea how deep or complex it is.
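To make the point concrete, here is a toy Monte Carlo sketch (all numbers invented): historical min/max/average solve times only predict the next maze if it is drawn from the same size distribution, and a project whose depth you cannot see from the entrance breaks that assumption.

```python
import random

random.seed(1)

def solve_time(depth):
    # Solve time grows with the maze's unseen depth, plus noise.
    return depth * random.uniform(0.5, 2.0)

# A history of small mazes gives a tidy-looking average...
small = [solve_time(random.randint(5, 10)) for _ in range(1000)]
avg_estimate = sum(small) / len(small)

# ...but the maze you were actually handed is twenty times deeper.
big = solve_time(depth=200)
```

The historical average wildly underestimates the real maze; no amount of past data helps when the size itself is the unknown.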

Comment Re:Did he read it? (Score 1) 249

It does not claim (as I understand it) to represent every scenario, merely a special case of a specific scenario. Explicitly, it requires the organism to have enough intelligence to remember what happened in previous games, so a bacterium without memory is not covered by this model. The strategy requires that multiple rounds be played.

Irrelevant. Genes that survive are the "memory" and successive generations are the rounds. If cooperation were an optimal solution to the iterated prisoner's dilemma, then it would explain the emergence of cooperation in evolution via natural selection.
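The reply above maps genes to memory and generations to rounds; the iterated game itself is easy to sketch. A minimal simulation with the standard 3/5/1/0 payoffs (strategy and function names are mine, not from any cited paper):

```python
# Iterated prisoner's dilemma: PAYOFF[(my_move, their_move)] = (my_pts, their_pts).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opp_history[-1] if opp_history else 'C'

def always_defect(opp_history):
    return 'D'

def play(s1, s2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)      # each strategy sees the opponent's history
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection grinds both down
```

Over repeated rounds, a population of cooperators outscores mutual defectors, which is exactly the mechanism the reply invokes for natural selection.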

Of course, these game theory papers are searching for globally optimal solutions, but evolution via natural selection does not necessarily find such points. If the amount of energy required to reach the global maximum is high, selection is quite "happy" to settle in a local maximum. This might just be such a case.
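The local-vs-global point can be illustrated with greedy hill climbing on a made-up fitness landscape (all numbers hypothetical): a climber that only accepts improvements settles on the nearer, lower peak because it cannot cross the valley between them.

```python
import random

def fitness(x):
    # Local peak at x=2 (height 1), global peak at x=8 (height 2),
    # separated by a zero-fitness valley.
    return max(0.0, 1 - (x - 2) ** 2) + max(0.0, 2 - (x - 8) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):   # greedy: improvements only
            x = candidate
    return x

random.seed(0)
peak = hill_climb(x=1.5)   # starts in the basin of the local peak
# Drifts to x = 2 and stays: it never crosses the valley to reach x = 8.
```

Selection without a way to pay the "energy cost" of descending through the valley behaves the same way.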

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

That is such an idiotic statement that I won't even bother continuing the discussion. This link is the wikipedia page.

Perhaps you should actually read the link you specified. Linus himself describes the term "hybrid" kernel as simple marketing, which is exactly what I said.

Pulseaudio is a brain damaged [beastwithin.org] piece of software and one of the first things to be removed in any distribution.

And yet it's not removed, seeing as it's still around and still quite popular.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

Yet for machines like home computers it is simply not possible. It is only with the relative advance of hardware that the microkernel can actually get close to a monolithic kernel in terms of performance.

This is not true. The L4 microkernel ran Linux as a guest OS with only 5% overhead. That work was the birth of hypervisors like Xen, which are really just a kind of microkernel. Virtualization is everywhere now, and no one seems to be complaining about the overhead. If a virtualized guest runs with so little overhead, the microkernel's own native services run at least as fast.

It's perhaps too easy to introduce performance problems via a poor choice of decomposition, but that doesn't entail that decomposed systems must necessarily perform poorly.

It is this performance consequence that made Windows NT, originally designed on a microkernel architecture, move towards a hybrid kernel.

Poorly designed systems perform poorly. The NT kernel might have been a nice kernel design from some people's standpoint, but then people thought the Mach kernel was a good microkernel too. Both opinions are incorrect.

There are far more Mac and Windows installations than all Linux distributions combined. These are all microkernel or hybrid-kernel architectures.

There is no such thing as a hybrid. You are either on fire, or you are not. You are either a microkernel, or you are a monolithic kernel. Macs may use the Mach microkernel, but it's a poor kernel, and the BSD subsystem implements most of the system calls, all of which execute in kernel space. This is a monolithic kernel that has a poorly designed microkernel as its ancestor.

Systemd is all about marketing, and nothing about engineering. It too will fail and be replaced, just like PulseAudio by ALSA.

Except PulseAudio hasn't been replaced, it's still used by most distributions.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

In the end the real debate was HOW to accomplish the modularity, not whether to make the kernel modular or not

Right, so either make the kernel modular via isolation, which provides fault tolerance in your most critical piece of code, or make it monolithic, i.e., everything lives in the kernel address space.

There is no reason on earth that an init system would need a specific journal daemon

There is no reason on earth device drivers need to live in kernel space either. Performance arguments are simply false, and this point has been disproven many times over.

Of course, arguments and hard data aren't meaningful in these discussions, and monolithic has clearly won in terms of market share. Once again, why fight the tide of history instead of being more constructive? You are going to lose.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

But systemd isn't actually monolithic, it's monolithic by fiat because the daemons refuse to play well with others, which (again) is against the Unix Way.

Microkernels are arguably more "Unixy" than monolithic kernels. Each device driver is simply a process that has a well-defined stream interface that can be piped to any other process, not just the kernel itself. Microkernels are Unix taken to the extreme.
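As a toy illustration of that idea (not how any real microkernel is implemented): a "driver" can be modeled as an ordinary process whose entire interface is a byte stream on stdin/stdout, so any client can talk to it through a pipe. The inline script standing in for the driver is hypothetical.

```python
import subprocess
import sys

# A "driver" as a plain user-space process with a stream interface.
# The inline script upper-cases whatever "device data" it receives,
# standing in for a real driver's request/response protocol.
driver = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    sys.stdout.write(line.upper())\n"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Any process, not just the kernel, can pipe requests to the driver.
out, _ = driver.communicate("read sector 42\n")
print(out)
```

Because the interface is just a stream, the driver composes with other processes exactly like any Unix pipeline stage.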

So again, this argument failed for microkernels, so why should it succeed here? Perhaps some core functionality, not just system calls, should also be in some monolithic service and not a set of composable subsystems.

It's taken until recently for Minix to become even vaguely usable as anything other than a learning operating system, it's lagged behind everything else in terms of features always.

Tanenbaum and others weren't arguing that Minix was the microkernel OS that should be selected, merely that some microkernel should be preferred over monolithic kernels. The high performance L3 and EROS microkernels both existed at the time, albeit in early stages like Linux.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

Yes, that is an excellent reason to add even more vulnerability vectors!

Granted, but more granular fault isolation wasn't convincing when Tanenbaum and Linus were arguing microkernels vs. monolithic kernels, so why should it be convincing now? I'm certain your other complaints are fixable within the current framework, assuming there aren't other complicating issues.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

I have no opinion on systemd. However, more granular fault isolation wasn't convincing when Tanenbaum and Linus were arguing microkernels vs. monolithic kernels, so why should it be convincing now?

Every condemnation leveled against the monolithic systemd is just a rehashed argument from the monolithic vs. microkernel debate. Monolithic kernels clearly dominate, and chances are systemd will similarly dominate, so instead of wasting your time battling the tide of history, perhaps you should be more constructive.
