
Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

There is no such thing as a hybrid. You are either on fire, or you are not.

That is such an idiotic statement that I won't even bother continuing the discussion. The link is to the Wikipedia page, and it is Linus himself speaking about the mix of kernel architectures.

The people who push systemd seem to have serious issues with reality. PulseAudio is a brain-damaged piece of software and one of the first things to be removed in any distribution.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

There is no reason on earth device drivers need to live in kernel space either. Performance arguments are simply false, and this point has been disproven many times over.

Actually, the performance arguments are the only arguments. Everyone agrees that separating the various components is desirable for system stability; yet on machines like home computers it has simply not been practical. Only with the relative advance of hardware can a microkernel actually get close to a monolithic kernel in performance. Address-space separation always comes at the cost of inter-process communication and context switching, which necessarily have performance consequences. It is this performance cost that made Windows NT, originally designed on a microkernel architecture, move towards a hybrid kernel. Even so, Windows engineers have struggled to move back towards a microkernel; Vista, I believe, was the first version to run some drivers in user mode.

Of course arguments and hard data aren't meaningful in these discussions, and monolithic has clearly won in terms of marketshare. Once again, why fight the tide of history instead of being more constructive? You are going to lose.

Actually the monolithic kernel has already lost the marketshare battle. There are far more Mac and Windows installations than all Linux distributions combined. These are all microkernel or hybrid-kernel architectures.

You make an error in thinking that history has already been written. Unless you are a prophet of some sort, there is no way you can tell us what the tide of history is at this moment. Systemd may indeed be the end of Linux in the server space, as serious companies are already migrating to the BSDs. How large this migration is we won't know until the statistics are taken after the fact. I'm old enough to remember the commercials for "New Coke", which was vaunted as the formula that would crush the competition. In reality the competition destroyed Coca-Cola. Systemd is all about marketing and nothing about engineering. It too will fail and be replaced, just as PulseAudio is by ALSA.

Comment Re:Lennart, do you listen to sysadmins? (Score 1) 551

Actually, no. The modern Linux kernel is far more modular than it was in the beginning. Now you have kernel modules that can be loaded and unloaded at runtime, which wasn't possible in the past. In the end the real debate was HOW to accomplish the modularity, not whether to make the kernel modular at all. The modern Linux kernel is not as monolithic as it used to be, and has absorbed many features of the microkernel.

You can run the kernel with any number of modules according to the functionality you need, with various levels of dependency. There are even distributions that strip out all of these modules for embedded devices. You still have a kernel that can speak to the hardware it supports even without the other modules.

Systemd is nothing like this. You cannot run systemd without journald, for instance, not simply because of a dependency but because of bad design. There is no reason on earth that an init system should need a specific journal daemon, and yet you cannot run systemd without journald. To use any other logging software you have to consume the output of journald. Thus journald is not separate from systemd in any meaningful sense.

And who in their right mind makes a logging daemon write binary files? That is just brain-dead. The first thing you look at on a machine that has failed in some way is the log, and the tool you use to read that log almost always runs on a machine different from the one you are diagnosing [because that one is broken, duh]. A technician will use the tools he has on hand, which is sometimes only a text editor, or even just cat. To expect someone to install another piece of software on their machine just so they can see what happened to the server IS JUST FUCKING INSANE.

Comment Re:Bloat (Score 2) 104

You perhaps know that one of the reasons Slashdot itself (one of the major tech sites on the internet) doesn't fully support Unicode is not just developer laziness. Gmail until recently also had difficulties. DNS likewise has all sorts of trouble with the Russian 'а' (Cyrillic) versus the ASCII 'a'. Just paging through several tables of glyph data to draw the right symbol is not going to happen without some cost.
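This homograph problem is easy to see in a few lines of Python (the hostnames below are purely illustrative):

```python
# The Cyrillic 'а' (U+0430) and the ASCII 'a' (U+0061) look identical
# in many fonts but are entirely different characters.
ascii_a = "a"          # U+0061 LATIN SMALL LETTER A
cyrillic_a = "\u0430"  # U+0430 CYRILLIC SMALL LETTER A

print(ascii_a == cyrillic_a)                    # False: different code points
print(hex(ord(ascii_a)), hex(ord(cyrillic_a)))  # 0x61 0x430

# Two identical-looking hostnames that are actually different domains.
host_plain = "apple.example"
host_spoof = "\u0430pple.example"
print(host_plain == host_spoof)  # False
```

Which is exactly why DNS had to grow an entire encoding layer (Punycode) rather than treat such strings as "just text".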

"Displaying text and pictures" is not so simple as it may sound. Do you remember the JPEG flaw that was used as an exploit in Internet Explorer?

I'm not against supporting all sorts of character sets, but we can't pretend that support comes without a price, and potentially without several dangers.

If Wingdings fonts make my computer run as slow as molasses and weaken its security, then they are simply a flaw and not a feature. If our beloved web-browser programmers spend more time implementing emoji than web standards, we have a problem. If they can get it to work without destroying fundamental functionality, I don't really care.

Comment Bloat (Score 2) 104

Another reason browsers are way too bloated. This stuff does not come for free, not to mention the possible security implications. What happens when a malformed emoji is put in the address field? What about in the preferences? What about in an HTTP header?

Seriously, some features should just not be implemented, just like kids should not be given everything they ask for. Not everything you want is good for you, nor good for the internet.

And get off my lawn.

Comment Re:Lookup tables are faster and more accurate (Score 1) 226

True, the table is a question of space/accuracy trade-off.

The correction process is in the interpolation, which is why I included the additional links in the same thread.

For instance, with simple linear interpolation one can interpolate at an arbitrary \epsilon between two table values, and repeating this brings us closer to a fixed point. The book I linked to, as well as the article, gives many other methods of interpolation. It is this interpolation that provides the increased accuracy.
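As a minimal sketch in Python (the choice of log as the tabulated function and the 0.01 spacing are assumptions for illustration), linear interpolation between two adjacent table entries looks like this:

```python
import math

# Two adjacent entries from a hypothetical table of natural logs,
# spaced 0.01 apart.
x0, x1 = 2.00, 2.01
y0, y1 = math.log(x0), math.log(x1)

def lerp(x):
    """Linearly interpolate log(x) for x0 <= x <= x1."""
    t = (x - x0) / (x1 - x0)   # fractional position between the grid points
    return y0 + t * (y1 - y0)

x = 2.0037                     # an arbitrary epsilon past x0
print(lerp(x), math.log(x))    # the two agree to about five decimal places
```

With a 0.01 spacing the interpolation error is on the order of the spacing squared, which is where the extra digits of precision come from.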

The article I linked to states that the method the author employs shows a noticeable gain over recalculating the value [though it is not the same function under discussion]. As with most algorithms, it depends a lot on what matters more: space or time.

Comment Re:Lookup tables are faster and more accurate (Score 2) 226

For pow(a,b), [a,b real numbers], you are essentially calculating:

  a^b = (e^(log a))^b = e^(b * log a), i.e. pow(a, b) = pow(e, b * log(a)), where e is the base of the natural logarithm.

What you have in your table are the values for e^x and log(x), like any good book of logarithms from ancient times, with precision according to your needs. For quick lookup you can even index the mantissa in a B-tree if your table is huge.

Then it becomes very quick:

step 1: look up log(a) in the table, interpolating if needed.
step 2: calculate b * (the value from step 1).
step 3: look up e^x, where x is the value from step 2, interpolating if needed.
step 4: profit! You now have your result.

And as a bonus, you are sure the result is within the precision of your table immediately, within the error of your interpolation.
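The four steps can be sketched in Python; the table ranges and the 0.001 spacing are assumptions chosen just for the example:

```python
import math

STEP = 0.001
# Precomputed tables: log(x) over [1, 10] and e^x over [0, 25].
LOG_TABLE = [math.log(1.0 + i * STEP) for i in range(9001)]
EXP_TABLE = [math.exp(i * STEP) for i in range(25001)]

def lookup(table, x, x0):
    """Linear interpolation into a uniformly spaced table starting at x0."""
    i = int((x - x0) / STEP)           # index of the grid point at or below x
    t = (x - x0) / STEP - i            # fractional position between grid points
    return table[i] + t * (table[i + 1] - table[i])

def table_pow(a, b):
    """pow(a, b) via the table method, for a in [1, 10) and b*log(a) in [0, 25)."""
    log_a = lookup(LOG_TABLE, a, 1.0)  # step 1: look up log(a)
    x = b * log_a                      # step 2: one multiplication
    return lookup(EXP_TABLE, x, 0.0)   # step 3: look up e^x

print(table_pow(2.0, 10.0))  # close to 1024
```

No exponentiation routine is ever called: the whole computation is two interpolated lookups and one multiply, with the error bounded by the table spacing.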

Note that interpolation for exp(x) is quite fast. There are also some exotic methods out there for interpolating exp(x) and log(x), as per this abstract, which are quite efficient if you need high precision. For 10-digit precision you could easily fit both tables into 8k.

Comment Re:Lookup tables are faster and more accurate (Score 4, Interesting) 226

In what is perhaps a bit of historical irony, even for humans a lookup table is faster and more precise than calculating by formula by hand. That is why books of logarithms were published. Using interpolation you can even stretch the precision out to several more digits. With a table of values in memory you can also narrow down the input to Newton's method and calculate any differentiable function very quickly to arbitrary precision. For some functions the linear approximation is so close that you converge in just a few cycles.
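A minimal sketch of the table-seeded Newton idea, using sqrt as the differentiable function (the coarse integer-indexed table is an assumption for illustration):

```python
import math

# Coarse seed table: sqrt(i) for integer i in [1, 100].
SEED = {i: math.sqrt(i) for i in range(1, 101)}

def table_sqrt(y, iters=3):
    """Approximate sqrt(y) for 1 <= y < 100 via a table seed plus Newton steps."""
    x = SEED[int(y)]            # the table gives a starting point already close
    for _ in range(iters):      # Newton: x <- (x + y/x) / 2, quadratic convergence
        x = 0.5 * (x + y / x)
    return x

print(table_sqrt(42.7), math.sqrt(42.7))  # the two agree to machine precision
```

Because the seed is already within a percent or two of the answer, three iterations are more than enough; without the table you would need extra iterations just to get into the neighborhood.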

Even for most trigonometric functions there is a simple table from which the angle-addition formulas are used to get the other values [an old example].

Given the size of most operating systems, where 8k of RAM is hardly noticed (most GIFs are larger than this), I am actually quite surprised that the lookup-table method is not used more. It would seem like one of the first things to keep in cache next to your ALU.
