
Comment Re:I wouldn't trust non-professional reviewers (Score 1) 248

There is absolutely no value in having random people review things.

Any sufficiently articulate reviewer can give me very valuable insight that helps me decide whether I'll likely enjoy a science fiction book from an author I've never read, for example.

You might have a point for reviews that are meant to deeply analyze works of high literature. For most books sold on Amazon, what you wrote doesn't apply at all.

Comment Re:Just like any high impact (to the head) sport. (Score 1) 271

If you read the second link in the comment that started this thread, you'll see that

On average, elite male soccer players -- who often use their heads to direct the ball -- had a range of negative changes in white matter architecture compared with a group of competitive swimmers who were unlikely to have repetitive brain trauma

I think they picked the control group from professional athletes because they are more likely to have similar lifestyles. Even so, the authors of the study are cautious and note that

differences in head injury rates, sudden accelerations, or even lifestyle could contribute

So, as MightyMartian said, researchers obviously do their homework; otherwise their papers would be ridiculed during peer review.

Comment Re:No plans for LLVM (Score 2) 102

It's undeniable that microkernels open very cool possibilities, like the ones you mentioned.

But my first point was that, every time someone builds a microkernel that has to compete with the kernels we have today, they end up making all kinds of compromises ("hybrid" approaches) that put all sorts of drivers (network, disk, graphics) back in kernel space. Anything else just slows things down too much, to the point where very few people would want to use those kernels.

And, to be honest, while the kinds of things you mention would be useful, crashes in drivers for mainstream hardware are very rare, so there's little practical reason to go to great lengths to mitigate their effects. Unless, of course, you count gaming graphics cards (their drivers do tend to crash a lot) -- but in that case, trading a lot of performance for the ability to recover from a crash is really not what you want (and if you move the graphics driver out of kernel space, you do lose a lot of performance).

Comment Re:No plans for LLVM (Score 3, Insightful) 102

a kernel mode component can crash the system and leave no trace of what did it. Like pre X MacOS or DOS.

... and Linux, NT, and the Mac OS X kernel (XNU).

NT and the Mac OS X kernels are interesting cases: they started as microkernels, but soon moved on to "hybrid" approaches that keep a lot of drivers inside kernel space.

Everybody knows microkernels are slower. They are more stable. Misbehaving drivers are identified quickly. They usually have fewer issues and the issues they have don't take the whole system down.

That sounds great in theory, but if a disk or network driver crashes on a production server, how much do you care that the rest of the system is still working? These things must not crash, period -- if they do crash, the state of the rest of the system is usually irrelevant.

Comment Re:Without the use of a loop!? (Score 1) 438

you definitely are an idiot.

That's great to know, thanks! I look forward to more varied insults.

An #include influences the linker only in the most trivial sense: it changes the code seen by the compiler, and therefore the compiled code seen by the linker. The point is that there's nothing magical about "stdio.h" that requires one to #include it; everything that can be done with #includes can be done without them. I keep repeating this point because I'm still not sure you understand it.
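To make this concrete, here's the same trivial program written two ways, as two separate source files: once with the header, and once with the one declaration this program actually needs from it:

    /* version1.c -- the usual way */
    #include <stdio.h>
    int main(void) { puts("hello"); return 0; }

    /* version2.c -- the #include replaced by the one declaration this
       program actually needs from it; it compiles and links identically */
    int puts(const char *s);
    int main(void) { puts("hello"); return 0; }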

You at least *did* realize that your previous point about code compiling but not linking is complete and utter nonsense, right?

Comment Re:Without the use of a loop!? (Score 1) 438

I know what it does you idiot.

Ah, so you've finally realized you're wrong, and are now resorting to ad hominem attacks...

That's fine, as long as we're clear that it is possible to write any C program as a one-liner, and #include is just a text-processing directive that doesn't influence the linker.

Comment Re:Without the use of a loop!? (Score 1) 438

One more time: #include does exactly what I said: it just includes a file verbatim (I can't believe you haven't taken the time to check this, after all this time in this discussion). This means that, after you've resolved all #includes, you can put the resulting text on a single line, and the result will be a one-line program. If the original program compiled and linked, then the one-liner also will, by construction.

Now, to the point of missing struct declarations:

If I am trying to link to something that takes a pointer to a struct and I don't include the headers for it, the linker won't work. So, with it the linker succeeds and without it it doesn't.

In your specific example, it will fail either during compilation or during execution, never at link time. That's because the information about return and argument types is gone by the time the linker runs (it might be present in debug sections of the object files, but the linker doesn't care about those). So, if the compiler is happy with the struct declarations, the linker will happily do its job -- even though the result might be undefined and crash during execution.
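Here's a minimal two-file sketch of that (the file and function names are mine, purely for illustration):

    /* sum.c -- defines a function that takes a pointer to a struct */
    struct point { int x, y; };
    int sum(struct point *p) { return p->x + p->y; }

    /* main.c -- no header, no struct declaration anywhere */
    int sum();              /* old-style declaration: argument types unspecified */
    int main(void)
    {
        /* "cc main.c sum.c" compiles and links without complaint: the
           linker only matches the symbol name "sum".  The call below is
           undefined behavior and may crash at run time -- but by then
           the linker has long since finished its job. */
        return sum(0);
    }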

Comment Re:Without the use of a loop!? (Score 1) 438

What happens when your function returns something other than int and you don't declare it first?

What happens is undefined behavior, according to the standard. But now you're changing your story.

In response to my examples (both of which are completely valid and contain no undefined behavior), you wrote nonsense like "It might compile, but it won't link, ergo it's not a valid program". Now you're trying to sound as if you had been talking about general programs all along.

The whole point is that #include does nothing magical that influences the linker. It's *exactly* the same as inserting a file verbatim at the #include point. Which means you don't *need* any #includes in C (in particular, the linker doesn't require them). Ergo[1], it's possible to write any C program in one line, keeping in mind the limit on how long a line a translator is required to accept, which I explained earlier.

Good luck learning C!

That's very funny, you gave me a good laugh!

[1] I can try to sound smart, too :)

Comment Re:Without the use of a loop!? (Score 2) 438

OK, now it's clear that you have absolutely no idea what you're talking about -- you might want to read the C standard, or at least test it on a compiler.

If you're interested in learning, here's a general explanation:

The function declarations usually found in header files just tell the compiler what parameters a function accepts and what its return value is. In C (not C++), these declarations are not required. If a function is called with no declaration in scope, the compiler/interpreter/whatever uses an "implicit declaration" -- most compilers will emit a warning saying that's what they're doing. An implicit declaration assumes that the function returns an int and receives the arguments that were passed in the call, with some promotions: for example, char is promoted to int, float to double, and so on (I don't remember all of the promotion rules, and I can't be bothered to look them up).
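For instance, this is a complete C89 program in which the function is called before any declaration of it is seen (the function name is just an example):

    /* C89: square() is called before any declaration of it appears. */
    int main(void)
    {
        int n = square(5);  /* implicit declaration: "int square()" is
                               assumed; most compilers warn here */
        return n - 25;      /* exit status 0 if the call worked */
    }

    int square(int x)       /* the actual definition, after the call */
    {
        return x * x;
    }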

An #include directive has nothing to do with the linker, except for some non-standard compiler extensions found in some header files (e.g. Visual Studio's #pragma comment(lib, "library.lib"), which tells the linker to link against "library.lib").

Note that some header files have more than just function declarations -- some also define macros (e.g. NULL). If you use them, you *must* include the header or define the macros yourself (e.g., #define NULL (void*)0): macros don't have an "implicit declaration". Of course, you might just expand the macros yourself (e.g., use (void*)0 instead of NULL): in this case, you obviously don't need to include the header.
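For example, here's a complete program with no headers at all; the macro definition and the declaration are supplied by hand (the NULL expansion below is the one implementations commonly use):

    #define NULL ((void*)0)          /* what <stddef.h> would have defined */

    int printf(const char *f, ...);  /* what <stdio.h> would have declared */

    int main(void)
    {
        char *p = NULL;
        printf("%d\n", p ? 1 : 0);   /* prints 0 */
        return 0;
    }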

Comment Re:Without the use of a loop!? (Score 2) 438

It might generate a warning on some compilers, but it's a perfectly valid C program. In fact, since this program uses rand without declaration, it might as well use printf.

Nothing prevents one from doing this, though:

int rand(void); int printf(const char * restrict f, ...); int main() { ten: printf("%c", (rand()%2)?47:92); goto ten; }

(Remove the "restrict" before "f" for C89)

In general, though, ISO C doesn't require translators (the ISO's technical name for C compilers/interpreters) to accept logical source lines longer than a certain limit (509 characters in ISO C89, 4095 in ISO C99), so technically it's debatable whether every C program can be written as a one-liner: you *could* write one, but a compiler could legitimately refuse to accept it.

Comment Re:I Wish (Score 5, Informative) 259

If everything at the quantum level always worked the same way forwards as it does backwards, then entropy would be constant; the universe would be in some kind of steady state and nothing would matter because we wouldn't be here.

That's not true. "Everything at the quantum level always working the same way forwards and backwards" is completely consistent with the second law of thermodynamics ("entropy never decreases"), and completely consistent with the observable universe (barring CP violation). All that's necessary is that the universe started with very low entropy -- like, say, the Big Bang.

See, for example, this excerpt from the Arrow of Time FAQ by cosmologist Sean Carroll:

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it's a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.
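To see concretely how reversible dynamics plus a special initial condition produce an arrow of time, here's a toy simulation (my own sketch, not from the FAQ): particles bounce ballistically in a 1-D box under perfectly time-reversible rules, but because they start bunched in one corner (low entropy), the coarse-grained entropy of their positions only goes up:

    /* Toy model: N particles fly back and forth in a 1-D box [0,1) with
       elastic walls -- deterministic, time-reversible dynamics.  Starting
       them all in the leftmost 10% of the box (a low-entropy state), the
       coarse-grained entropy of their positions rises toward its maximum. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N    10000               /* number of particles */
    #define BINS 10                  /* coarse-graining of the box */

    int main(void)
    {
        static double x[N], v[N];
        int i, step;

        srand(7);
        for (i = 0; i < N; i++) {
            x[i] = 0.1 * rand() / ((double)RAND_MAX + 1.0); /* leftmost 10% */
            v[i] = 2.0 * rand() / RAND_MAX - 1.0;           /* v in [-1, 1] */
        }

        for (step = 0; step <= 100; step++) {
            if (step % 20 == 0) {               /* report coarse-grained entropy */
                int count[BINS] = {0};
                double S = 0.0;
                for (i = 0; i < N; i++)
                    count[(int)(x[i] * BINS)]++;
                for (i = 0; i < BINS; i++)
                    if (count[i] > 0) {
                        double p = (double)count[i] / N;
                        S -= p * log(p);
                    }
                printf("step %3d: entropy %.3f (maximum is %.3f)\n",
                       step, S, log((double)BINS));
            }
            for (i = 0; i < N; i++) {           /* reversible free flight */
                x[i] += 0.01 * v[i];
                if (x[i] < 0.0)  { x[i] = -x[i];             v[i] = -v[i]; }
                if (x[i] >= 1.0) { x[i] = 2.0 - x[i] - 1e-9; v[i] = -v[i]; }
            }
        }
        return 0;
    }

Start the same dynamics from a uniform distribution instead, and the entropy just sits near its maximum: the "arrow" comes entirely from the initial condition.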

Comment Re:mdsolar writes (Score 1) 210

And a legitimate question: is the conversion rate constant?

The decay is exponential, following the formula x(t) = x(0) * (1/2)^(t/h), where h is the half-life, x(t) is the amount left at time t, and x(0) is the initial amount. You can see that at time t = h (that is, after an amount of time equal to the half-life), you have x(h) = x(0) * (1/2)^1; that is, half the initial amount is left.

Note that this is a statistical property. Each atom has a fixed probability of decaying in any given time interval; but if you have a lot of atoms, the behavior of the "population" is fairly predictable.
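Both points can be seen in a small simulation (a sketch of my own; the population size and per-tick probability are arbitrary): each atom independently faces the same fixed decay probability every tick, and the surviving count tracks the (1/2)^(t/h) formula closely:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void)
    {
        long atoms = 100000;    /* initial population (arbitrary) */
        long x0 = atoms;
        double p = 0.01;        /* per-tick decay probability (arbitrary) */
        /* Half-life implied by p: h = ln(2) / -ln(1 - p) ~ 68.97 ticks */
        double h = log(2.0) / -log(1.0 - p);
        int tick;

        srand(1);
        for (tick = 1; tick <= 276; tick++) {
            long survivors = 0, i;
            for (i = 0; i < atoms; i++)
                if (rand() > p * RAND_MAX)      /* this atom did not decay */
                    survivors++;
            atoms = survivors;
            if (tick % 69 == 0)                 /* roughly every half-life */
                printf("tick %3d: %6ld atoms left, formula predicts %6.0f\n",
                       tick, atoms, x0 * pow(0.5, tick / h));
        }
        return 0;
    }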
