Doesn't make any of it "incredible."
I dunno, I find most anti-science hard to believe!
You didn't upgrade systemd. You upgraded the systemd package. You won't actually start using the new version of systemd until you do a reboot.
No, that's completely wrong. The new version will run if the package upgrade script tells the daemon to re-exec itself. Which, at least in the case of RHEL7, it does.
What would you propose as a better alternative to this idiom in a language that lacks exceptions:
I propose this: namely, using variables to keep track of the state of resources, and then cleaning up based on the values of those variables. In my experience this is much less error-prone than the "goto" equivalent - for example, reordering the code is much less likely to break the cleanup.
Ethernet packets do not have error correction, just error detection.
Wrong - Gigabit ethernet (over copper) does have a form of error correction.
Adding a registry entry to remap keys is pretty trivial, too.
You need to be an administrator to do that. That makes it pretty non-trivial.
It would, except that users having Admin access is much more common on Windows systems. (Being an Administrator on Windows does not (in theory, at least) have the complete "game over" privileges that "root" traditionally does on Unix-based systems, so there are still further privilege levels to be escalated to.)
is running a different OS which doesn't treat Ctrl+Alt+Del in a special way
Now you're suggesting what, exactly? That the attacker is going to throw in a Linux live CD, boot it, and run his 'fake login screen' that looks like the usual Windows screen?
OK... yes, I guess that is a theoretically possible attack, although you'd probably get caught as soon as the user isn't actually able to log in and IT gets called in...
Why would IT get called in? After the user's entered their password, you just display a simulated BSOD and then reboot into the genuine OS; no user will be remotely surprised.
Deliberately conflating, but not confused.
It's hard to tell the difference from here
I can trivially run a program to throw up a screen that looks like the login screen on a PC at work. TRIVIALLY.
Adding a registry entry to remap keys is pretty trivial, too... as, for that matter, is running a different OS which doesn't treat Ctrl+Alt+Del in a special way! Thus any extra security provided is minimal. Which is fine - as you say, security doesn't have to be perfect in order to be useful - but in my view overselling the effectiveness of a measure is counterproductive.
Nobody here is arguing ctrl-alt-delete is some magical super thing,
Alas that is exactly what Microsoft claimed for years (possibly still claim?)...
You aren't going to be tampering with or installing ANY of that from userland.
I think you're confusing the user vs administrator distinction with the userland-vs-kernel-mode distinction... but never mind...
And if you have root... you can just install a keylogger and be done with it. Why bother with dorky fake lock screens?
What I'm saying is that the "Ctrl+Alt+Del protects your password" claim is overblown; the suggestions you give only amplify that, as they are even more ways to circumvent it...
I think you possibly mean ad nauseam?
You're tricking yourself into security theater. You can't intercept an actual ctrl-alt-del, but you can read the ctrl and alt keys, and just unlock your fake lock a couple seconds later.
This. Or the fact that there are registry entries that allow remapping of any key to any other, including (as far as I remember) the Ctrl, Alt and Del keys. The "security" of Ctrl+Alt+Del has always been over-hyped.
It could be a macro, but most coding conventions require macros that can't be used as if they were functions to be all-caps.
Or x could be an array... in which case the called function can modify the caller's variable. The point you're making is valid, but C isn't 100% consistent in this regard.
IIRC, pascal begin/end are not optional.
If the body of the "if" (or "while", etc.) is a single (simple) statement, then "begin" and "end" are optional - so you can write either

if cond then begin
  do_stuff()
end

or just

if cond then do_stuff()
For example, you can compare the readability of Arabic numbers vs Roman numerals by asking two people, each proficient in one system, to perform the same arithmetic calculations, and timing them.
That would measure how easy it is to perform arithmetic in the two systems... which is not the same as readability. Similarly it's a good idea not to confuse "easy for a computer to read (and execute)" vs "easy for a human to read (and understand)" - both are important in different ways, but they are entirely separate concerns!
Perhaps you should google what QMX actually is, so you realize it has nothing to do with 'process control'.
QNX, not QMX. It's a hard-realtime microkernel OS. That doesn't mean it can do process control on its own, but the realtime features are handy if that is what you want to do with it.
Why haven't you written such a thing before? Because it's too much hassle. Which is the very reason threading is underused.
LOL. Actually there's a better reason such a thread launch facility doesn't commonly get written - which is that, in most circumstances, it really doesn't help performance that much, if at all - and the added complexity makes for a big net minus. There are a number of issues:
Firstly, spawning threads is expensive. Yes, on Linux it's "cheap", but that's "cheap" compared to other implementations - it's still a lot compared to doing a modest amount of work on the local CPU. (Why is it so expensive? Basically because there's a lot of housekeeping to do. In addition to the kernel creating new kernel structures for the new thread of execution (similar to creating a process), the process's thread library must allocate a stack for the new thread (involving modifying the process's page tables), iterate through all loaded shared libraries to allocate any thread-local storage they require, and so on - requiring multiple syscalls, a TLB flush, and at least one context switch.) To some extent the impact of this overhead can be reduced by maintaining a pool of pre-created threads, but this either takes away control of performance (if done automatically by your language/library) or substantially increases complexity (if you implement it yourself, since you then have to synchronise the threads carefully).
The second problem is that, unless you're very careful, extra threads don't buy you much performance, and can indeed hurt. Take the example you gave - doing some processing on each struct in an array, where each such struct contains an int and a double (16 bytes total, including alignment padding). With 64-byte cache lines (typical on x86), there are 4 such structs per cache line. If you distribute the processing over threads running on different cores, then instead of one core waiting for the cache line to come in from main memory and then processing the 4 structs very rapidly (since they're now all in cache), you'll have 4 cores each waiting for the data to be available - i.e. up to a 4x slowdown for memory-bound tasks. And that's assuming the structure is only read from; if it's written to as well, then the cache line will have to bounce between cores, and the multithreading slowdown will be many times worse. Now, if you ensure that structs in the same cache line get processed by the same core (ideally in sequence, and by the same kernel thread), then you do potentially get a big speedup - provided you don't hit any other gotchas - but the C++ code you're promoting doesn't seem to guarantee this in any way.
Third, and perhaps most importantly, data dependencies matter. In your example you're detaching all the threads; this is not realistic, because it means you can never depend on their operations having finished. In the vast majority of cases you do need to know when an operation has finished: you're generally doing work for a reason - namely, that you're going to use the result - and you can't begin to use that result until you know it has been produced. That, in and of itself, adds complexity: you have to analyse your program's dataflow much more carefully in the presence of threads, because C/C++ will quite happily let you use a variable before another thread has finished assigning to it, without any sort of warning or exception. The analysis can certainly be done, and synchronisation put in place to eliminate the problems - but that is further overhead, both in the program's performance and in the complexity of the program itself, and hence the time taken to write it (and especially to enhance it later, when the synchronisation model may not be so fresh in one's mind).
Used correctly and in the right circumstances, threads on an N-core system can give an N-times speedup (or greater, due to caching effects). Used badly, they'll at best reduce performance, and usually they'll also increase complexity and lead to subtle bugs that are hard to debug.
The new thread features in modern C++ are very cool, but the fact that they didn't exist before is not what's been preventing competent programmers from using threads all over the place.