The article now contains an update stating that queued TRIM was not involved, only ordinary TRIM (the author's Hacker News comment also says ordinary TRIM was to blame). It also appears that the company's drives were enterprise Samsung PROs.
Anecdotes 1 and 2 were running in native mode because it was initially a fresh install that was then switched to sysvinit-core. Anecdote 3 I don't know (most likely compatibility mode, as it was several years ago). Even if all the anecdotes were only running in compatibility mode, the results show systemd finishing quicker...
Does the above make things any better, and how fast do you expect things to go when you're bottlenecked on I/O throughput? Is 15 seconds for a hard disk or 4(!) seconds for a RAM disk so bad?
Interesting... Googling around backs up your memory - systemd's readahead implementation was removed in v217.
The NFS mounts were always mounted at boot and not on demand, both before and after systemd. The difference was that, because there were multiple NFS mounts, they could only proceed serially before the switch. After the switch, the waiting for the mounts happened in parallel, not just with each other but with other services too. Yes, it is entirely pathological, but it really happened (and presumably other parallel inits would have solved it this way too).
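To make the parallelism point concrete, here's a toy sketch (Python, with a made-up 0.3 second delay standing in for each real NFS mount wait - none of this is actual boot code):

    import concurrent.futures, time

    def wait_for_mount(name, delay=0.3):
        # Stand-in for waiting on one NFS mount to become available.
        time.sleep(delay)
        return name

    mounts = ["/home", "/srv/a", "/srv/b", "/srv/c", "/srv/d"]

    start = time.time()
    for m in mounts:                                   # serial, sysvinit-style
        wait_for_mount(m)
    print("serial:   %.1f s" % (time.time() - start))

    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(mounts)) as pool:
        list(pool.map(wait_for_mount, mounts))         # parallel, systemd-style
    print("parallel: %.1f s" % (time.time() - start))

Five 0.3 second waits take ~1.5 seconds in serial but ~0.3 seconds when waited on together - same idea, just scaled down from minutes to fractions of a second.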
For what it's worth I have an EeePC too. The default Xandros distro was a classic case of not running very much because it was highly custom (e.g. no printing, no mDNS, the kernel doesn't have to probe for all different kinds of hardware or hotplug of USB devices, the I/O scheduler is set to deadline, etc.) and thus was fast - I would always expect it to outperform a generic "full fat" Linux distribution. I'd expect your FreeBSD to beat my current XFCE Ubuntu setup too, because I bet you can get it to start less. With lots of hand tweaks the boot speed to an XFCE desktop on my EeePC 900 is still 17 seconds...
Anecdote 1: I've just timed a Debian Jessie single-CPU, hard-disk-based VM install with BTRFS as the filesystem and a GNOME 3 desktop where the user is automatically logged in at boot and an autostart script records the time (sketched below). Here are my rough systemd and sysvinit results (times are from after the kernel finished to when the GNOME autostart script ran):
sysvinit (apt-get install sysvinit-core)
First boot: 20 seconds
Second boot: 18 seconds
Third boot: 19 seconds
systemd (apt-get remove sysvinit-core)
First boot: 15 seconds
Second boot: 16 seconds
Third boot: 15 seconds
sysvinit averages 19 seconds, systemd averages 15.33 seconds. In this case it does appear that systemd booted the system faster.
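For reference, the autostart timing script was nothing fancier than something along these lines (a rough reconstruction, and the log path is just an example):

    #!/usr/bin/env python3
    # Append seconds-since-kernel-boot to a log; launched from a GNOME
    # autostart .desktop entry so it fires once the desktop session is up.
    with open("/proc/uptime") as f:
        boot_seconds = float(f.read().split()[0])
    with open("/home/user/boot-times.log", "a") as log:
        log.write("%.1f\n" % boot_seconds)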
Anecdote 2: Same as above but where the VM's disk is sitting wholly in RAM. The time for sysvinit dropped to 5 seconds and the time for systemd dropped to 4 seconds.
My personal guess is that the more you are running and the slower the disk, the more likely systemd is to benefit you. You don't say how you did your comparison though, or what type your "disks" were. If your comparison was between different versions of a Linux distro then it could simply be that the previous version did less (which is always the fastest way to boot)...
Another anecdote: a few years back I saw Slackware systems at a university converted over to systemd. Boot times (which involved waiting for multiple NFS mounts) went from over three minutes down to less than a minute because more of the waiting was done in parallel.
While my interaction with storage is "how long did it take for this operation to finish", the answer to that can vary dramatically on a hard disk depending on the type and frequency of I/O I was doing (whereas the difference between best and worst values narrows with flash).
Your table doesn't capture the difference in wait time between only reading a 1 GByte ISO in big chunks that's been laid out sequentially and serving up multiple torrents from big files spread over different parts of the disk simultaneously. You aren't always in the worst case, and if the data is already cached in RAM you may be totally unable to perceive a difference because the waiting was decoupled from the request.
Telling people that "wait time" describes what they're seeing feels like it's trying to push latency and throughput into one number when both need to be distributions describing particular patterns of (potentially parallel) I/O and that's not even getting into queue depth. I guess there are parallels to telling people to describe things in terms of load average too...
While PCs continue to ship with legacy BIOS support you should be able to continue booting DOS on bare-metal PCs (as you did). However, FreeDOS does not yet support UEFI, so if/when UEFI-only machines come out the grandparent's challenge will become more difficult.
(I'm not a Slashdot manager, just another site user.) Since the redesign, things like comment boxes and posts have no white space at their left edge. Additionally, something strange seems to be up with tags - they only display when you hover over them?
It's fine if the documentation is highly technical - I've written Linux kernel drivers before :)
If I believe the data is already going to be mostly sorted, why wouldn't I just use insertion sort?
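i.e. something like this (a plain textbook insertion sort, not any particular library's implementation) - the total work of the inner loop is proportional to the number of inversions, so on mostly-sorted input it's close to linear:

    def insertion_sort(a):
        # Near O(n) when the input is already mostly sorted: the inner loop's
        # total iterations equal the number of out-of-order pairs (inversions).
        for i in range(1, len(a)):
            x = a[i]
            j = i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a

    print(insertion_sort([1, 2, 4, 3, 5, 6, 8, 7, 9]))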