I think he means that it's trivial in Gentoo to run arbitrary versions of any old library or dependency for the sake of a given application that is stuck in the past, not just package-pinning as we do in Debian-land. For example, I have an old gnuradio application that was written for gnuradio 3.6.x, but this was never shipped in any official release of Debian (it went gnuradio 3.5 in wheezy -> gnuradio 3.7 in jessie).
In Gentoo it's trivial to have a specific old version of libfoo (and all the old, terribly specific versions of its huge pile of dependencies) installed alongside whatever passes for the current version of libfoo for the rest of your applications which aren't stuck in the past.
In Debian I had to re-build gnuradio from the 3.6 source, with much tweaking of debian/control and debian/rules and wading through Debian-specific patchsets intended for gnuradio 3.5 or 3.7, which don't apply to gnuradio 3.6. And all its dependencies. And suffer the fact that now all of the rest of my applications are forced to use gnuradio 3.6.
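For anyone unfamiliar, this is roughly what the Gentoo side looks like. A sketch only, assuming the library's ebuilds define distinct SLOTs so parallel installs are possible; the package atoms below are illustrative, not gnuradio's actual slot layout:

```
# /etc/portage/package.accept_keywords (or package.mask, as needed):
# keep the old slotted version installable alongside the current one
=dev-libs/libfoo-1.2.3

# Then both slots coexist on the same system:
#   emerge =dev-libs/libfoo-1.2.3:1.2   # old slot, for the legacy app
#   emerge dev-libs/libfoo:0            # current slot, for everything else
#   equery list libfoo                  # (gentoolkit) lists both versions
```

The legacy application links against the old slot while everything else tracks the current one, with no forks of the packaging needed.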
Other than being forced to type in 12 passphrases manually to decrypt each hard disk at every single goddamn boot, because custom keyscripts just "aren't the systemd way". Or spending hours figuring out why your Requires=network.target units inexplicably never start on boot, without a single shred of clue, evidence or event in any logs whatsoever, despite LogLevel=debug, even though network.target clearly flashes by during boot, systemd-analyze clearly shows that it knows about the relationship with your unit, and the service starts normally when you log in and run systemctl start manually. Or that tweaking your daemon args now requires a systemd daemon-reload as well as a restart.
Yes, apart from all that, and the time saved now that admins will never have to see another freaky, alien shell script ever again because init systems were the only thing which used them, apart from all that... I'm hoping like hell systemd will one day buy me something other than more downtime.
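For anyone bitten by the same network.target trap: Requires= only declares a dependency, it does not impose ordering, so without an explicit After= systemd may start your unit before the target is reached; and for "the network is actually up" the documentation points at network-online.target rather than network.target. A minimal sketch, with a hypothetical unit and daemon name:

```
# /etc/systemd/system/mydaemon.service (hypothetical)
[Unit]
# Wants= pulls the target in; After= is what actually makes
# systemd wait for it before starting this unit.
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/mydaemon

[Install]
WantedBy=multi-user.target
```

And yes: after editing this file you need `systemctl daemon-reload` before the restart takes effect.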
Docker's transparent caching of RUN/ADD/etc Dockerfile steps has nothing to do with reusable containers. That "it takes less than a second [to] create a handful of [new] containers" is just as true for docker as it is for plain old LXC.
There are two sentences here, but I'm not sure how they relate to each other, or to the docker feature I'm discussing.
Before docker, as a (not necessarily web) developer I used vagrant to create reproducible environments and build artefacts from a very small set of files. The goal being: I should be able to git clone a very tiny repo tracking a few KiB of scripts and declarative config, which - when run - pulls down dependencies, reproduces a build toolchain/environment, performs the build, and delivers substantially identical artefacts regardless of which machine I run it from. I should be able to look at an artefact in 2-3 years' time, look it up in our version history and reproduce it easily.
Achieving this isn't so easy. Even if I had been using LXC all along, I still wouldn't have had the main thing I enjoy about Dockerfiles: cached build steps. I've been cornered by time pressures in the past where I couldn't afford to tweak everything nicely, so I've had to release build artefacts from a process that wasn't captured in the automation (i.e. I manually babysat the build to meet a requirement on time). This is because hour-long builds make for maybe 3-4 iterations per day, so you end up with one thread of work where you hack interactively while waiting to see whether the automation can deliver the result you asked for an hour or two ago. I still have this to an extent with Docker (adjust a build step and re-run, or step in interactively to explore what's needed), but because Dockerfile steps are cached these iterations are massively accelerated, and there have only been a handful of occasions where I've had to bypass the process.
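Concretely, the cache win comes from ordering Dockerfile steps so the slow, stable ones come first: docker reuses the cached layer for any step whose instruction and inputs haven't changed. A sketch, with hypothetical package names and helper script:

```
FROM debian:stable

# Slow, rarely-changing steps first: cached after the first build.
RUN apt-get update && apt-get install -y build-essential cmake

# Dependency manifest before the source tree, so editing source
# doesn't invalidate the dependency-install layer.
COPY deps.txt install-deps.sh /build/
RUN /build/install-deps.sh          # hypothetical helper

# The bit you're iterating on: only these layers rebuild each run.
COPY src/ /build/src/
RUN cd /build && make
```

With this ordering, a source-only change re-runs just the last two steps, which is where the hour-long build collapses to minutes between iterations.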
I can't speak for using Docker to actually containerize running applications (that's not how I use it), but just this narrow usage of Docker has helped my productivity enormously.
There is *still* no alternative to keyscript in crypttab. Upgrading to systemd trashes a system that relies on smartcards or other hardware to obtain key material for mounting encrypted disks. I wouldn't normally be this upset - you could write this off as a teething problem - except that I read through this thread where Lennart seems to doubt the very validity of the entire use-case... I had briefly contemplated contributing to this bug, but the insistence that we should all write C programs (unless you want your initrd to carry python or perl interpreters and all that baggage) for every possible permutation of every key-delivery system devisable by admin-kind made me give up and revert to sysvinit instead.
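For context, the Debian-ism in question: a keyscript is any executable that writes key material to stdout, named per-device in crypttab. A sketch, with hypothetical device and script names:

```
# /etc/crypttab under sysvinit + Debian's cryptsetup:
# <target>   <source>                <keyfile>  <options>
cryptdisk1   /dev/disk/by-uuid/XXXX  none       luks,keyscript=/usr/local/sbin/smartcard-key

# /usr/local/sbin/smartcard-key (hypothetical) talks to the smartcard
# and emits the key on stdout. systemd's crypttab handling has no
# equivalent hook, so it falls back to prompting for a passphrase.
```

That one `keyscript=` option is the whole extension point: it covers smartcards, remote key servers, key derivation gadgets - anything an admin can script - which is exactly what's lost in the migration.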
I believe Advantech will still happily sell you ISA backplanes. At the same time I put these things together, I had to reverse-engineer and fabricate some old I/O cards which had "unique" (incompatible with readily available cards) interrupt register mappings, also with EAGLE - great software!
I should mention: the MS-DOS system has outlasted three replacement attempts (two windows-based applications were from the original vendor who sold the MS-DOS system). There's just something completely unbreakable about the old stuff.
That's true. People scoff at the older taxonomic groupings from before we had molecular evidence, but I'm often surprised at how similar new phylogenies are to huge chunks of the old taxonomies. What's more, at least with plants, one molecular study can produce a quite different-looking evolutionary tree from another, depending on which genes were used to compute them.
Which raises the question... what's the ground truth? Data from classical taxonomy is actually extremely valuable. It can help inform molecular studies. It can be used to feed consensus trees, or to indicate which genes might yield certain phenotypes.
There seem to be many who think that with enough CPU power and algorithms we can turn any old meaningless garbage string of GATC into something we can pretend is useful. It seems like a lossy way of thinking... you can do interesting work without names, that's true - but the reckless abandon and total lack of scientific discipline in how names are used would never be tolerated in the "harder" sciences.
I dare you to pick up ten different papers using species or group names... and find even just one that cites the name in a reproducible, scientifically useful way (i.e. cites the taxonomic publication which specifies what they mean when they use the name).
I wouldn't ask Chrome to do anything more than it already does, which is to just do its job - help me navigate the web. I refuse to believe that a prominent domain part which yields the exact same phishing mitigation, and a visible path part are mutually exclusive things.
I am at a loss as to why you'd dismiss the ability to spot obviously funky URLs with a dodgy "but script injection vulnerabilities are browser-independent!" straw man; surely there's a stronger rebuttal to my thoughts than this.
Are you seriously suggesting that a prominent domain part and a visible path part are mutually exclusive?
And whilst it's fun to talk about redundancy between the <h1> text, title text and the address bar, it's also true that the address bar is the only one that's always visible in a consistent location that isn't lying to you.