Docker's transparent caching of RUN/ADD/etc Dockerfile steps has nothing to do with reusable containers. That "it takes less than a second [to] create a handful of [new] containers" is just as true for docker as it is for plain old LXC.
There are two sentences here, but I'm not sure how they relate to each other or to the docker feature I'm discussing.
Before docker, as a (not necessarily web) developer I used vagrant to create reproducible environments and build artefacts from a very small set of files. The goal being: I should be able to git clone a very tiny repo tracking a few KiB of scripts and declarative config, which - when run - pulls down dependencies, reproduces a build toolchain/environment, performs the build, and delivers substantially identical artefacts regardless of which machine I run it from. I should be able to look at an artefact in 2-3 years' time, look it up in our version history and reproduce it easily.
Achieving this isn't so easy. Even if I had been using LXC all along I still wouldn't have had the main thing from Dockerfiles that I enjoy: cached build steps. I've been cornered by time pressure in the past where I couldn't afford to tweak everything nicely, so I've had to release build artefacts from a process which isn't captured in the automation (i.e. I manually babysat the build to meet a requirement on time). This is because hour-long builds make for maybe 3-4 iterations per day, so you have one thread of work where you're hacking interactively while you wait to see if the automation was able to deliver the result you were up to an hour or two ago. I still have this to an extent with Docker (adjust a build step and re-run, or step in interactively to explore what's needed), but because Dockerfile steps are cached these iterations are massively accelerated, and there have only been a handful of occasions where I've had to bypass this process.
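To make the caching concrete, here's a hypothetical Dockerfile sketch (the base image, paths and fetch-deps helper are all made up) ordered so that the slow, stable steps come first; Docker reuses the cached layer for any step whose instruction and inputs haven't changed, so a source edit only re-runs the cheap final layers:

```dockerfile
# Slow, rarely-changing steps first: these layers are cached and
# reused verbatim across rebuilds.
FROM debian:stable
RUN apt-get update && apt-get install -y build-essential

# Dependency manifests change less often than source code, so copy
# them separately to keep the dependency-fetch layer cached.
COPY deps.txt /build/
RUN /usr/local/bin/fetch-deps /build/deps.txt   # hypothetical helper

# Source changes invalidate only the layers from here down, so an
# edit-rebuild iteration costs seconds rather than the full hour.
COPY src/ /build/src/
RUN make -C /build/src
```

The key design point is simply the ordering: anything above the first changed line is served from cache.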
I can't speak for using Docker to actually containerize running applications (that's not how I use it), but just this narrow usage of Docker has helped my productivity enormously.
There is *still* no alternative to keyscript in crypttab. Upgrading to systemd trashes a system that relies on smartcards or other hardware to obtain key material for mounting encrypted disks. I wouldn't be this upset, normally - you can imagine that this is just a normal teething problem - except I read through this thread where Lennart seems to doubt the very validity of the entire use-case... I had briefly contemplated seeing if I could contribute to this bug, but the insistence that we should all write C programs (unless you want your initrd to carry python or perl interpreters and all that baggage), for every possible permutation of every key delivery system devisable by admin-kind, made me give up and revert to sysvinit instead.
I believe Advantech will still happily sell you ISA backplanes. Around the time I put these things together, I also had to reverse-engineer and fabricate some old I/O cards which had "unique" interrupt register mappings (incompatible with readily available cards), also with EAGLE - great software!
I should mention: the MS-DOS system has outlasted three replacement attempts (two Windows-based applications were from the original vendor who sold the MS-DOS system). There's just something completely unbreakable about the old stuff.
That's true. People scoff at the older taxonomic groupings from before we had molecular evidence, but actually I'm often surprised at how similar new phylogenies are to huge chunks of the old taxonomies. What's more, at least with plants, one molecular study can produce quite a different-looking evolutionary tree from another, depending on which genes were used to compute them.
Which raises the question... what's the ground truth? Data from classical taxonomy is actually extremely valuable. It can help inform molecular studies. It can be used to feed consensus trees or indicate which genes might yield certain phenotypes.
There seem to be many who think that with enough CPU power and algorithms we can turn any old meaningless garbage string of GATC into something we can pretend is useful. It seems like a lossy way of thinking... you can do interesting work without names, that's true - but the reckless abandon and total lack of scientific discipline when using names would never be tolerated in the "harder" sciences.
I dare you to pick up ten different papers using species or group names... and find even just one that cites the name in a reproducible, scientifically useful way (i.e. cites the taxonomic publication which specifies what they mean when they use the name).
I wouldn't ask Chrome to do anything more than it already does, which is to just do its job - help me navigate the web. I refuse to believe that a prominent domain part which yields the exact same phishing mitigation, and a visible path part are mutually exclusive things.
I am at a loss as to why you'd dismiss the ability to spot obviously funky URLs with a dodgy "but script injection vulnerabilities are browser-independent!" straw man; surely there's a stronger rebuttal to my thoughts than this.
Are you seriously suggesting that a prominent domain part and a visible path part are mutually exclusive?
And whilst it's fun to talk about redundancy between the <h1> text, title text and the address bar, it's also true that the address bar is the only one that's always visible in a consistent location that isn't lying to you.
There are obvious ways to shoot for the phishing mitigations that this is apparently seeking to achieve, without turning the web into an app store. We used to make fun of stupid flash sites due to lack of linkability; is it really necessary to so thoroughly lunge off the cliff into this idiocy now?
I wonder how many bad guys are already thinking of ways to exploit this. Yes, the domain is more prominent - that should have been fixed years ago - but how many sites out there are completely free of XSS vulnerabilities? When this eventually becomes non-optional, how am I going to spot https://mybank.foo/?q="><script>evil; stuff;</script>
The perfect irony of course is that Google's own pagerank depends on cross-site linking... By robbing people of URLs, a future generation of net users will grow up never knowing how to share a page with their friends unless there's a sharing mechanism within the same site their friends already use.
BTRFS is so mature already, I never lost my data with it
Dude, nobody said BTRFS is mature. Did you read the part where I've had to manually rebalance several volumes on multiple occasions? I'm sorry that you interpreted this statement as a ringing endorsement of a mature filesystem - but it's not the case that users should have to do this kind of babysitting in a mature technology.
I *have* had BTRFS fill my logs with checksum failures on a couple of dying disks, and I was able to recover everything intact (the bulk of this data had shasums thanks to some deduplication I had been doing months earlier). ext4, on the other hand (by its very design, unless you count recent kernels where metadata may be checksummed), happily allows the disk (or whatever) to take a shit all over your data without so much as the slightest hint that something might be wrong, until you go to open a file years later and discover it's zero bytes long, truncated, or full of garbage.
The data integrity features of the new file systems are nice only if you can assume them to be bug free.
No shit. But if your idea of data integrity is to start with something that doesn't even try, there just isn't any hope of that is there?
I've been using btrfs on all my machines/laptops for more than 2 years now. I've never had corruption or lost data (btrfs has actually coped rather well with failing/dying disks in my experience), unlike ext4. COW, subvolumes and snapshots are nifty.
But too many times I've had the dreaded "No space left on device" (despite 100+ GB remaining) when you run out of metadata blocks. The fix is to run btrfs balance start.
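For anyone hitting this, the incantation I mean looks something like the following (the mount point is illustrative; the usage filters restrict the rebalance to mostly-empty chunks so it finishes in minutes rather than hours):

```shell
# Inspect allocation first: metadata chunks can be exhausted while
# data chunks still show plenty of free space.
sudo btrfs filesystem df /mnt/data

# Rewrite only chunks that are under 50% used, returning the
# allocated-but-empty ones to the pool so new metadata chunks can
# be created.
sudo btrfs balance start -dusage=50 -musage=50 /mnt/data
```

This obviously needs root and a mounted btrfs filesystem, so treat it as a sketch to adapt, not a paste-and-run fix.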
Heavy Docker usage over recent months has made me hit this condition twice this year already.
I'll continue using btrfs because I've experienced silent corruption with ext4 before which I believe btrfs would have protected me against, and I like snapshots and the ability to test my firmware images cheaply with cp --reflink pristine.img test.img.
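For the curious, that reflink trick in sketch form (filenames are made up; --reflink=auto shares extents on btrfs so the clone is nearly free, and silently falls back to a plain copy on filesystems without reflink support):

```shell
# Stand-in for a real pristine firmware image.
printf 'firmware contents' > pristine.img

# Cheap copy-on-write clone on btrfs; only blocks you later modify
# in test.img get their own storage.
cp --reflink=auto pristine.img test.img

# The clone is byte-identical until you start scribbling on it.
cmp pristine.img test.img && echo "identical"
```

After trashing test.img during testing, you just delete it and clone again from the pristine image.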