
Comment: Re: USB Device Recommendation (Score 1) 119

by csirac (#48200711) Attached to: Google Adds USB Security Keys To 2-Factor Authentication Options
I actually can't tell if you're being sarcastic... but you've just described U2F. Whilst YubiKeys and other vendors do challenge/response, I think FIDO usage is typically one-time-password mode. All other items are addressed (you can set a PIN to protect config and firmware updates, or finalize the device so it can't be changed ever again).
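To make the challenge/response part concrete, here's a rough sketch of the idea in Python using plain ECDSA from the cryptography package. It's an illustration of the concept only, not the actual U2F/FIDO wire protocol, and the key handling is deliberately simplified:

```python
# Conceptual sketch only: challenge/response with an asymmetric key pair,
# which is the general mechanism U2F-style tokens rely on. This is NOT the
# real U2F/FIDO protocol - no app IDs, counters, or attestation here.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# "Registration": the token generates a key pair; the server keeps the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()

# "Login": the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the token signs it (the private key never leaves the device)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature; a replayed response fails because
# the next login uses a different challenge.
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge/response verified")
```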

Comment: which is why I should learn mercurial (Score 1) 243

by csirac (#48191565) Attached to: Help ESR Stamp Out CVS and SVN In Our Lifetime
I eventually got submodules to work properly for me, and have been using them effectively (I think) for a few years now. But it's not easy teaching other devs, which is why I need to spend some time investigating hg properly. Although you can do sparse checkouts with git, apparently hg has some plugins which allow you to partially clone a repo without necessarily cloning all of the objects in its history (supposedly plugins can fetch those on demand, rather than in the initial clone). It seems this is possible because git is designed around a data format, whereas hg is designed around an API. It all seems great, but I just can't find the time to invest in hg.

Comment: Re:Gentoo (Score 1) 303

by csirac (#48100433) Attached to: What's Been the Best Linux Distro of 2014?

I think he means that it's trivial in Gentoo to run arbitrary versions of any old library or dependency for the sake of a given application that is stuck in the past, not just package-pinning as we do in Debian-land. For example, I have an old gnuradio application that was written for gnuradio 3.6.x, but that version was never shipped in any official release of Debian (it went from gnuradio 3.5 in wheezy to gnuradio 3.7 in jessie).

In Gentoo it's trivial to have a specific old version of libfoo (and all the old, terribly specific versions of its huge pile of dependencies) installed alongside whatever passes for the current version of libfoo for the rest of your applications which aren't stuck in the past.

In Debian I had to rebuild gnuradio from the 3.6 source, with much tweaking of the debian/control and debian/rules files and much wading through Debian-specific patchsets intended for gnuradio 3.5 or 3.7 that don't apply to gnuradio 3.6. And then do the same for all its dependencies. And then suffer the fact that all of the rest of my applications are now forced to use gnuradio 3.6.

Comment: Re:Why do people care so much? (Score 1) 774

by csirac (#48099965) Attached to: Systemd Adding Its Own Console To Linux Systems

Other than being forced to type in 12 passphrases manually to decrypt each hard disk at every single goddamn boot, because custom keyscripts just "aren't the systemd way". Or spending hours figuring out why your Requires=network.target units inexplicably never start on boot, without a single shred of clue or evidence or event in any logs whatsoever, despite LogLevel=debug, even though network.target clearly flashes by during boot, systemd-analyze clearly shows that it knows about the relationship with your unit, and the service starts normally when you log in and systemctl start it manually. Or that tweaking your daemon args now requires a systemctl daemon-reload as well as a restart.

Yes, apart from all that, and the time saved now that admins will never have to see another freaky, alien shell script ever again (because init systems were the only thing which used them), apart from all that... I'm hoping like hell systemd will one day buy me something other than more downtime.

Comment: Re: Unfamiliar (Score 2) 370

by csirac (#47881169) Attached to: The State of ZFS On Linux
For the same reason your package manager bothers with shasums on the software you install even though the several network layers responsible for delivering it already faithfully checksummed each little packet as it flew past: the filesystem is the earliest and only point which knows exactly what files are supposed to look like in their entirety. That ZFS/BTRFS scrubs turn up errors on large pools with otherwise perfectly fine hardware means those block/packet-level validations are at too low a level to make assurances for the higher-level data structures using them.
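The end-to-end check really is the whole point. A trivial sketch of the same idea in Python; the path and expected digest are made-up placeholders, but it's the same shape of check a shasum verification (or, at a much larger scale, a scrub) performs: compare the data you actually have against what it's supposed to be, at the level where "what it's supposed to be" is actually known.

```python
# A trivial end-to-end integrity check: hash the whole file as the application
# sees it, instead of trusting that every lower layer (TCP checksums, disk ECC,
# firmware, ...) got each individual block right. Path and digest below are
# hypothetical placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123456789abcdef..."                       # placeholder digest
actual = sha256_of("/srv/archive/important-data.bin")  # placeholder path
if actual != expected:
    print("corruption somewhere below us, despite every layer reporting 'OK'")
```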

Comment: Re:No one cares enough to build a competitor. (Score 1) 47

by csirac (#47844451) Attached to: Should Docker Move To a Non-Profit Foundation?

Docker's transparent caching of RUN/ADD/etc. Dockerfile steps has nothing to do with reusable containers. That "it takes less than a second [to] create a handful of [new] containers" is just as true for Docker as it is for plain old LXC.

There are two sentences here but I'm not sure how they relate to each other, or to the Docker feature I'm discussing.

Comment: Re:No one cares enough to build a competitor. (Score 3, Interesting) 47

by csirac (#47843677) Attached to: Should Docker Move To a Non-Profit Foundation?

Before docker, as a (not necessarily web) developer I used vagrant to create reproducible environments and build artefacts from a very small set of files. The goal being: I should be able to git clone a very tiny repo tracking a few KiB of scripts and declarative stuff/config, which - when run - pulls down dependencies, reproduces a build toolchain/environment, performs the build, and delivers substantially identical artefacts regardless of which machine I run it from. I should be able to look at an artefact in 2-3 years' time, look it up in our version history and reproduce it easily.

Achieving this isn't so easy. Even if I had been using LXC all along I still wouldn't have had the main thing from Dockerfiles that I enjoy: cached build steps. I've been cornered by time pressure in the past where I couldn't afford to tweak everything nicely, so I've had to release build artefacts from a process which wasn't captured in the automation (i.e. I manually babysat the build to meet a requirement on time). This is because hour-long builds make for maybe 3-4 iterations per day, so you end up with one thread of work where you're hacking interactively while you wait to see whether the automation can deliver the result you had already reached an hour or two ago. I still have this to an extent with Docker (adjust a build step and re-run, or step in interactively to explore what's needed), but because Dockerfile steps are cached these iterations are massively accelerated, and there have only been a handful of occasions where I've had to bypass the process now.
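To illustrate why the caching matters so much for iteration speed, here's a toy Python model of how a Dockerfile step cache behaves. The cache key derivation is a deliberate simplification (real Docker also hashes the content of files referenced by ADD/COPY, among other things), so treat it as a sketch of the behaviour, not Docker's implementation:

```python
# Toy model of Dockerfile step caching: each step's identity is derived from
# its parent layer plus the instruction, so editing step N only re-runs steps
# N..end on the next build.
import hashlib

def layer_id(parent: str, instruction: str) -> str:
    return hashlib.sha256(f"{parent}\n{instruction}".encode()).hexdigest()[:12]

def build(steps, cache):
    parent = "scratch"
    for instruction in steps:
        lid = layer_id(parent, instruction)
        if lid in cache:
            print(f"  cached {lid}  {instruction}")
        else:
            print(f"  BUILD  {lid}  {instruction}")   # only these cost real time
            cache.add(lid)
        parent = lid

cache = set()
base_steps = [
    "RUN apt-get update && apt-get install -y build-essential",  # slow, rarely changes
    "ADD requirements.txt /src/",
    "RUN pip install -r /src/requirements.txt",                  # slow, changes sometimes
    "ADD . /src/",                                               # changes every iteration
    "RUN make -C /src",
]

print("first build (everything runs):")
build(base_steps, cache)

print("second build after editing only the application source:")
changed = list(base_steps)
changed[3] = "ADD . /src/  # source contents changed"   # stand-in for a content change
build(changed, cache)   # first three steps hit the cache; only the tail re-runs
```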

I can't speak for using Docker to actually containerize running applications (that's not how I use it), but just this narrow usage of Docker has helped my productivity enormously.

Comment: Re:My opinion on the matter. (Score 1) 826

by csirac (#47753003) Attached to: Choose Your Side On the Linux Divide

There is *still* no alternative to keyscript in crypttab. Upgrading to systemd trashes a system that relies on smartcards or other hardware to obtain key material for mounting encrypted disks. I wouldn't be this upset, normally - you can imagine that this is just a normal teething problem - except I read through this thread where Lennart seems to doubt the very validity of the entire use case... I had briefly contemplated seeing if I could contribute to this bug, but the insistence that we should all write C programs (unless you want your initrd to carry python or perl interpreters and all that baggage) for every possible permutation of every key delivery system devisable by admin-kind made me give up and revert to sysvinit instead.
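For anyone wondering what's actually being lost here: a crypttab keyscript is just an executable that emits the key material on stdout for cryptsetup to consume. A minimal sketch in Python follows; the token device path and the read logic are made-up placeholders, and of course a real keyscript has to live in the initramfs, which is exactly why carrying a Python or Perl interpreter there is unattractive:

```python
#!/usr/bin/env python3
# Sketch of what a crypttab keyscript boils down to: an executable that writes
# the key material to stdout for cryptsetup to consume (referenced from
# /etc/crypttab via the keyscript= option). The token device path and the read
# logic are made-up placeholders - a real script would talk to the actual
# smartcard or token hardware.
import sys

TOKEN_DEV = "/dev/hypothetical-token"   # placeholder, not a real device

def read_key_from_token(path: str) -> bytes:
    # Placeholder: pretend the token hands back 64 bytes of key material.
    with open(path, "rb") as dev:
        return dev.read(64)

if __name__ == "__main__":
    sys.stdout.buffer.write(read_key_from_token(TOKEN_DEV))
```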

Comment: Re:Wear leveling (Score 1) 68

I believe Advantech will still happily sell you ISA backplanes. Around the same time I put these things together, I had to reverse-engineer and fabricate some old I/O cards which had "unique" (incompatible with readily available cards) interrupt register mappings, also with EAGLE - great software!

I should mention: the MS-DOS system has outlasted three replacement attempts (two windows-based applications were from the original vendor who sold the MS-DOS system). There's just something completely unbreakable about the old stuff.

Comment: Re:Wear leveling (Score 4, Informative) 68

Many industrial computers have CF-card slots for this very application. I put together a few MS-DOS systems using SanDisk CF cards around 8 years ago and they're still going strong, using a variant of one of these cards which has a CF slot built-in (so no need for a CF -> IDE adapter): the PCA-6751.

Comment: Re:It's not dead, it's evolving (Score 1) 76

by csirac (#46958891) Attached to: As Species Decline, So Do the Scientists Who Name Them

That's true. People scoff at the older taxonomic groupings from before we had molecular evidence, but actually I'm often surprised at how similar new phylogenies are to huge chunks of the old taxonomies. What's more, at least with plants, one molecular study can produce a quite different-looking evolutionary tree from another depending on which genes were used to compute them.

Which begs the question... what's the ground truth? Data from classical taxonomy is actually extremely valuable. It can help inform molecular studies. It can be used to feed consensus trees or indicate which genes might yield certain phenotypes.
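A consensus tree, for what it's worth, is little more than a vote over clades. A toy illustration with made-up gene trees (a real analysis would use proper phylogenetics tooling rather than this hand-rolled sketch):

```python
# Toy majority-rule consensus over several gene trees, representing each tree
# by the set of non-trivial clades (taxon subsets) it contains. The input
# trees are invented purely for illustration.
from collections import Counter

gene_trees = [
    {frozenset({"A", "B"}), frozenset({"A", "B", "C"})},   # tree from gene 1
    {frozenset({"A", "B"}), frozenset({"C", "D"})},        # tree from gene 2
    {frozenset({"A", "C"}), frozenset({"A", "B", "C"})},   # tree from gene 3
]

counts = Counter(clade for tree in gene_trees for clade in tree)
majority = {clade for clade, n in counts.items() if n > len(gene_trees) / 2}

print("clades supported by a majority of gene trees:")
for clade in majority:
    print(" ", sorted(clade), f"({counts[clade]}/{len(gene_trees)})")
```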

There seem to be many who think that with enough CPU power and algorithms we can turn any old meaningless garbage string of GATC into something we can pretend is useful. It seems like a lossy way of thinking... you can do interesting work without names, that's true - but the reckless abandon and total lack of scientific discipline when using names would never be tolerated in the "harder" sciences.

I dare you to pick up ten different papers using species or group names... and find even just one that cites the name in a reproducible, scientifically useful way (i.e. cites the taxonomic publication which specifies what they mean when they use the name).

Comment: Re:Please try harder. (Score 1) 327

I wouldn't ask Chrome to do anything more than it already does, which is to just do its job - help me navigate the web. I refuse to believe that a prominent domain part (which yields the exact same phishing mitigation) and a visible path part are mutually exclusive things.

I am at a loss as to why you'd dismiss the ability to spot obviously funky URLs with a dodgy "but script injection vulnerabilities are browser-independent!" straw man; surely there's a stronger rebuttal to my thoughts than this.
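Nothing about emphasising the registrable domain prevents showing the path. A quick Python sketch of the idea; the "registrable domain" guess here is a naive last-two-labels hack purely for illustration (a real browser would use the Public Suffix List), and upper-case stands in for visual emphasis:

```python
# Sketch: emphasise the registrable domain while still showing the path.
# The domain guess is naive (last two labels) and for illustration only.
from urllib.parse import urlparse

def render_address_bar(url: str) -> str:
    parts = urlparse(url)
    host = parts.hostname or ""
    domain = ".".join(host.split(".")[-2:])                       # naive guess
    subdomain = host[: -len(domain)].rstrip(".") if host != domain else ""
    prefix = f"{subdomain}." if subdomain else ""
    # Upper-case stands in for visual emphasis (bold/darker text in a real UI).
    return f"{prefix}{domain.upper()}{parts.path or '/'}"

print(render_address_bar("https://accounts.google.com.evil.example/login"))
print(render_address_bar("https://accounts.google.com/ServiceLogin"))
```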

Comment: Re:Please try harder. (Score 1) 327

Are you seriously suggesting that a prominent domain part and a visible path part are mutually exclusive?

And whilst it's fun to talk about redundancy between the <h1> text, title text and the address bar, it's also true that the address bar is the only one that's always visible in a consistent location that isn't lying to you.
