
Comment: Re:Good Bye, Joey, and it is a pity! (Score 1) 447

by csirac (#48343151) Attached to: Joey Hess Resigns From Debian
Though I was never aware of him before, I do use debhelper all the time, and it's a shame it's come to this. For what it's worth, my impression is that he sees the recent GR proposal, two weeks from the jessie freeze, as the last straw in a long line of administrative/bureaucratic interference with getting real work done. (I don't know if he "liked" systemd, but it seems he wants the Debian project to stop wasting energy on debating it endlessly, especially given that the endless debates were apparently already had last year.)

Comment: Re:I will be changing to FreeBSD too (Score 1) 447

by csirac (#48343123) Attached to: Joey Hess Resigns From Debian

For me, it's keyscript support in crypttab which completely stopped my systems from booting. The systemd developers aren't keen to ever implement it because shell scripts are apparently intrinsically racy (for what it's worth, my own keyscripts have a 10s timeout and fall back to askpass if the crypt token doesn't become available; I've never had one of my servers reach this timeout, and the hardware config rarely ever changes). I wrote about some of the infinite permutations needed to support the use-case of just having a 4-line shell script, but systemd just seems religiously opposed to shell scripts. Eventually, someone pointed out that I could pass keyscript args in the kernel boot parameters, which seems a partially satisfactory solution for now.

For what it's worth, I do like the declarative nature of systemd for starting processes, socket activation etc., and I have migrated most of my stuff to systemd. It bothers me that debugging dependency issues is still so hard (ever tried "systemd-analyze dot"? The output is completely worthless as a debugging aid). Still, I am uneasy about the dogmatic anti-shellscript religion, I worry about the project's overall approach to security when simply running systemctl as a non-root user by accident causes it to segfault, and it doesn't seem right that a change in pid1 should even remotely impact userland applications at all, let alone as deeply as systemd has.
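To be fair, the declarative side I do like looks like this; a hypothetical socket-activated pair (unit names and paths are made up):

```ini
# food.socket -- systemd listens and starts the service on first connection
[Socket]
ListenStream=127.0.0.1:8080

[Install]
WantedBy=sockets.target

# food.service -- no pidfiles, no daemonize dance, no shell wrapper
[Service]
ExecStart=/usr/local/bin/food --no-daemon
```

Compared with the equivalent sysvinit script, there's genuinely nothing here to get wrong.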

At the end of the day, choice of default init system isn't going to make me switch from my favourite distro of the last 14 years (apart from a 2 year excursion to Ubuntu), but I think some of my own hostility toward systemd has been a result of the instantly dismissive remarks whenever I've tried to explain a problem I've had with it - by now I'm realizing that perhaps everybody is just too tired to tell the difference between a valid systemd complaint and yet another "get off my lawn" argument. In any case it's made me realize I should really diversify my tastes a little, currently playing with FreeBSD (again) and NixOS - that has to be a good thing.

Comment: Re: USB Device Recommendation (Score 1) 121

by csirac (#48200711) Attached to: Google Adds USB Security Keys To 2-Factor Authentication Options
I actually can't tell if you're being sarcastic... but you've just described U2F. Whilst YubiKeys and other vendors do challenge/response, I think FIDO usage is typically one-time-password (OTP) mode. All the other items are addressed (you can set a PIN to protect config and firmware updates, or finalize the device so it can't ever be changed again).

Comment: which is why I should learn mercurial (Score 1) 245

by csirac (#48191565) Attached to: Help ESR Stamp Out CVS and SVN In Our Lifetime
I eventually got submodules to work properly for me, and have been using them effectively (I think) for a few years now. But it's not easy teaching other devs, which is why I need to spend some time investigating hg properly. Although you can do sparse checkouts with git, hg apparently has some plugins which allow you to partially clone a repo without cloning all of the objects in its history (supposedly the plugins can fetch those on demand, rather than in the initial clone). It seems this is possible because git is designed around a data format, whereas hg is designed around an API. It all seems great, but I just can't find the time to invest in hg.
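To illustrate the git side of that comparison: a sparse checkout narrows the working tree, but a plain clone still fetches every object in history (which is what hg's plugins supposedly avoid). A throwaway sketch with a made-up repo layout:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# Build a toy upstream repo with a big/ and a small/ directory.
git init -q upstream && cd upstream
mkdir -p big small
echo blob > big/huge.bin
echo code > small/lib.c
git add . && git -c user.email=me@example.com -c user.name=me commit -qm files
cd ..
# Clone (full history is still copied), then narrow the working tree.
git clone -q upstream work && cd work
git sparse-checkout set small   # working tree now shows only small/
```

After this, `big/huge.bin` is gone from the working tree but its blob is still in `.git/objects` — sparse checkout saves disk in the worktree, not network or object storage.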

Comment: Re:Gentoo (Score 1) 303

by csirac (#48100433) Attached to: What's Been the Best Linux Distro of 2014?

I think he means that it's trivial in Gentoo to run arbitrary versions of any old library or dependency for the sake of a given application that is stuck in the past, not just package-pinning as we do in Debian-land. For example, I have an old gnuradio application that was written for gnuradio 3.6.x, but that version was never shipped in any official release of Debian (it went from gnuradio 3.5 in wheezy to gnuradio 3.7 in jessie).
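The Debian-side package-pinning being contrasted here is the apt preferences mechanism; a hypothetical /etc/apt/preferences.d/ fragment (suite name illustrative):

```
# Pin gnuradio (and only gnuradio) to the wheezy version
Package: gnuradio
Pin: release n=wheezy
Pin-Priority: 1001
```

Even at priority 1001 this selects a single version system-wide, which is exactly the limitation compared to Gentoo's slots.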

In Gentoo it's trivial to have a specific old version of libfoo (and all the old, terribly specific versions of its huge pile of dependencies) installed alongside whatever passes for the current version of libfoo for the rest of your applications which aren't stuck in the past.

In Debian I had to re-build gnuradio from the 3.6 source, with much tweaking of the debian/control and debian/rules files and much wading through Debian-specific patchsets intended for gnuradio 3.5 or 3.7 that don't apply to gnuradio 3.6. And the same again for all its dependencies. And then suffer the fact that all the rest of my applications are now forced to use gnuradio 3.6.

Comment: Re:Why do people care so much? (Score 1) 774

by csirac (#48099965) Attached to: Systemd Adding Its Own Console To Linux Systems

Other than being forced to type in 12 passphrases manually to decrypt each hard disk at every single goddamn boot, because custom keyscripts just "aren't the systemd way". Or spending hours figuring out why your units inexplicably never start on boot, without a single shred of clue or evidence or event in any logs whatsoever, despite LogLevel=debug, even though the unit clearly flashes by during boot, systemd-analyze clearly shows that it knows about the relationship with your unit, and the service starts normally when you log in and systemctl start it manually. Or that tweaking your daemon args now requires a systemctl daemon-reload as well as a restart.

Yes, apart from all that, and the time saved now that admins will never have to see another freaky, alien shell script ever again because init systems were the only thing which used them, apart from all that... I'm hoping like hell systemd will one day buy me something other than more downtime.

Comment: Re: Unfamiliar (Score 2) 370

by csirac (#47881169) Attached to: The State of ZFS On Linux
For the same reasons your package manager bothers with shasums on the software you install even though the several network layers responsible for delivering it already faithfully checksummed each little packet as it flew past: the filesystem is the earliest and only point which knows exactly what files are supposed to look like in their entirety. That ZFS/btrfs scrubs turn up errors on large pools with otherwise perfectly fine hardware means those block/packet-level validations are at too low a level to make assurances for the higher-level data structures using them.
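The same point in miniature: a whole-file checksum catches corruption that every lower layer happily passed along without an I/O error. A sketch using sha256sum (file contents are arbitrary):

```shell
set -e
tmp=$(mktemp -d)
printf 'important data' > "$tmp/file"
good=$(sha256sum "$tmp/file" | cut -d' ' -f1)
# Simulate silent bit rot: same length, one byte flipped, no I/O error.
printf 'important dat4' > "$tmp/file"
bad=$(sha256sum "$tmp/file" | cut -d' ' -f1)
if [ "$good" != "$bad" ]; then
    echo "end-to-end checksum caught the corruption"
fi
```

Block- or packet-level CRCs along the way would all have verified fine; only the layer that knows the whole file's expected hash can detect this.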

Comment: Re:No one cares enough to build a competitor. (Score 1) 47

by csirac (#47844451) Attached to: Should Docker Move To a Non-Profit Foundation?

Docker's transparent caching of RUN/ADD/etc Dockerfile steps has nothing to do with reusable containers. That "it takes less than a second [to] create a handful of [new] containers" is just as true for docker as it is for plain old LXC.

There are two sentences here but I'm not sure how they relate to each other, or the docker feature I'm discussing.

Comment: Re:No one cares enough to build a competitor. (Score 3, Interesting) 47

by csirac (#47843677) Attached to: Should Docker Move To a Non-Profit Foundation?

Before docker, as a (not necessarily web) developer I used vagrant to create reproducible environments and build artefacts from a very small set of files. The goal being: I should be able to git clone a very tiny repo tracking a few KiB of scripts and declarative config which, when run, pulls down dependencies, reproduces a build toolchain/environment, performs the build, and delivers substantially identical artefacts regardless of which machine I run it from. I should be able to look at an artefact in 2-3 years' time, look it up in our version history, and reproduce it easily.

Achieving this isn't so easy. Even if I had been using LXC all along, I still wouldn't have had the main thing I enjoy about Dockerfiles: cached build steps. I've been cornered by time pressures in the past where I couldn't afford to tweak everything nicely, so I've had to release build artefacts from a process which wasn't captured in the automation (i.e. I manually babysat the build to meet a requirement on time). This is because hour-long builds make for maybe 3-4 iterations per day, so you have one thread of work where you're hacking interactively while you wait to see if the automation was able to deliver the result you were at an hour or two ago. I still have this to an extent with Docker (adjust a build step and re-run, or step in interactively to explore what's needed), but because Dockerfile steps are cached these iterations are massively accelerated, and there have only been a handful of occasions where I've had to bypass the process.
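The cache behaviour described above falls out of how Docker layers a Dockerfile: each step's result is reused until the first changed line, and everything below that is rebuilt. A hypothetical Dockerfile ordered to exploit that (file names and the helper script are made up):

```dockerfile
FROM debian:jessie
# Cached top-down: this expensive step re-runs only if its line changes.
RUN apt-get update && apt-get install -y build-essential
# Copy only the dependency manifest first, so editing source code
# doesn't invalidate the dependency-install layer below.
COPY deps.txt install-deps.sh /src/
RUN /src/install-deps.sh
# Source changes invalidate the cache only from this point down.
COPY . /src
RUN make -C /src
```

With this ordering, a typical edit-rebuild iteration skips straight to the final COPY and make, which is where the hour-long build drops to minutes.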

I can't speak for using Docker to actually containerize running applications (that's not how I use it), but just this narrow usage of Docker has helped my productivity enormously.

Comment: Re:My opinion on the matter. (Score 1) 826

by csirac (#47753003) Attached to: Choose Your Side On the Linux Divide

There is *still* no alternative to keyscript in crypttab. Upgrading to systemd trashes a system that relies on smartcards or other hardware to obtain key material for mounting encrypted disks. I wouldn't be this upset, normally - you can imagine that this is just a normal teething problem - except I read through this thread where Lennart seems to doubt the very validity of the entire use-case... I had briefly contemplated seeing if I could contribute to this bug, but the insistence that we should all write C programs (unless you want your initrd to carry python or perl interpreters and all that baggage), for every possible permutation of every key delivery system devisable by admin-kind, made me give up and revert to sysvinit instead.

Comment: Re:Wear leveling (Score 1) 68

I believe Advantech will still happily sell you ISA backplanes. Around the time I put these things together, I had to reverse-engineer and fabricate some old I/O cards which had "unique" (read: incompatible with readily available cards) interrupt register mappings — also done with EAGLE, great software!

I should mention: the MS-DOS system has outlasted three replacement attempts (two windows-based applications were from the original vendor who sold the MS-DOS system). There's just something completely unbreakable about the old stuff.

Comment: Re:Wear leveling (Score 4, Informative) 68

Many industrial computers have CF-card slots for this very application. I put together a few MS-DOS systems using SanDisk CF cards around 8 years ago and they're still going strong, using a variant of one of these cards which has a CF slot built-in (so no need for a CF -> IDE adapter): PCA-6751

Comment: Re:It's not dead, it's evolving (Score 1) 76

by csirac (#46958891) Attached to: As Species Decline, So Do the Scientists Who Name Them

That's true. People scoff at the older taxonomic groupings from before we had molecular evidence, but I'm often surprised at how similar new phylogenies are to huge chunks of the old taxonomies. What's more, at least with plants, one molecular study can produce quite a different-looking evolutionary tree from another depending on which genes were used to compute it.

Which raises the question... what's the ground truth? Data from classical taxonomy is actually extremely valuable. It can help inform molecular studies. It can be used to feed consensus trees or indicate which genes might yield certain phenotypes.

There seem to be many who think that with enough CPU power and algorithms we can turn any old meaningless garbage string of GATC into something we can pretend is useful. It seems like a lossy way of thinking... you can do interesting work without names, that's true — but the reckless abandon and total lack of scientific discipline when using names would never be tolerated in the "harder" sciences.

I dare you to pick up ten different papers using species or group names... and find even just one that cites the name in a reproducible, scientifically useful way (i.e. cites the taxonomic publication which specifies what they mean when they use the name).

"The hottest places in Hell are reserved for those who, in times of moral crisis, preserved their neutrality." -- Dante