Comment Re:Have they fixed the need to manually rebalance? (Score 1) 91

BTRFS is so mature already, I never lost my data with it

Dude, nobody said BTRFS is mature. Did you read the part where I've had to manually rebalance several volumes on multiple occasions? I'm sorry that you interpreted this statement as a ringing endorsement of a mature filesystem - but users shouldn't have to do this kind of babysitting in a mature technology.

I *have* had BTRFS fill my logs with checksum failures on a couple of dying disks, and I was able to recover everything intact (the bulk of this data had shasums thanks to some deduplication I had been doing months earlier). ext4, on the other hand (by its very design, unless you count recent kernels where metadata may be checksummed), happily lets the disk (or whatever) take a shit all over your data without so much as the slightest hint that something might be wrong, until you go to open a file years later and discover it's zero bytes long, truncated, or full of garbage.

The data integrity features of the new file systems are nice only if you can assume them to be bug free.

No shit. But if your idea of data integrity is to start with something that doesn't even try, there just isn't any hope of that, is there?

Comment Have they fixed the need to manually rebalance? (Score 4, Informative) 91

I've been using btrfs on all my machines/laptops for more than 2 years now. I've never had corruption or lost data (btrfs has actually coped rather well with failing/dying disks in my experience), unlike ext4. COW, subvolumes and snapshots are nifty.

But too many times I've had the dreaded "no space left on device" (despite 100GB+ remaining) when you run out of metadata blocks. The fix is to run btrfs balance start /volume/path - I now have a weekly cron job on my non-SSD machines - but it's hugely inconvenient having your machine go down because you're expected to babysit the filesystem.
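
For what it's worth, the cron job is nothing fancy. A minimal sketch of the weekly workaround, assuming btrfs-progs with the usage balance filters (the path and thresholds here are illustrative, not my actual values):

    #!/bin/sh
    # /etc/cron.weekly/btrfs-balance (sketch - path/thresholds are examples)
    # A plain "btrfs balance start /volume/path" also works, but the usage
    # filters only rewrite mostly-empty chunks, which is much cheaper and is
    # usually enough to let new metadata block groups be allocated.
    btrfs balance start -dusage=50 -musage=50 /volume/path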

Heavy Docker usage in recent months has already triggered this condition twice this year.

I'll continue using btrfs because I've experienced silent corruption with ext4 before (which I believe btrfs would have protected me against), and I like snapshots and the ability to test my firmware images cheaply with cp --reflink pristine.img test.img.

Comment Re:I'd replace Java with Perl, for one. (Score 2) 247

It's a shame that perl's taint mode is actively discouraged for production use - at least by the #perl lurkers. I've encountered a few almost-show-stopper unicode regex regressions under taint mode that the core perl devs just don't seem to care about - apparently not enough of the perl core devs care about taint mode.

Comment Re:"Classic?" Or Just Uniform (Score 1) 503

Nothing the GP said was incorrect - perhaps you've misread it. I thought the GP was referring to the FUD/backlash against KDE, which lasted many, many years longer than the actual licensing dilemma itself (less than a year?).

So yes, politics/belief/FUD drove the creation of Gnome, and that mis-manoeuvre by the Qt/KDE project - despite being quickly rectified - had repercussions that lasted much of a decade, despite the indifference of pragmatic users such as yourself.

Comment Re:Don't know about you guys... (Score 2) 503

I've been a Gnome user since around 2001; to say things were pretty rough back then is an understatement... In 2012 I switched to KDE. I finally had a machine with 16GB of RAM to run it on (FWIW, KDE seems slightly better at running on limited hardware now, but still...). Its defaults made me angry, though (especially Konsole - seriously, no keyboard shortcuts to jump to a specific tab? Tabs at the bottom [the opposite edge to the menus and titlebar]?), but I can actually repair it a lot quicker than I can fix Unity/Gnome.

It's been this long and they still can't make KDE remember the orientation/resolution/relative position of any monitor that isn't the primary one - if I'm going to suffer through that sort of thing, I might as well give i3-wm a proper go. I was able to use it productively for a whole day recently, which is longer than awesome or xmonad ever lasted for me.

Comment Re: Abolish software patents (Score 1) 204

A *lot* of funky SCADA software. In 2012 I built another MS-DOS 6.22-based data acquisition server (which is still in use, along with the others) using incredibly overpriced (albeit reliable) bits from Advantech, 16-bit ISA cards and all. The application's last update was in 1996 - not quite 20 years ago, but getting there. Slightly less ancient data acquisition software runs in parallel, with nicer-looking reports and modern export formats, but it isn't as reliable. The DOS machine, as clunky and ugly as it is, just absolutely refuses to ever fail. And I can't say that disaster recovery in an environment without any internet connectivity (drivers? activation? updates? etc.) is any worse with MS-DOS: transplanting the Windows software from one installation to the next is actually quite traumatic compared to "let's just dd this image to a new CF card and boot from that"...
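
That whole "recovery plan" is basically two commands. A rough sketch, assuming the boot CF card shows up as /dev/sdX and a golden image is kept somewhere safe (device and file names here are illustrative):

    # Take a golden image of the known-good boot CF card once:
    dd if=/dev/sdX of=dos-daq-server.img bs=1M

    # Disaster recovery = write it back onto a fresh card and boot:
    dd if=dos-daq-server.img of=/dev/sdX bs=1M && sync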

Also, I did some work last year on an impressively large website with many millions of hits per month whose codebase began circa 1997. I should tell them perl 5.19 has dropped CGI.pm from the core distribution, heh.
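
Not that it's a big deal when the time comes. A sketch, assuming cpanm is on the box (CGI.pm just moves out to CPAN, it doesn't disappear):

    # Does the installed perl still bundle CGI.pm?
    perl -MCGI -e 'print "CGI.pm $CGI::VERSION\n"' \
        || cpanm CGI   # if not, pull it from CPAN and carry on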

Comment Re:Who the fuck wants to use GNU trash? (Score 1) 166

What a strange question. Octave has quite an enormous userbase, perhaps not as big as R but with a heritage going back to the 1980s.

The real question is what you can't do in Octave that you'd do in Matlab: it's been quite some years since I used either, but I did have to port my Matlab code to work around different or missing toolboxes so that it would run on Octave. The other big problem is the complete lack of integration with data/signal acquisition hardware that only has drivers for Matlab (up to a crusty old version you've probably just retired)...

Comment Re:this has me wondering (Score 1) 151

Now personally, I happen to feel that maintaining those people's lives is a net loss for the human race, because they'll never contribute anything of import. These are not capable, creative people. These are chair-warming wal-mart shoppers.

How on earth did you reach this world view? Some of the most brilliant people I know are less than fully functioning human beings... I'm reminded of the famous mathematician Paul Erdos, a person whose achievements are truly remarkable but who famously had to ask one of his hosts to close a window for him... apparently, in the middle of one rainy night, he couldn't figure out how to close it himself. If he's a chair-warming waste of space, who isn't?

Comment Re:Why? (Score 1) 289

I always do a little research before buying my next computer, to see if there are any Linux compatibility issues. My last few laptops have been Lenovos; they seem to have pretty vanilla Intel-centric hardware that works well for me with Debian.

On my recent X230 install I stumbled a bit, as it was my first install on a UEFI boot machine, and KDE never remembers that I want the touchpad disabled at all times (it also never remembers how to configure my external display when I dock at my desk) - but meh. Other things which used to be a monumental pain in the arse - e.g. bluetooth tethering, printing, suspend/resume - "just work" now, so I'm probably a little more forgiving than the average Windows user of any rough edges (multi-monitor support in Windows is definitely superior, especially if you're spanning across different video adapters).
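
My workaround for the touchpad, for what it's worth, is to just kill it again at login rather than fight the KDE settings. A rough sketch assuming an X session with xinput, where the device name is whatever your laptop actually reports:

    # Find the touchpad's name (varies by model), then disable it at login:
    xinput list | grep -i touchpad
    xinput disable "SynPS/2 Synaptics TouchPad"   # example name - substitute yours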

Comment Re: This isn't metadata. It's just data. (Score 1) 60

Metadata refers to side-channel data.

Don't make that assumption. As someone who works on data acquisition/management/processing (not telco) and gets trapped in hours-long discussions on data standards - especially for derived data assets, where the provenance/curation/modification history (not to mention the inputs, processing parameters, process versions/systems etc.) are just as important as the assets themselves - I can tell you that what is "meta" (or meta-meta, or...) and what isn't is a huge area of ambiguity. The word "metadata" becomes utterly meaningless; I've been in meetings which informally ban it, lest we descend into meta-meta-meta-meta-data (no exaggeration) and people lose their bearings and frame of reference and get confused about what "level" of meta-ness the conversation has collapsed into.

There is a good argument that the content of the call is only an incomplete record of the call. Without knowing the caller/callee/duration/date/time etc. we cannot put a voice recording into context, so the recording becomes useless and perhaps even unsearchable. If that's the case, then this "data" is of "first-order" importance and cannot be omitted by anyone - especially not the telcos, who want to generate billing.

What is "meta" and what isn't, is all in the eye of the beholder. Meaningful documentation of protocols and information standards need to avoid assuming any common sense notion of the word.

I wouldn't be surprised if what telcos consider the "metadata" of a call is far more boring than anybody here cares about: technical stuff - SS7 attributes of the call, routing/exchanges/equipment involved, hand-overs between different mobile phone cells/towers, signal quality/encoding/protocol modes, measurements of bit error rate/latency/jitter, etc.

Comment Re:Is the science repeatable? (Score 1) 69

And putting huge amount of computation and DNA from the same animal through it, and we're even less stuffed. Seems to me to be a pretty damn useful technique, overall, even if it's only "statistically" correct.

Which technique are we discussing? Next-gen contig assembly/alignment is quite mature, as is the understanding of its limitations. Older, slower, more expensive tech is still in use for some lesser-studied critters which, for want of a better word, aren't entirely "validated" on the next-gen stuff, and some experiments are just plain easier to do the old way.

If we're talking about the tech in TFA, yes it certainly describes the most delicate way one could conceive of for treating precious single strands of ancient DNA molecules - a kind of almost-in-situ imaging (versus the much more traumatic chemical techniques I'm more familiar with). Hence the 10-20x-ish increase in reach back in time - they've substantially lowered the minimum DNA quality/quantity required to get interesting sequence data out.

I guess I just wanted to convey something along the lines of "garbage in, garbage out". If you've got garbage in, no amount of CPU power is going to fix that. Denying this, as you know, is like yelling "enhance!" at bad images/videos in sci-fi shows or crime movies. Fitting data to models might yield some interesting stuff, or it might just yield whatever you want it to yield. I've seen geologists play tricks on each other with seismic interpretation, tuning filters to create convincing structures out of white noise!

And, as you said, even if they can only get good data on a few genes, that's still useful to evolutionary biologists - they can talk all day about the rate of change in those genes and argue about calibrating genetic clocks against fossil records (unless that's just a plant biologist thing...).

Comment Re:Is the science repeatable? (Score 1) 69

I only mention the contamination issue because, at one of the seminars run by the Australian Centre for Ancient DNA (ACAD) in Adelaide, it was highlighted as a significant problem early on in their research - one which resulted in detailed and rigorous sampling and processing protocols before they could get any worthwhile results at all. I seem to recall that early ancient-DNA efforts had several false successes which later turned out to be contamination - it's non-trivial. Even the act of washing an old bone in water with bare hands overwhelms whatever tiny amount of useful, amplifiable ancient DNA is there.

The ACAD lab looked more like a semiconductor cleanroom compared to the more traditional labs near where I was working (plant DNA).
