Comment Re:Fukushima too (Score 2) 444

Well, it sure as hell is crazy unsafe when they *are* lazy bastards, and it sure is a hell of a lot safer when they are painstaking. Look at the US Navy's nuclear submarines and aircraft carriers: a perfect safety record with respect to their nuclear power plants. Hell, look at the US nuclear electricity industry, even though I wouldn't put it close to being good enough. Zero uncontained meltdowns. Zero hydrogen explosions.

Comment Re:production ready? (Score 1) 370

Yes, on Linux you can specify drives to ZFS using any of the mechanisms in /dev/disk: by-id, by-label, by-path and by-uuid. As well as /dev/sd* of course. This is one aspect where Linux is way ahead of FreeBSD.
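
For example, a minimal sketch (pool name and device IDs below are hypothetical) of building a mirror from stable /dev/disk/by-id paths, which survive the reboot-to-reboot reshuffling that /dev/sdX names are prone to:

    # See what stable names your drives have:
    ls -l /dev/disk/by-id/
    # Create a mirrored pool using those names:
    zpool create tank mirror \
        /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-XXXXXXX1 \
        /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-XXXXXXX2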

Comment Re:Little Baby Linux (Score 1) 370

I really hope you're aware that the CDDL and the BSD License(s) are not the same thing. ZFS is CDDL.

But at least they are plainly *compatible*, aren't they? I mean, since FreeBSD has ZFS in the base system.

And CDDL is *incompatible* with GPL, isn't it? I mean, since no distro includes it and you have to graft the two together from separate sources.

P.S. - I actually USE ZFS on Linux (as well as on FreeBSD), and I love it.

Comment Re:Unfamiliar (Score 1) 370

Sounds like a fairy tale to me. Yes, one can conceive of scenarios where this MIGHT happen, but in general no. There are other scenarios where a RAM error leads to writing wrong data which will in fact be FIXED by ZFS checksums. And still other scenarios where a RAM error has absolutely no effect on ZFS checksumming. The third group of scenarios is probably the most common, followed by the second.

Generally, with non-ECC RAM, either you won't see a single bit error in years of runtime, or else the first onset of errors will be severe enough to crash the system, and the user will run memtest and fix it before any significant damage is done.

Bad advice.

Comment Re:Unfamiliar (Score 1) 370

Your reference is a lot of utter bullcrap mixed in with a few posters who have a clue. The consequences of RAM errors are EXACTLY the same with ZFS as with any other filesystem: you can corrupt your data, or even metadata, either coming from or going to storage. So what? Unreliable SATA cables or bad drive electronics can do EXACTLY the same thing. Even ECC RAM has a finite undetected bit-error rate.

Obviously ECC RAM is a Good Idea when you have Important Data, no matter what the filesystem is, but there is absolutely nothing about ZFS that magically makes higher demands on RAM.
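
What ZFS does give you, whatever the source of the corruption (RAM, cable, or drive electronics), is a way to detect it after the fact. A minimal sketch, assuming a pool named tank:

    # Walk every block in the pool and verify its checksum:
    zpool scrub tank
    # List any checksum errors found, per device, including affected files:
    zpool status -v tank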

Comment Re:Unfamiliar (Score 1) 370

No problem, if you're talking about a reasonable-size ZFS store. Not 64 SAS drives, but say 4-12 SATA. Forget dedupe, which is useless in normal settings anyway, and 16 GB would do it. At or below the lower end of 4-12 drives, you could probably get away half decently with 8 GB, depending on how many GB you gobble up in user processes (damn that Firefox!).

I'm running 12 SATA 3 TB drives in a 16 GB server, where user processes eat essentially no RAM since there is no local user and not even a GUI running. It works just dandy for my usage pattern - the great preponderance of the files are 2-30 GB in size, not a whole lot of tiny files - and there is only one user, me.

If you have less than 8 GB, you have to be kidding me.
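
If RAM is tight, the ARC (the ZFS cache) can also be capped so it leaves headroom for user processes. A hedged sketch for ZFS on Linux; the 8 GiB figure is just an example value, in bytes:

    # Persistent across reboots:
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
    # Or apply to the running module immediately:
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max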

Comment Re:Unfamiliar (Score 1) 370

While you cannot add new drives to a vdev ...

Yes you can, if the vdev is a concatenation of drives or other vdevs. And yes you can (in the form of additional copies) if it is a mirror.

And, for example, if you have a 6-drive raidz2, you can change each component - on the fly - from a single drive to a logical concatenation of 2-3 drives. Yes, data safety will be reduced, because the worst case for catastrophic data loss is now the simultaneous loss of as few as 3 drives out of 12 or 18 (one in each of three components) instead of 3 out of 6, but you've still got double parity.
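
The mirror case is the straightforward one. A sketch of adding a copy, with hypothetical pool and device names; a 2-way mirror becomes a 3-way mirror with no downtime:

    # Attach a new device alongside an existing mirror member:
    zpool attach tank /dev/disk/by-id/ata-OLD_MEMBER /dev/disk/by-id/ata-NEW_MEMBER
    # Watch the new copy resilver:
    zpool status tank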

Comment Re:Unfamiliar (Score 1) 370

You can look at this as nitpicking and ZFS ass-covering if you want, but it's meant to be constructive.

Twelve good-size drives is too many for a single-level raidz2 (or RAID6, for that matter). Any guru will tell you that. The zfs design would be far better with a zpool built on two raidz2 vdevs of 6 drives each. Six drives is the sweet spot for double parity. OK, with 2 TB drives that's now 16 TB instead of 20. A tradeoff I would make (and did in fact make, with twelve 3 TB drives) in a heartbeat.

Now when you want to grow your pool you can logically concatenate another vdev of 6 more drives, as sketched below. That won't involve any data rebuilding at all, the way growing a RAID array does. With ZFS the operation is essentially instantaneous. OK, that's an 8 TB increment instead of 4, but a 50% increment makes a whole lot more sense than a lousy 20% anyway.
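
A sketch of that layout and the later growth step, assuming 2 TB drives and hypothetical pool and device names:

    # Two 6-drive raidz2 vdevs, striped at the pool level:
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11
    # Later: grow the pool with a third raidz2 vdev; no rebuild, no resilver:
    zpool add tank raidz2 da12 da13 da14 da15 da16 da17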

Comment Re: License mismatch (Score 1) 370

Does btrfs' idea of RAID include the enormously improved features of raidz[23], or is it clearly on the roadmap?
Does btrfs support nested filesets, or is it clearly on the roadmap?

They're questions, not a challenge.

Comment Re: Magic (Score 1) 370

You're exactly right, as far as I know. You would have to build a new raidz with larger drives or more drives, and, with both old and new pools online, zfs send -> zfs receive all the data from old to new; then you could remove the old raidz and throw away the ridiculously tiny obsolete drives.
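
A sketch of that migration, with hypothetical pool and snapshot names:

    # Snapshot everything on the old pool, recursively:
    zfs snapshot -r oldtank@migrate
    # Replicate the whole hierarchy, snapshots included, to the new pool:
    zfs send -R oldtank@migrate | zfs receive -F newtank
    # After verifying the copy, retire the old pool:
    zpool destroy oldtank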

BTW, a bit of terminology. The zpool is the top-level (root) ZFS structure for *any* use of ZFS, even one which only uses a single drive (a degenerate case - which actually "works" just fine and dandy, and is a great improvement over ext4 because of numerous ZFS features such as snapshots and block checksumming). You can have any number of independent zpools. Within a zpool you have vdev components.

From the bottom up, a vdev can be a single drive or partition, or it can be built from sub-vdevs in the form of a logical concatenation of vdevs (like RAID0), a mirror (sort of like RAID1 but better), a raidz (single parity; sort of like RAID5 but considerably better), a raidz2 (double parity; sort of like RAID6 but considerably better), or a raidz3 (triple parity, beyond RAID6).

A zpool is then either a single vdev (possibly nothing but a single drive or partition), or multiple vdevs combined in exactly the same way.

Thus you can have a tree of arbitrary complexity, for example a mirror of mirrors of mirrors of mirrors of ...
Or a raidz of raidzs. Or a mirror of raidz2s. Or a raidz3 of mirrors. Or, you name it.

Finally, within a zpool you can create, in a completely ad hoc manner, any combination of zfs filesets (recursively, if you wish). This is orthogonal to the structure of vdevs and sub-vdevs. Each such fileset can independently grow to any size, limited only by the size of the zpool it lies within.
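
A sketch of that ad hoc fileset creation, with hypothetical pool and fileset names:

    # Nested filesets, created at will, each able to carry its own properties:
    zfs create tank/home
    zfs create tank/home/alice
    zfs create -o compression=on tank/backups
    # All of them draw from the pool's common free space:
    zfs list -r tank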

And the stark reality: once you create a raidz[23], the number of components and the utilized size of the components are forever set in stone for that particular raidz[23]. Mirrors, on the other hand, can have extra elements added (from 2-way to 3-way to arbitrarily many copies). And, if you know how, even the sizes of the individual elements can be grown.

Comment Re:Magic (Score 1) 370

I ran ZFS on FreeBSD for a few years but gave up on it. At one time, I did a cvsup (like an apt-get update, sort of, on BSD) and it updated the ZFS code and the on-disk format, but you could not revert it!

That must have been a while ago, because cvsup has been obsolete for years. Furthermore, cvsup never touched system code, only userland ports. The way you update system code is freebsd-update. I believe there was a lot more to your disaster than you are saying. And I bet ZFS on Linux either didn't exist or was very primitive and immature at that time. Sort of like btrfs is now (btrfs is the only native Linux filesystem that is even remotely comparable to ZFS in some, but far from all, ways).

That said, it's true that updating FreeBSD should never be a heedless operation. You need a lot more insight and attention to detail to work with FreeBSD than with Linux. That's the tradeoff for getting a true Unix system.

BTW, most claims that ZFS is extremely RAM-hungry stem from users who don't know what they are doing. For example, deduplication should virtually never be enabled in normal use. Dedupe can eat RAM like a blue whale. But without dedupe, any realistic system with 16 GB+ is fine. Skimping on RAM is just silly, anyway.
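
Dedup is a per-fileset property and off by default. A sketch of checking for it, with a hypothetical pool name:

    # Show the dedup property across the pool's filesets:
    zfs get -r dedup tank
    # If it was ever enabled, the dedup-table statistics show the RAM cost:
    zpool status -D tank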

Comment Re:It's a bad sign (Score 1) 223

When you are about to have an economic crash, groups like Occupy and the Tea Party are an inevitability, whatever their names or political leanings may be. When you are about to have an economic crash, the powers that be prepare to suppress revolt, and domestic spying is job one. Militarizing the police is job two.

During the Great Depression, Fascism was where economically desperate people turned, as they are doing in Greece today.
