ZFS Replication To the Cloud Is Finally Here and It's Fast (arstechnica.com) 150
New submitter kozubik writes: Jim Salter at Ars Technica provides a detailed, technical rundown of ZFS send and receive, and compares it to traditional remote syncing and backup tools such as rsync. He writes: "In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net. Who cares, right? As the service itself states, 'If you're not sure what this means, our product is Not For You.' ... after 15 years of daily use, I knew exactly what rsync's weaknesses were, and I targeted them ruthlessly."
rsync and zfs do different things (Score:5, Informative)
rsync synchronises files. ZFS synchronises a file system. Of course it is better to work that way: you can transfer just the changed components of a file, and moving a file just changes a pointer, so you send the pointer. That sort of thing.
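A minimal sketch of the two approaches, with made-up pool, dataset, and host names (assume the backup pool already exists on the remote side):

# rsync -av /tank/data/ backup@remote:/backup/data/
# zfs snapshot tank/data@now
# zfs send tank/data@now | ssh backup@remote zfs receive backup/data

The first command compares and copies files; the second two transfer a point-in-time image of the whole dataset.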
Opens up another major possibility (Score:2)
ZFS doesn't think in terms of files. It thinks in terms of blocks, and in a redundant z-volume (similar to a RAID array) it distributes those blocks over multiple virtual devices (vdevs) - you can think of them as disks, but they don't have to be. These vdevs can be a disk, a partition, or even a plain file.
Re: (Score:2)
Look into how HDFS works [apache.org] it's the filesystem underlying Hadoop.
Re: (Score:2)
It's more than that - ZFS is basically taking fast snapshots and syncing just the deltas between the latest snapshot and the previous snapshot, which are blocks. Files and pointers don't matter - it's syncing individual changed blocks. You change one letter in a file, it's not syncing the whole file - just the changed block. It's substantially more efficient.
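In practice that workflow is just two snapshots and an incremental send (names here are hypothetical):

# zfs snapshot tank/data@snap1
(... blocks change ...)
# zfs snapshot tank/data@snap2
# zfs send -i tank/data@snap1 tank/data@snap2 | ssh remote zfs receive tank/data

Only the blocks that differ between @snap1 and @snap2 cross the wire.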
Exactly, and it's why ZFS' transfer speed is so much faster and does not go up with the size of the file (as rsync's does), as shown in the article.
Re: rsync and zfs do different things (Score:1)
Re: (Score:1)
rsync does the same thing (block-level transfers). ZFS wins this race because it is the filesystem and keeps track of which blocks are changing. rsync has to read every block, compute a checksum, and communicate that checksum to determine which block(s) need to be transferred. That's an expensive process, which is why rsync defaults to "whole-file" on local storage. (You should disable that on an SSD.)
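For reference, the switches that control this behaviour (paths are placeholders):

# rsync -av --no-whole-file /src/ /dst/ (force the delta algorithm even locally)
# rsync -av --whole-file /src/ remote:/dst/ (skip the checksum dance and send files whole)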
VM Replication (Score:4, Interesting)
I was a little unexcited by (although interested in) the article, even by the general speedups until I got to the part about VM replication. This really makes an enormous difference.
ZFS licensing has kept this as a grey area for me, so I've largely kept away from deployment (save for an emergency FreeNAS box I needed in a hurry), but I'd clearly benefit from looking here again. Thanks for the reminder.
Oh, I also appreciate the rsync.net advertisement. Good guys, good service ;-)
Re: (Score:3)
The article did feel like an advertisement.
They offer a VM with lots of disk space; is that really that special?
I know of at least one that offers something similar:
https://www.vultr.com/pricing/... [vultr.com]
I guess not at the same scale and with a bandwidth limit.
What I think is kind of funny is how people are surprised that ZFS works well for VM-images.
rsync is meant/optimized for transferring files, not blocks.
ZFS is meant for transferring filesystem blocks, and VM images are blocks too.
So ZFS works better than rsync.
Re: (Score:3)
Re: (Score:3)
but what Linux calls "containers" are crappy attempts to containerize.
Not sure what you mean. Jails have been around for a long time, but LXC/LXD containers have almost identical functionality.
container templates...check
filesystem snapshot integration (ZFS, btrfs) with cloning operations...check
resource limits...check
unprivileged containers...check
network isolation...more flexible under LXC than Jails, in my opinion
bind mounts in containers...check
nice management utilities...check (see the example commands below)
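For anyone who hasn't tried the LXD side, the management tooling looks roughly like this (container and image names are hypothetical):

# lxc launch ubuntu:trusty web1
# lxc snapshot web1 before-upgrade
# lxc config set web1 limits.memory 2GB
# lxc exec web1 -- /bin/sh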
Re: (Score:2)
Only difference I can see really is that LXC doesn't support nested containers...
It most certainly does. Linux can nest user namespaces to almost any depth.
Re: (Score:2)
Security can't be bolted on after the fact, it must be baked into the design
Re: (Score:2)
The difference is BSD Jails are entirely separate environments with their own unshared kernel datastructures, and the jail communicates with the host via an API. Linux namespaces are just metadata added to shared environments.
I'm sorry, but this notion is completely wrong. A BSD Jail is a forked process (the "jail process"), which calls the "jail" kernel system call and then executes a chroot. The jail syscall serves to attach the "prison" data structure to the "proc" data structure of the jail process, allowing the kernel to identify the process as "jailed" and treat it accordingly. The isolation of the environments is dependent entirely on the kernel recognizing that the process is jailed and putting the appropriate restrictions in place.
Re: (Score:2)
This is how the FreeBSD kernel devs describe BSD Jails. Each jail gets its own kernel network stack, kernel memory allocator, and almost every other kernel datastructure. They said this is nearly identical to paravirtualization. Breaking out of a jail requires a kernel flaw in both a system call and the paravirtualization layer.
Think KVM+QEMU, with most of the benefit and much less overhead.
Re: (Score:2)
This is how the FreeBSD kernel devs describe BSD Jails. Each jail gets its own kernel network stack, kernel memory allocator, and almost every other kernel datastructure.
What you are describing is VPS (Virtual Private System), not Jails. VPS is the successor to Jails, written to address some of the shortcomings of Jails and make them more useful in situations where you want true virtual environments, rather than just the extra security that Jails has to offer. Incidentally, the mechanisms used to implement VPS in FreeBSD are nearly identical to the mechanisms for implementing containers on linux. Here is the relevant description from the whitepaper (http://2010.eurobsdcon.o
Re: (Score:2)
Re: (Score:2)
You're able to run as-root / Set-UID binaries within them? Nope. LXC emulates this by mapping UID 0 in the container to UID x on the host via namespaces.
No, that is not correct. Root is root in an lxc container, subject to some limitations (ex: making device entries), just like it is with BSD Jails. The mapping that you are referring to is a security mitigation feature, should an attacker manage to break out of the container. If a root user within the container breaks out of the chroot (containers are essentially chroot with cgroups added in), but is still within the container process (iow, no buffer overflow or similar vulnerability), they will be subject to the container's restrictions.
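For the curious, that mapping is declared per container; with LXC 2.1+ configuration syntax it looks something like this (the ranges here are illustrative, not canonical):

lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536

together with matching entries in /etc/subuid and /etc/subgid on the host, e.g. "youruser:100000:65536". UID 0 inside the container is then UID 100000 outside.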
Re: (Score:2)
From some of the benchmarks in the article it didn't seem like rsync had any strength over syncoid, other than his tool requiring ZFS on both ends.
Re: (Score:2)
There is no grey area with respect to the licensing. It's CDDL, a free software licence. It's 100% Free.
It might be incompatible with the GPL, but that's a non-issue. The userland tools are fine under this licence. The kernel modules are fine under this licence. Now, it means that the kernel modules aren't going to appear in a kernel release anytime soon, but that in no way makes for any legal problems in using them as loadable modules, today. It works fine from a technical point of view, and it's also fine from a legal one.
ZFS + Linus is not a GPL violation (Score:2)
Don't let the licensing FUD scare you. Linus has publicly stated that licensing in a case that's a very near equivalent to ZFS' licensing is fine.
The anticipated problem with the license has always been on the Linux side. The license ZFS is released under doesn't in any way prohibit the ZFS code from being used in other places with other licenses (like the *BSDs). There has never been a concern that using ZFS with Linux violates the ZFS license (and thus could bring Oracle's well-fed lawyers down upon you).
Charming (Score:2, Insightful)
Who cares, right? As the service itself states, "If you're not sure what this means, our product is Not For You."
Ah, there's that welcoming open-source community spirit.
Re:Charming (Score:4, Informative)
there are things in this world that simply aren't meant for participation award winners. so go get offended somewhere else.
if somebody doesn't know what ZFS replication is, their product clearly isn't meant for them. why bother with explanation to a visitor that has no use for the product/service?
the attitude of these ZFS people is still quite welcoming compared to some connectivity providers i've dealt with. e.g. bogons.net will just politely tell you to f*ck off if you don't fully understand what you're purchasing from them (dwdm/cwdm rings).
Re: (Score:3, Informative)
their howtos hold newbies' hands sufficiently. they simply don't provide a free "Oracle ZFS Storage Appliance Administration course", which is what some people seem to expect. it seems i am discussing this with people who haven't even visited their website, so i'll stop here.
It's a subset not usable outside the set (Score:2)
Re: (Score:2)
Snapshotting has been in ZFS from (practically?) the beginning.
This article is about a cloud provider specifically providing a workable service to act as a ZFS snapshot receiver, which before required you to do some serious customization on a general-purpose compute environment like Amazon EC2.
At the prices that rsync.net charges for what it is, this is a pretty compelling off-site solution for my media storage, as it's already on a ZFS pool via FreeNAS.
The filesystem so fast... (Score:1)
Re: (Score:1)
That was ReiserFS, not ZFS.
Re: (Score:1)
Only after the Russian mail-order bride steals the money from your open source "wealth" to fund her new boyfriend's BDSM hobbies. She actually sounded a lot like my ex, the one with the website on breast feeding with nipple rings.
And no, I'm not making *any* of this up.
Rsync could have done this too! (Score:5, Informative)
Reading this article, it seems that this "ZFS replication" is very similar to rsync, with one straightforward addition:
Rsync works on an individual file level. It knows how to synchronize each modified file separately, and does this very efficiently. But if a file was renamed, without any further changes, it doesn't notice this fact, and instead notices the new file and sends it in its entirety. "ZFS replication", on the other hand, works on the filesystem level, so it knows about renamed files and can send just the "rename" event instead of the entire content of the file.
So if rsync ran through all the files to try to recognize renamed files (e.g., by file sizes and dates, confirming with a hash), it could basically do the same thing. This wouldn't catch a file that was renamed *and also* modified, but that is rarer than simple moves of files and directories. The benefit would be that this would work on *any* filesystem, not just ZFS. Since 99.9% of the users out there do not use ZFS, it makes sense to have this feature in rsync, not ZFS.
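rsync does ship a partial version of this idea in its --fuzzy option, which hunts for a similar file in the destination directory to use as a delta basis. Note that it only searches within the same directory, so it helps with in-place renames rather than moves, and deletions have to be deferred so the old copy is still around to match against. A hedged sketch (paths are placeholders):

# rsync -av --fuzzy --delete-after /src/ remote:/dst/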
Re: (Score:3)
I was wondering what this offers over a (theoretical?) inotify+rsync app.
In the comments at the linked-to Ars article, Jim discusses just this approach.
Basically, and from memory, he determined that it would just be too much work to re-implement something that already works solidly (ZFS) and comes with a huge amount of other features out of the box.
Re:Rsync could have done this too! (Score:5, Insightful)
Re: (Score:2)
The crucial difference is that ZFS send is unidirectional and as such is not affected by link latency. rsync needs to go back and forth, comparing notes with the other end all the time.
But this is *not* what the article appears to be measuring. He measured that the time to synchronize changes was nearly identical in rsync and "ZFS replication" - except when it comes to renames.
Re: (Score:3)
Re: (Score:2)
In addition, when it comes to VM hosting in the filesystem, ZFS deduplication can offer a significant space savings by deduping all the common files in the VM images (operating system files).
If you are hosting Windows VMs, this effectively nullifies many gigabytes of storage bloat. This is, of course, a feature of ZFS, and has nothing to do with snapshotting other than the fact that your snapshots will be smaller.
Re: (Score:2)
deduplication takes an insane amount of RAM and is really only useful for static, rarely written datasets; it's strongly recommended against for VM images.
OTOH enabling lz4 compression is recommended - cpu/ram usage is minimal and the compression ratios can be quite impressive, plus it can actually improve disk i/o as less data is read/written from disk. I have many VMs with compression enabled; compression usually reduces the image by about 30%.
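Enabling it and checking what you got is two commands (the dataset name is made up):

# zfs set compression=lz4 tank/vm-images
# zfs get compressratio tank/vm-images

Only newly written blocks are compressed, so the ratio climbs as data is rewritten.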
Re: (Score:2)
Depending on what your setup is and what the requirements are, it's fully feasible to have a 'storage server' where all its RAM is handed over to ZFS for caching and dedup, and you export via NFS to your VM hosting systems on 10GbE. It adds a touch of latency, but if you can host a hundred machines that don't require super low latency and save 90% of the disk space by only having 1 copy of your server OS (for the most part), then you're probably doing better.
It's a viable config depending on what the needs are.
Re: (Score:2)
Renames and changes to large files (VM images were the author's example).
Re: (Score:3)
Yet this is what the article says. Does he really have to measure read time to the millisecond instead of providing an estimate? How fast can your disk system read off 2TB of information, anyway?
"Virtualization keeps getting more and more prevalent, and VMs mean gigantic single files. rsync has a lot of trouble
Re: (Score:2)
Not quite - zfs needs to contact the destination zfs fs to compare with the last snapshot, but that is a very quick process. Once done, zfs already knows what blocks have changed since the last snapshot, whereas rsync has to scan the contents of each file at *both* ends, which is where all the time goes.
Re: (Score:2)
Not quite - zfs needs to contact the destination zfs fs to compare with the last snapshot
Ehm, no, sorry. No communication with the destination machine is required while generating an incremental send stream. How can I claim this? Well, besides being quite intimate with the ZFS source base (and I can point you to the relevant source files if you so desire), just a quick read through the zfs(1M) manpage will turn up this example:
# zfs send pool/fs@a | ssh host zfs receive poolB/received/fs@a
As you are no doubt aware, pipes are by definition unidirectional. There is no way the zfs receive can talk back to the zfs send.
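You can make the point even more starkly by cutting ssh out entirely and sending the stream to a file - the destination doesn't have to exist, or even be reachable, while the stream is generated (names are illustrative):

# zfs send -i tank/fs@a tank/fs@b > /backup/fs-a-to-b.zfs
(... later, on whatever machine ends up with the file ...)
# zfs receive poolB/fs < /backup/fs-a-to-b.zfs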
Re: (Score:2)
Re: (Score:1)
Not exactly.
rsync will always have to go through the files and check. Trying to identify stuff like renames would obviously make a difference, but it's only really going to have any sizeable impact when you happen to have lots of renames but few actual data changes, so it's probably not even worth the effort of implementing.
ZFS send/recv works at a very low level, using the fundamental infrastructure in ZFS that makes snapshots work. When you send an incremental ZFS snapshot it doesn't have to check anything.
Re: (Score:3)
Not exactly.
rsync will always have to go through the files and check. Trying to identify stuff like renames would obviously make a difference, but it's only really going to have any sizeable impact when you happen to have lots of renames but few actual data changes, so it's probably not even worth the effort of implementing.
The rename issue is actually *very* important. You're not likely to have a lot of independent renames, but it's very likely that you rename one directory containing a lot of files - and at that point rsync will send the entire contents of that directory again. I have actually found myself in the past stopping myself from renaming a directory, just because I knew it would incur a huge slowdown the next time I did a backup (using rsync).
Re: (Score:3)
Re: (Score:3)
So if rsync ran through all the files to try to recognize renamed files (e.g., by file sizes and dates, confirming with a hash), it could basically do the same thing.
As a sibling comment points out, rsync does have a mode which handles this. As they don't point out, it is horrendously costly. Making this the default would be a pure idiot move. ZFS has metadata that permits detecting these sorts of files, so it is possible to do it cheaply with ZFS.
What is really wanted IMO is for rsync to detect this stuff and use it when ZFS is present.
Re: (Score:2)
ZFS has metadata that permits detecting these sorts of files
Side note for your entertainment in case it interests you, the way ZFS actually handles the rename case has nothing to do with trying to follow file name changes. In fact, in order to handle a rename, we don't need to look at the file being renamed at all. The trick is in the fact that directories are files too (albeit special ones) with a defined hash-table structure. ZFS send simply picks up the changes to the respective directories as if they were regular files and transfers those. The changed blocks thus already contain the rename.
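That machinery also surfaces in zfs diff, which reports renames directly from the directory deltas (output abridged; the names are made up):

# zfs diff tank/fs@snap1 tank/fs@snap2
R /tank/fs/reports/2015 -> /tank/fs/reports/archive-2015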
Re: (Score:2)
Side note for your entertainment in case it interests you
It does
The trick is in the fact that directories are files too (albeit special ones) with a defined hash-table structure. ZFS send simply picks up the changes to the respective directories as if they were regular files and transfers those.
That does seem like functionality which rsync could be enhanced to use. At least, it could be used to more rapidly find duplicates when both ends are using ZFS. rsync ain't going away anytime soon.
I am interested in ZFS but will probably wait until a Linux distribution makes it trivial to implement. I am past the point where messing around with filesystems seems fun.
Re: (Score:2)
the scopes of what "zfs send" and "rsync" do are so profoundly different, it's almost silly to compare them. they're at completely different layers of storage stack. when i sync my local filesystem with a remote site (every hour), i sync snapshots, clones, (sub)filesystems while things are mounted and heavily in use. there's also compression and deduplication to consider.
the rsync feature you suggested isn't possible without a complete zfs rewrite or another layer of abstraction. too costly in either case.
Re: (Score:3)
The biggest difference is that ZFS has full knowledge of the state of the file system; rsync, on the other side, doesn't. It's stateless: it has to start from zero each time and regather the information on each and every run, on both sides, which is a really slow and potentially error-prone process (i.e. when files change while rsync runs). ZFS knows what's going on in the filesystem, and its snapshots capture the filesystem at a single point in time, so it can be far quicker and won't produce inconsistencies.
Re: (Score:2)
Re: (Score:2)
ZFS replication is for synchronizing file system snapshots. rsync is for syncing some files.
Entirely different purposes even if they seem the same.
ZFS encapsulates the entire storage channel. It is your volume manager all the way to your file system. It knows of every single change that occurs, when and where it occurs, and what it changed. Sending a ZFS snapshot gets not only the snapshot being sent, but every one in between. ZFS does deduplication, compression, checksumming, and the snapshots store every intermediate state.
Re: (Score:2)
Re: (Score:2)
They definitely are. But it doesn't scale well. The time taken to scan the files and their contents on the source and destination systems becomes overwhelming. The largest I've taken it to is a few terabytes, consisting of many thousands of directories each containing thousands of files (scientific imaging data). It ends up taking hours, where with ZFS it would take a few seconds. It also thrashes the discs on both systems as it scans everything, and uses a lot of memory. ZFS does none of these things.
Re: Rsync could have done this too! (Score:2)
Another problem is that rsync has to scan the entire file system, calculate hashes, and transfer them, then do the same on the other side, before it can transfer the difference.
If you have millions of files and directories, that can take a significant amount of time. I used to have rsync take a weekend to do a backup. With ZFS I can do hourly backups.
Re: (Score:2)
Well, sort of....
We switched from rsync to ZFS replication for our production environments, and the difference in performance is rather extreme (which is why we made this change).
Medium-sized file system: 12 TB and a few hundred million files. Doing a backup with rsync took days, and it was all just tied up in IOPS, even if the number of files changed was rather small. At this scale, it takes more than 24 hours just to get a listing of the files.
Switching to ZFS with nightly snapshots and replication dropped backup times dramatically.
Re: (Score:2)
The other advantage is that ZFS replication, unlike rsync, doesn't need to calculate diffs, because ZFS already keeps track of what blocks have changed since the last snapshot. This makes the entire process much faster and less resource intensive.
Imagine the following scenario:
You are the sysadmin at a 24x7 company. You have a few hundred users' home directories (shared over NFS or SMB) on a fileserver that needs to be upgraded/replaced for some reason. You are tasked with migrating these home directories with minimal downtime.
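The usual ZFS answer is a two-phase send - a big initial copy while everyone keeps working, then a short outage for the final incremental (pool and host names here are hypothetical):

# zfs snapshot -r tank/home@migrate1
# zfs send -R tank/home@migrate1 | ssh newserver zfs receive -F tank/home
(users keep working; then, during a brief maintenance window:)
# zfs snapshot -r tank/home@migrate2
# zfs send -R -i @migrate1 tank/home@migrate2 | ssh newserver zfs receive -F tank/home

The final delta is typically tiny, so the cutover window is minutes instead of the hours or days a full copy would take.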
Sheesh, the sheer low quality of TFA (Score:1)
For those who already understand rsync and ZFS, the article adds nothing new that is of value. 1/3 of the article is telling you what rsync is, which you could fill with lorem ipsum without lowering the next-to-none quality of the article. We already fucking know what rsync is. It's been in the man pages for, like, 10+ years. And why do you need a Jedi picture just for that?
Then the useless benchmark, taking another 1/3. No repeatable experiments. No statistics. Only one-shot timings. And the worst thi
Re: (Score:2)
I guess you missed the RESOLVED tag on that.
Re:BTRFS is the future (Score:5, Interesting)
Er, no. Btrfs may one day reach feature parity with ZFS, and it may also achieve the reliability of ZFS, but it has a long, long way to go in both areas to get to those points.
The on-disc structures might have been declared "stable", but what does that mean, really? That you'll be able to mount current filesystems on future kernels, yes. That the frozen design was correct and contains no design flaws? No. Personally, I think they froze it way too early. There are a number of fairly fundamental issues with the Btrfs design which compromise its performance (fsync) and integrity (unbalancing, data loss on recovery), and in some cases place arbitrary limits upon things (e.g. the hardlink issue). Some can be mitigated, while others can not. These and other issues are easily found and researched.
Seriously, I've been using Btrfs since very near the beginning for a variety of tasks. But I've been objective about it, rather than a blinkered fanboi. It's an interesting filesystem with some good ideas. But it has /always/ been a case of "next year it will be stable", and the performance is dire. Progress has been painfully slow, and the bugs I've encountered along the way have been numerous and show-stopping. Maybe it will "get there", but I think your assertion that "once BTFS userland side gets stable" that it will replace ZFS is incredibly naive. It assumes that there are no major issues remaining on the kernel side, and it also assumes that the only thing needing doing on the user side is stability. Based on its history to date, the likelihood of the kernel side being bug-free is close to zero. On the user side the tools are primitive, feature-incomplete and almost completely undocumented, containing little information and no examples. On the ZFS side, the tools are feature complete and are properly documented, with examples, and with whole sets of training material on top of that.
If you needed to make a decision on which to use for a serious deployment, or even just for a smaller scale home NAS, right now if you objectively compare the two, the choice is quite clear, and it's not Btrfs. Based upon the development history of the two, it's unlikely that this will change much in the next few years. Remember also that ZFS development is very active, perhaps even moreso than Btrfs. But who knows, maybe by 2020 Btrfs will surpass it.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
ZFS is an ENTERPRISE file system: it will eat all the RAM you give it and get faster with more RAM, as it can cache more I/O. It is designed to run on a well-spec'ed server with a UPS.
Of course you can run it on anything FreeBSD supports and try your luck, it works well even then for most people.
I wonder why Docker doesn't deploy to OpenIndiana (Score:2)
If btrfs has so many issues, I wonder why Docker doesn't have a deployment on Illumos [openindiana.org] or SmartOS [smartos.org].
I would think that Docker enthusiasm would be dampened by a beta filesystem and (the lack of) verifiable security in package content.
Re: (Score:2)
BTRFS is less mature than ZFS, but it has a lot of useful functionality and is in some ways more elegant. For example, the snapshot of a subvolume is a first-class filesystem in itself, without dependency on its parent. It's also a lot better about handling replacement of physical volumes underneath it if you have mirroring turned on. In particular, you can arbitrarily increase the size of the filesystem by using a larger replacement or just adding on more drives.
On the other hand, I'm not touching the raid5/6 code.
Re:BTRFS is the future (Score:5, Insightful)
Are you for real AC, or just trolling?
Your Synology "reference" is a classic "appeal to authority", only it's a really bad choice of authority due to its complete lack of any technical detail or substance of any kind. That link is to a marketing page for a company which makes money selling hardware. It's just a few bullet points (snapshotting, checksumming in essence), without any discussion of the actual tradeoffs or comparison with other systems. It's worthless. It's only purpose is to tick a feature box to act as an incentive to purchase their systems; as for the actual performance and reliability of those features--that's the customer's problem. Caveat emptor.
I've done more than casual work and development with Btrfs. For example, from back when I was a Debian developer, here's the initial support for Btrfs snapshotting in schroot [github.com]. This lets you create virtual environments from Btrfs snapshots, as well as other types such as LVM and overlays. You can then plug this into other tools such as sbuild, and then build the whole of Debian using snapshotted clean build environments. Doing this, Btrfs fails hard around every 18 hours, going read-only. Why? Creating and deleting 18000 snapshots for 8 parallel builds quickly unbalances the filesystem, requiring a manual rebalance. You don't see that unfortunate detail in the Synology fluff page, do you?
You can also get snapshots and decent recovery (albeit without block-level checksums) from LVM and mdraid. In my experience, its recovery behaviour after real hardware failure is vastly more reliable than Btrfs. Simply put, it has always resynched the data without problem, while Btrfs has caused irrecoverable data loss, despite it theoretically being much better. LVM snapshots have very different tradeoffs as well. And on modern Linux with udev, we had to abandon using them due to races in udev/systemd making them randomly fail.
The point I'm making is that the reality of the chosen tradeoffs between performance, reliability and featureset of the different filesystems is a subtle one. You can't reduce it down to "Btrfs is better" or "ZFS is better". That's marketing. But I have spent over seven years pushing Btrfs to its limits, and have found it sorely lacking. It's unacceptable that it unbalances itself to the point of unusability. It's unacceptable that it has led to irrecoverable data loss on several occasions. It's also unacceptable that in its eight years of existence, none of the developers could be bothered to write any decent documentation. The data loss was down to bugs, some of which are fixed, but it does leave you lacking trust in it in the face of such problems. If you compare this with ZFS, while it's not fair to say it has been totally bug free, it has been almost bug free, and the number of data-loss incidents is small. I've yet to encounter any problems with ZFS myself, but I've encountered many serious issues with Btrfs.
Anyone who uses Btrfs or ZFS on a NAS system does so at their own risk after researching the various options and their tradeoffs. Just because a vendor decides to make and market a system using Btrfs does not make that system the best choice. It just means they thought they could make some profit from it.
Re: (Score:2)
To be fair, the race existed in udev prior to the systemd merge as well. When lvremove randomly stops working, it's a bit surprising, and it took a while to pinpoint udev as the culprit keeping the snapshot devices open and preventing their removal. "Helpful" such behaviour is not. We had to move all the debian buildds from using lvm snapshots to unpacking tar files as a result (btrfs being too fragile as mentioned).
Re: (Score:3)
Re: BTRFS is the future (Score:2)
ZFS disk structures were stable a decade ago, but frankly the userland is still a bit buggy today - and that's with ten times as many people working on it as btrfs, and with people knowing full well where the problems are and what needs to be done to fix them. btrfs hasn't gone through that discovery process yet.
Don't assume undone work is easy. I'll be delighted to be proven wrong in five years (I said the same thing five years ago).
ZFS vs BTRFS (Score:3)
Jim Salter writes some great pieces on file systems for Ars Technica.
At the linked article are Related Links. Of particular note is "Atomic Cows and Bit Rot" -- read that if you're interested in modern file systems.
never RAIDZ yourself, but run run run to get some (Score:2)
Yeah, he writes okay pieces, but it kind of annoys me when he throws up blanket advice and then practically trips over himself extolling the opposite.
ZFS: You should use mirror vdevs, not RAIDZ [jrs-s.net]
Guess what? The entire rsync.net service is built on top of RAID-Z3, if I read their promotional portal correctly.
One use case I can see for this is using ZFS to back up Postgres databases. I'm not the only person to think this might be a good idea. A while back, I listened to this talk, which I really enjoyed:
Keith [youtube.com]
Re: (Score:3)
Whereas /. is filled with people such as yourself...
I've been on /. & ars for close to 2 decades & the level of idiot posts is unfortunately much higher here.
Re: (Score:2)
ZFS is nice .. but it's just not been stable
By your definition of stable, nothing is stable. ZFS is not perfect, but it is closer to perfect than anything else.
Re: (Score:3)
Without some kind of incremental snapshot, with read-only privileges after the snapshot, straight replication is next to useless if someone does "rm -rf /". And it happens *all the time*.
So ... zfs covers that ... since it does exactly what you suggest.
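Concretely, recovering from an errant "rm -rf" is either a browse into the hidden snapshot directory or a rollback (snapshot names here are hypothetical):

# ls /tank/data/.zfs/snapshot/hourly-2015-09-15/
# zfs rollback tank/data@hourly-2015-09-15

The first just reads files out of the read-only snapshot; the second rewinds the whole dataset to it.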
Sure, if you can afford to buy 3 times as much disk
What? If you want mirroring or RAID-like qualities, yes, you need to duplicate data; that's true of any mechanism like this... you do realize that's what things like NetApp do too... right? Just mirroring or RAID?
and roughly 10 times as much network bandwidth as you ever really process with,
... this makes no sense. How does the network come into play here? You're just making random shit up.
ZFS is nice if you can afford one sys-admin/Terabyte of data to try to keep it up to date, but it's just not been stable.
The company I work at rolls over roughly 50TB of data PER DAY, several petabytes' worth ... in ZFS ...
You'll have to pardon me if I don't just take your word for it.
Re: (Score:2)
Fortunately, zfs also supports snapshots, and those can be sent/received as well.
Re: (Score:2)
Without some kind of incremental snapshot, with read-only privileges after the snapshot, straight replication is next to useless if someone does "rm -rf /". And it happens *all the time*.
So, exactly what ZFS provides, then... You take periodic snapshots (hourly, daily, weekly, or whatever), then send the deltas between the snapshots to the destination system. You can easily put that in a cron job and have a regular push to a backup system (hey, exactly like what the tool in TFA is doing...). If someone does wipe out all their files, you have the snapshot(s) containing them on both the source and destination systems, depending upon your schedule for dropping old snapshots. However you decide to set it up, the old data is still there.
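As a concrete sketch using sanoid/syncoid, the tools from TFA (install paths, dataset, and host are hypothetical), the whole backup policy can live in two crontab lines:

0 * * * * /usr/sbin/sanoid --cron
15 * * * * /usr/sbin/syncoid tank/data backup@remote:tank/data

sanoid takes and prunes snapshots according to its policy file; syncoid then ships the newest deltas to the remote pool.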
Re: (Score:2)
1. ZFS snapshotting is incremental, just like NetApp's. In fact, it's so 'just like NetApp' that NetApp sued Sun Microsystems over it.
2. You don't know what the hell you're talking about. See #1.
Re: (Score:2)
Having trouble distinguishing between rsync, the tool, and rsync.net, the online service? Having never used either, the distinction was still perfectly clear to me.
"The cloud" (Score:2, Informative)
Anyone else getting tired of this term? All it means is "someone else's computer". All you're doing is renting server space and replicating your data there. There's nothing special about it.
Re: (Score:1)
Yep. 'The Cloud' is just shifting responsibility to someone else, who may or may not be doing a proper job of security or backups. This seems germane [textfiles.com].
Re: (Score:2)
Re: (Score:2)
Hmmm.... good point... perhaps we need "Smart cloud 2.0"
Re: (Score:2)
Anyone else getting tired of this term? All it means is "someone else's computer".
To be fair, that's kind of what it has meant for years. I have a networking textbook that's 15 years old that represents unspecified parts of a network in a network diagram as a cloud shape. So "piece of a computer network whose details I don't care much about", e.g. the Internet, has been called a "cloud" for a while.
Of course, this is not to be confused with "cloud computing", which has a more precise definition (basically distributed processing, but with on-demand virtual machines instead of physical nodes).
Re: (Score:2)
Never heard of a private cloud then? We run a large virt cluster here & "the cloud" is the most straightforward & friendly way for me to refer to it to the higher ups. "Cloud" is just the same as "cluster", however the former is more widely recognised.
Discount for slashdot folks (Score:1)
We've had a very significant discount for HN readers for years and we'd be happy to extend that to /. readers. Just email and ask.
Really happy to be here - I am not sure why I am labeled as "new submitter" since I have been a slashdot user for ... 15 years ?
Happy to answer any questions about our service here as well.
Re: (Score:2)
Er, OpenZFS...
ZFS originated within Sun, which was bought by Oracle. Oracle then laid off most (all?) of the ZFS developers, who then went to work for other companies. The current ZFS development is no longer inside Oracle, and nor is it owned by them. They own the copyright on the original CDDL releases. Big deal. Not using it because of the historic association with Oracle would be a little... extreme.
Re: (Score:2)
You do realise that Btrfs originated within Oracle, right? ZFS was merely acquired by them.
Re: (Score:2)
Oh, so in your hatred of Oracle, you're recommending a filesystem project that was started by... Oracle.
Only reason Oracle isn't still the major contributor to btrfs is because they bought Sun and got a complete version of what they were trying to create with btrfs.