Backshift is not screaming fast - for raw speed you probably want rsync. But backshift is much less freewheeling about how much storage it uses, since it compresses everything with xz and can deduplicate across machines - e.g., if two machines hold identical files, that data is stored only once.
With backshift, even most metadata is compressed with xz, though each directory's metadata is compressed separately, which keeps small restores fast.
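To make the storage story concrete, here's a minimal sketch of content-addressed, xz-compressed chunk storage. This is not backshift's actual on-disk format, and the chunk-store path and function names are my inventions, but it shows why identical data gets stored only once no matter how many machines it comes from:

    import hashlib
    import lzma
    import pathlib

    STORE = pathlib.Path("chunk-store")  # hypothetical repository directory

    def save_chunk(data: bytes) -> str:
        """Store one chunk, xz-compressed, keyed by its content hash."""
        digest = hashlib.sha256(data).hexdigest()
        path = STORE / digest[:2] / digest
        if not path.exists():  # same content, same digest: a dedup hit, nothing to write
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(lzma.compress(data))  # lzma's default container format is xz
        return digest

    def load_chunk(digest: str) -> bytes:
        """Fetch and decompress a chunk by its content hash."""
        return lzma.decompress((STORE / digest[:2] / digest).read_bytes())

Two hosts backing up the same file produce the same digests, so the second backup writes nothing new.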
To get a feel for how fast (or slow) backshift is: last night it did about 3.9 terabytes of data (incrementals) in a bit over 9 hours, using pypy3 on a pretty fast server.
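Back-of-the-envelope, that's roughly 3.9e12 bytes / (9 * 3600 s), or about 120 megabytes per second sustained.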
Here's a comparison of many backup tools, some of which I wrote, most of which I did not: https://stromberg.dnsalias.org...
And here's how backshift works: https://stromberg.dnsalias.org...
Here's how to get started with backshift: https://stromberg.dnsalias.org...
As a reminder, I wrote all of these. But today, backshift is the one I trust with my personal digital media collection and other filesystems.
I've never done encrypted backups. They're probably important to some people, but they're somewhat at odds with the goal of preserving data, and you always have the option of layering an encrypted filesystem under just about any disk-to-disk backup tool. Compression has the same issue to a lesser extent, but backshift limits how much data it puts in a single file, even for huge input files, which limits how much you can actually lose to a bad block on a magnetic hard disk or SSD. The same chunking also allows things like backing up to s3fs.
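As an illustration of that chunk-size cap, here's a hedged sketch (the 1 MiB cap and the chunk naming are made up for the example, not backshift's real values): a huge input file becomes many bounded files, so a single bad block can take out at most one chunk.

    import lzma
    import pathlib

    MAX_CHUNK = 1 << 20  # 1 MiB cap per stored file; illustrative only

    def backup_file(src: str, dest_dir: str) -> list[pathlib.Path]:
        """Split src into bounded chunks, xz-compress each, return the chunk paths."""
        dest = pathlib.Path(dest_dir)
        dest.mkdir(parents=True, exist_ok=True)
        paths = []
        with open(src, "rb") as infile:
            # read() with a size argument returns at most MAX_CHUNK bytes per call;
            # iter() stops at the b"" sentinel, i.e. end of file
            for index, chunk in enumerate(iter(lambda: infile.read(MAX_CHUNK), b"")):
                path = dest / f"chunk-{index:08d}.xz"
                path.write_bytes(lzma.compress(chunk))
                paths.append(path)
        return paths

Bounded chunk files are also part of what makes targets like s3fs practical, since no single stored object ever gets huge.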
JavaScript+the DOM is ugly. I'm really hoping WASM will replace it soon, and MicroPython's WASM target is a strong candidate, especially once WASM gets better support for dynamic languages.
With open source software, you can pay someone other than the original authors to maintain software you don't want to give up on. The same is frequently not true of closed source software.
And it should be the law: If you use the word `paradigm' without knowing what the dictionary says it means, you go to jail. No exceptions. -- David Jones