In the best-case scenario, some idiot terrorist without a gun gets shot a couple of times center of mass, with no over-penetration, before setting off a bomb. Virtually every other scenario can be handled without firearms; terrorists on a plane are literally at arm's length from a horde of people who hate terrorists. Terrorists taking overt action need room to maneuver, the ability to barricade themselves, or else the ability to instill overwhelming fear and inaction in everyone around them. Taking over a plane with the threat of personal violence is pretty much the hardest thing a terrorist could accomplish at this point.
1.) all the photos I took of her seemed incredibly important at the time but are never looked at any more
Yeah, photos have a weird W-shaped utility: they get shared and looked at a lot when brand new; after 6 months to a year they sit in boxes/drives for years; after about 20 years the utility climbs again, until ~150 years later no living relatives remember the people in the photos; then after a few more decades they gain historical value. Hence the need to plan for long-term storage.
Take UDF. Expand it to the PB realm instead of the existing 2TB. Add some ZFS features: ditto blocks, 64-128 bit CRCs, cryptographically signed writes with public keys, standard encryption, standard compression, the ability to duplicate the filesystem as an image (so rsync-style utilities can preserve hierarchy), snapshot directories a la OneFS/WAFL…
ZFS is probably your best bet for now. Oracle built filesystem-level encryption into the Solaris offering; no such luck for the free versions. There's no cryptographic signing of writes, but that is imho overkill: you already have to trust the whole kernel and filesystem layer, so whole-disk encryption plus SHA256 checksums gives basically the same assurance that no data has been modified. You can also place holds on ZFS snapshots to prevent them from being accidentally destroyed and treat them as basically WORM.
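For what it's worth, a minimal sketch of the hold trick in Python (the pool and snapshot names are made up); `zfs hold` makes `zfs destroy` fail on a snapshot until a matching `zfs release`:

    import subprocess

    # Snapshot a dataset and place a hold on it so `zfs destroy` fails
    # until the hold is released -- a poor man's WORM archive.
    def snapshot_and_hold(dataset, snap_name, tag="worm"):
        snap = f"{dataset}@{snap_name}"
        subprocess.run(["zfs", "snapshot", snap], check=True)
        subprocess.run(["zfs", "hold", tag, snap], check=True)
        # later: subprocess.run(["zfs", "release", tag, snap]) to allow deletion

    snapshot_and_hold("tank/archive", "2015-06-01")  # hypothetical pool/date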
So from your smallest box, 3x 3TB = 9 TB of data, and Glacier and Google Nearline (maybe others too?) charge $0.01/GB-month, so about $90/month if you back up the whole thing. I don't know how much you pay for electricity in both locations, but if a box can run/idle at 100W and you leave it on all the time you burn ~900 kWh a year. At $0.20/kWh that's about $180/year per server. Replacing disks every 3 years (if you get HGST's warranty) is $140/year (using a rough $0.035/GB cost today), or $27/month per server for ongoing costs, not including replacing the other hardware periodically. $54/month vs. $90/month? Sure, it's a little cheaper. If you wanted one box plus one online service, running your own two boxes looks better still: ~$120/month vs. $54/month. What about connectivity at both sites? If you are already paying an ISP for other reasons at both ends that's one thing; otherwise throw another ~$50/month on top of at least the backup server's cost. AWS and Google appear to currently charge $0/GB for incoming transfers. Of course, if you can get deals on cheap drives and run them past the warranty in a state with cheap electricity (or in a dorm room with free Internet/electricity), it's a lot cheaper.
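To make that arithmetic reproducible, here's the same back-of-envelope model as a few lines of Python; note the 4x 3TB raw-disk-per-box figure is my assumption to make the $140/year disk line work out:

    # All prices are the rough assumptions from the text, not live quotes.
    TB = 1000                       # GB per TB, decimal, as drive vendors count

    data_gb         = 9 * TB        # 3x 3TB usable
    cloud_per_gb_mo = 0.01          # Glacier / Google Nearline
    idle_watts      = 100
    kwh_price       = 0.20
    disk_per_gb     = 0.035
    disk_life_yr    = 3             # replace at end of warranty
    raw_disk_gb     = 12 * TB       # assumption: 4x 3TB raw per box

    cloud_monthly  = data_gb * cloud_per_gb_mo                    # ~$90
    power_yearly   = idle_watts / 1000 * 24 * 365 * kwh_price     # ~$175
    disks_yearly   = raw_disk_gb * disk_per_gb / disk_life_yr     # ~$140
    server_monthly = (power_yearly + disks_yearly) / 12           # ~$27

    print(f"cloud:       ${cloud_monthly:.0f}/month")
    print(f"per server:  ${server_monthly:.0f}/month")
    print(f"two servers: ${2 * server_monthly:.0f}/month")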
As for security, encrypt before copying anywhere. You might as well run local disk encryption too, so you never have to worry about returning a disk with plaintext on it for warranty repair. I don't trust any company to keep the data I upload secret (FISA courts, NSA, bla bla bla), so encrypting incremental ZFS snapshots and uploading them is an efficient way of maintaining an offsite backup. I only have 1TB I care to back up this way, so the monthly sticker shock is smaller, but I still find it amusing that the first box I built was 4x 320GB RAID5 and storing that much online now costs $9/month.
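For concreteness, here's roughly what that pipeline looks like scripted in Python; the dataset/snapshot names and output path are made up, and gpg will prompt for a passphrase unless you feed it one:

    import subprocess

    # zfs send -i <old snap> <new snap> | gpg --symmetric > outfile,
    # so the stream is ciphertext before it ever leaves the box.
    def encrypted_incremental(dataset, prev_snap, new_snap, outfile):
        send = subprocess.Popen(
            ["zfs", "send", "-i", f"{dataset}@{prev_snap}",
             f"{dataset}@{new_snap}"],
            stdout=subprocess.PIPE)
        with open(outfile, "wb") as out:
            gpg = subprocess.Popen(
                ["gpg", "--symmetric", "--cipher-algo", "AES256"],
                stdin=send.stdout, stdout=out)
            send.stdout.close()     # let zfs send get SIGPIPE if gpg dies
            gpg.wait()
            send.wait()

    encrypted_incremental("tank/docs", "2015-05-01", "2015-06-01",
                          "/backup/docs-2015-06-01.zfs.gpg")  # made-up names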
That said, you could probably use a synchronized random number generator as the shared pad data. The other side would only be able to decrypt messages for as long as it buffers the random number data; once that buffer is discarded, the message is lost to everyone for eternity. This could work for a TLS session where messages are exchanged with only a couple of minutes (or preferably seconds) of delay, so the buffer would not need to be very big.
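A minimal sketch of what that "synchronized RNG as pad" would look like, using SHA-256 in counter mode as a stand-in generator (the seed and message are toy values):

    import hashlib
    from itertools import count

    # Both ends run SHA-256(seed || counter) to generate the same byte
    # stream from a shared seed -- which is really just a stream cipher,
    # not a true one-time pad.
    def keystream(seed: bytes):
        for i in count():
            yield from hashlib.sha256(seed + i.to_bytes(8, "big")).digest()

    def xor_stream(data: bytes, stream) -> bytes:
        return bytes(b ^ next(stream) for b in data)

    seed = b"shared secret"                        # toy value
    ct = xor_stream(b"attack at dawn", keystream(seed))
    pt = xor_stream(ct, keystream(seed))           # receiver regenerates stream
    assert pt == b"attack at dawn"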
That's roughly the definition of a stream cipher (e.g. RC4, or a block cipher in counter mode). Only a cryptographically secure random number generator works, which is why such a thing is called a stream cipher and not just a "pseudo-random one-time pad". In any case it's not a true one-time pad, because the entropy of the pseudorandom stream is limited to the entropy of the cipher's internal state, and further limited by the entropy of the key. That means stream ciphers can in principle be broken given only the ciphertext, which a one-time pad never can. Stream ciphers also share the one-time pad's classic weakness: reusing a stream cipher key is just as bad as reusing a pad (virtually automatic recovery of all plaintexts encrypted with the same pad/keystream).
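Here's a short demo of that "virtually automatic recovery": XOR two ciphertexts that share a keystream and the key cancels out entirely, leaving plaintext XOR plaintext:

    import os

    pad = os.urandom(16)
    p1, p2 = b"meet me at noon ", b"the cake is lie "   # same length
    c1 = bytes(a ^ b for a, b in zip(p1, pad))
    c2 = bytes(a ^ b for a, b in zip(p2, pad))          # pad reused: fatal

    # An attacker who only sees the two ciphertexts:
    x = bytes(a ^ b for a, b in zip(c1, c2))
    assert x == bytes(a ^ b for a, b in zip(p1, p2))    # key cancelled out

From there it's classic crib-dragging: guess a word in one plaintext, XOR it in, and readable fragments of the other plaintext fall out.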
For high throughput/IOPS requirements, build a Lustre/Ceph/etc. cluster and mount the cluster filesystem directly on as many clients as possible. You'll have to set up gateway machines for CIFS/NFS clients that can't talk to the cluster directly, so figure out how much throughput those clients need, build appropriately sized gateway boxes, and hook them to the cluster. Sizing for performance depends on the type of workload, so start collecting disk activity profiles and stats from any existing storage NOW to learn what typical workloads look like. Data analysis before purchasing is your best friend.
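The cheapest way to start that profiling is a dumb logger you leave running for a few weeks. A rough sketch assuming Linux sysstat's iostat (the log path is made up; parse the columns offline):

    import subprocess, time

    LOG = "/var/log/disk-profile.log"   # hypothetical path

    while True:
        # Two reports: the second is a 60-second interval average.
        out = subprocess.run(["iostat", "-dxk", "60", "2"],
                             capture_output=True, text=True).stdout
        with open(LOG, "a") as f:
            f.write(time.strftime("%Y-%m-%dT%H:%M:%S\n"))
            f.write(out)
        # iostat already blocked for 60s while sampling; loop immediately.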
If the IOPS and throughput requirements are especially low (guaranteed under ~50 random IOPS per spindle [leaving headroom for RAID, background processes, and degraded-or-rebuilding arrays] and no more than a couple of 10 Gbps Ethernet ports can handle, over the entire lifetime of the system), then you can probably get away with some SAS cards attached to SAS hotplug drive shelves and one big FreeBSD ZFS box. Use striped two-way mirror vdevs (RAID10-alike) for the higher-IOPS processing group and RAIDZ2 or RAIDZ3 with ~15-disk vdevs for the archiving group to save on disk costs; a sketch of both layouts follows.
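In zpool terms, the two layouts come out roughly like this (a sketch; the daN device names are FreeBSD-style placeholders):

    # Emit the `zpool create` lines for the two layouts above.
    def mirror_pool(name, disks):
        cmd = ["zpool", "create", name]             # RAID10-alike:
        for a, b in zip(disks[0::2], disks[1::2]):  # stripe of 2-way mirrors
            cmd += ["mirror", a, b]
        return " ".join(cmd)

    def raidz2_pool(name, disks, width=15):
        cmd = ["zpool", "create", name]             # wide raidz2 vdevs
        for i in range(0, len(disks) - len(disks) % width, width):
            cmd += ["raidz2"] + disks[i:i + width]
        return " ".join(cmd)

    print(mirror_pool("fastpool", [f"da{i}" for i in range(8)]))
    print(raidz2_pool("archive", [f"da{i}" for i in range(8, 38)]))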
Plan for 100% more growth in the first year than anyone says they need (shiny new storage always attracts new usage). Buy server hardware capable of 3 to 5 years of growth; be sure your SAS cards and arrays will scale that high if you go with one big storage box.
and your HR department is paying "competitive wages" at the 50th percentile?
Let me know how that works out for you.
I dunno, I'm happy enough with my voluntary free association with the United States. I'm free to leave if I stop liking it, as are you.
What anti-state people don't seem to grasp is that the very people you hate in the government, the people who want to control your life and take things from you, weren't made that way by big government. Just look at Mexico: big drug cartels (which may or may not be entirely the creation of anti-drug big government) are more powerful than the government. Wherever there is an advantage to be had by banding together and robbing the weaker or more honest people, you'll find that niche being filled. The job of government is to fill that niche with the least harmful and most inept robbers. That overpaid, uncooperative, unfriendly civil servant that you despise? Give them a gun and a posse and see how well that turns out for you.
Yeah, assuming you're not doing anything at all with the array while it's rebuilding, and none of the sectors have been remapped causing seeks in the middle of those long reads/writes.
To throw out one more piece of advice: RAID6 is useless without periodic media scans. You don't want to discover that one of your drives has bit errors while the array is rebuilding another failed drive; RAID6 can't correct a known-position error and an unknown-position error at the same time. raidz2's checksums should detect the bit flip and reconstruct the stripe from the remaining known-good devices, but at these sizes you should probably start worrying about two bit flips landing in the same stripe.
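To put a number on it, a toy model assuming the usual spec-sheet unrecoverable-read-error rate of 1e-14 per bit and a 9 TB read-back during rebuild:

    import math

    ber      = 1e-14              # unrecoverable read errors per bit (spec sheet)
    array_tb = 9                  # data read back during a rebuild
    bits     = array_tb * 1e12 * 8

    p_ure = 1 - math.exp(bits * math.log1p(-ber))
    print(f"P(>=1 unreadable sector during rebuild) ~ {p_ure:.2f}")  # ~0.51

Roughly a coin flip, and that's before counting the latent errors a periodic scrub would have caught early.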
"There... I've run rings 'round you logically" -- Monty Python's Flying Circus