Um, sure ok, if you want to think about it that way. I see what you are saying, but the question relevant to the discussion is "How do you share a passphrase when its only representation on the system is as a hashed key?" That is the question that started off the whole discussion. For example, if I steal your desktop's hashed password list, I can't use that to break into your system (not easily) because you need to know the actual passphrase (not just the hash) to authenticate. With WPA-PSK this is not the case, but it's the risk everyone accepts when they use it.
Ok, since you are the second person to say this, I guess I was unclear. The way PSK works in WPA when you use a passphrase is:
hash(passphrase + SSID) = pre-shared key (PSK)
The PSK is not the passphrase; it is a deterministic transformation of the passphrase, much like the way passwords on your local system are stored. Why is this distinction important? Well, for one, the 256-bit PSK is far less susceptible to dictionary-style brute-force attacks than the passphrase it is derived from. Second, if your key is compromised, you can do something as simple as changing the SSID, and that will generate a sufficiently different key without needing to change the passphrase.
So, the answer to the original question in this thread, "How do you securely share the wi-fi password with your contacts?" is "You don't, directly. You share the PSK, because that is all that is stored locally on your client." And the reason that works is because the way your computer authenticates locally with a password database is fundamentally different from the way your client authenticates with your AP.
Did you read my comment? The key is derived from the passphrase, it is not the passphrase itself. Neither the key nor the passphrase is ever transmitted. There is a handshake protocol where both the AP and the client demonstrate they both know the key and then a unique session key is generated from the key to encrypt traffic.
I was curious about this too. But the AC below gave a nice hint, so I went looking for a better explanation. Here is the blurb from the Wiki,
Also referred to as WPA-PSK (Pre-shared key) mode, this is designed for home and small office networks and doesn't require an authentication server. Each wireless network device encrypts the network traffic using a 256 bit key. This key may be entered either as a string of 64 hexadecimal digits, or as a passphrase of 8 to 63 printable ASCII characters. If ASCII characters are used, the 256 bit key is calculated by applying the PBKDF2 key derivation function to the passphrase, using the SSID as the salt and 4096 iterations of HMAC-SHA1. WPA-Personal mode is available with both WPA and WPA2.
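For the curious, that derivation is small enough to sketch in a few lines of Python using the standard library's hashlib (the passphrase and SSID below are made up; this is just an illustration of the quoted parameters, not a security tool):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> str:
    """Derive the 256-bit WPA PSK and return it as 64 hex digits,
    per the quoted parameters: PBKDF2 with HMAC-SHA1, the SSID as
    salt, 4096 iterations, 32-byte output."""
    key = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    return key.hex()

# Same passphrase, different SSID -> completely different key,
# which is the point the parent comment makes about changing the SSID:
print(wpa_psk("correct horse battery staple", "HomeAP"))
print(wpa_psk("correct horse battery staple", "HomeAP2"))
```

Note that the derivation is deliberately slow (4096 HMAC iterations) to make dictionary attacks on the passphrase more expensive.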
So it seems the PSK can be passed around without revealing the passphrase. But if I also remember correctly, the PSK is supposed to rotate (or maybe that's WPA2).
zfsonlinux doesn't have built in CIFS export
It's not built in, but it is integrated. Just use
zfs set sharesmb=on pool/srv
If you are having permissions issues, make sure you have ACL support enabled in your kernel and userland, and then use the aclinherit property on your ZFS dataset. Samba should handle the translation between NT and POSIX ACLs seamlessly, but you may need Samba 4 for the best results.
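Something like this (the pool/dataset name is hypothetical, and acltype is the ZFS-on-Linux property name; adjust for your platform and layout):

```shell
zfs set sharesmb=on pool/srv             # let ZFS register the Samba share
zfs set acltype=posixacl pool/srv        # ZFS on Linux: expose POSIX ACLs
zfs set aclinherit=passthrough pool/srv  # children inherit ACL entries unmodified
```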
I don't disagree with anything you have said. We use ECC RAM in our servers. My contention was only with the statement,
Not just a lot of RAM, it MUST be ECC RAM for ZFS
implying that ZFS is somehow a special case from other filesystems. It's not.
I'll go with one of the co-architects of ZFS, Matthew Ahrens,
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
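On ZFS on Linux that flag is exposed as a module parameter, so setting it looks roughly like this (paths are the ZoL ones; as he says, it's an unsupported debug option, so don't build a production setup around it):

```shell
# Set at runtime via the zfs module parameter:
echo 0x10 > /sys/module/zfs/parameters/zfs_flags

# Or persist it across reboots via modprobe options:
echo "options zfs zfs_flags=0x10" >> /etc/modprobe.d/zfs.conf
```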
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
In other words, there is a non-zero chance that you will get silent data corruption on disk if you don't use ECC RAM. It is the same risk with ZFS as with any other filesystem. And yet, personal computers have been running without ECC RAM for decades and it hasn't been a disaster. So yeah, if you are running in the type of situation where you absolutely must ensure the highest level of data integrity, then you must use ECC RAM. If you are running your own personal home media NAS, it is probably an acceptable risk to buy cheaper hardware. The storage gurus will argue, "Why use ZFS if you don't care about its data integrity features?" My response is that ZFS has a ton of other very useful features that make it a great filesystem.
BTW, bad vs. good RAM is not the same thing as non-ECC vs. ECC RAM. While ECC RAM will protect you from bit flips, a bad stick of RAM is still a bad stick with or without the extra parity bit. Aaron Toponce has a good (non-sensational) discussion on the topic,
Owncloud is a poor substitute for a fileserver because everything has to be owned by the webserver user at the filesystem level. All of the access controls and versioning are handled by the owncloud webapp, so if you need to operate outside of the owncloud environment you are stuck. For example, you can't use owncloud's dropbox-like client for easy syncing while also exporting the filesystem via Samba so people can map network drives.
Also, can you easily find all the snapshots for a single file?
If you export the filesystem via Samba, you can enable the VSS compatibility feature, which allows Windows users to access the "Previous Versions" tab. There is no equivalent for Mac or Linux clients, or for other network filesystems (NFS, etc.) that I know of. It would be a nice feature to have.
Nonsense. ECC RAM may help avoid certain kinds of on-disk errors, but it's a heavily debated topic. Your statement,
even a single flipped bit can cause ZFS to corrupt the entire file system
is completely unsubstantiated BS.
Well, containers (some variants, at least) do offer the ability to constrain resources. You should be able to prevent your Apache container from using up all of the memory on your system, for example. But the real strength of a container is the ability to just pick it up and move it to another machine, even while it is running. So it helps your situation quite a bit. Instead of needing to reprovision everything every time you move to a newer bigger box, you just configure the base system and drop the container into it, done. Ideally, your users don't even notice and their running jobs don't get interrupted. There are still a few rough spots to work out, but it's getting there.
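Taking Docker as one example runtime (the container name and image here are arbitrary), constraining that Apache container is a one-liner:

```shell
# Cap the container at 512 MB of RAM and 1.5 CPUs
docker run -d --name web --memory=512m --cpus=1.5 httpd:2.4
```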
Another use of containers is isolation. For example, a shell for users to log in to vs. the webserver. Everything can run on a single box, but you can have different security policies for each.
Portability of the container, not portability of the program. Most container variants have the ability to migrate to another node or clone additional instances, but it's usually a bit rough and doesn't always go completely smoothly. Docker is really making an effort to polish this so that you can, say, configure an instance of your data analysis container, start it up on a single node, quickly expand it to 20 nodes under load, and then bring it back down to 1 node, or have it failover to different nodes if one crashes, etc.... That's the ideal that VMs have been able to do for some time, but hasn't quite worked out with containers yet.
Jeez, slashdot really is a shell of its former self. None of my containers run "a single application." The benefits of a container over a VM when you are running on the same core OS on the same architecture should be obvious to anyone who manages servers. What Docker containers bring over "ordinary" containers is superior portability. So, yeah, it is good for software deployment, but nobody is going to use it to bundle libreoffice.
But, that ignores the problem that modern containers really exist to solve dependency hell
Uh, no, that is not why containers exist at all. Containers are the Linux equivalent of BSD jails and Solaris zones, which have many use cases. While you CAN use containers to manage dependencies, there are many other (better) ways to do that.
And, there's the annoying tendency of container users (and VM users) to treat everything as root within the context of the container/VM,
I don't know anybody who does this. Who do you work with?