
Comment Re:SubjectsInCommentsAreStupid (Score 1) 254

No, but it is in the areas that matter to most people. The thing is, people (at least the people I know) don't spend a lot of time working on older documents. They are working on new documents. And when they upgrade their Office version, usually everybody is upgrading their Office version, so Office 2003 vs. Office 2013 incompatibilities don't matter to very many people. But there is no LibreOffice version that will import a complex Word document of any version without some major flaws (minor flaws are usually ok). Floating figures that move around or disappear, captions that disappear, tables that get mangled, line art that doesn't render or renders incorrectly.... If you are writing a simple office memo, LibreOffice works perfectly fine, but many people use it to do complex formatting, and for those people the incompatibilities are a big problem.

Comment Re:SubjectsInCommentsAreStupid (Score 1) 254

There is perfect and then there is perfect. It is true, Microsoft Office compatibility is the last major remaining issue that most of the people I talk to care about. They will even use LibreOffice on Windows, they love the idea of LibreOffice, but Microsoft Office file formats are the currency of document exchange among many academics. It is usually not things like font substitution that matter to them, though. It is tables, charts, floating figures, line art, 3D arrows, OLE objects, etc. If the margins are a bit off when you send a document to somebody, no big deal, but if the table with the latest data in a progress report to the NIH doesn't show up or gets mangled, that screws the pooch. LibreOffice is really close, but that last 1-2% is the critical barrier for many people who would otherwise switch. OOXML has made this process a lot easier, but it is still a beast of a file format.

Comment Re:if that's true, (Score 1) 487

Um, sure ok, if you want to think about it that way. I see what you are saying, but the question relevant to the discussion is "How do you share a passphrase when its only representation on the system is as a hashed key?" That is the question that started off the whole discussion. For example, if I steal your desktop's hashed password list, I can't use that to break into your system (not easily) because you need to know the actual passphrase (not just the hash) to authenticate. With WPA-PSK this is not the case, but it's the risk everyone accepts when they use it.

Comment Re:if that's true, (Score 1) 487

Ok, since you are the second person to say this, I guess I was unclear. The way PSK works in WPA when you use a passphrase is:

pre-shared key (PSK) = hash_function(passphrase + SSID)

The PSK is not the passphrase; it is a deterministic transformation of the passphrase, much like the way passwords on your local system are stored. Why is this distinction important? Well, for one the PSK is much less susceptible to dictionary-style brute-force attacks than the passphrase it is derived from. Second, if your key becomes compromised, you can do something as simple as changing the SSID, and that will generate a sufficiently different key without needing to change the passphrase.
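
If you want to see this derivation for yourself, the wpa_passphrase utility that ships with wpa_supplicant will compute it. A quick sketch (the SSID and passphrase here are made up, and the hex output is just a placeholder for whatever your inputs actually produce):

wpa_passphrase 'HomeNetwork' 'correct horse battery staple'
# prints something like:
# network={
#         ssid="HomeNetwork"
#         #psk="correct horse battery staple"
#         psk=<64 hex digits: the derived 256-bit PSK>
# }

That psk= line is exactly what a Linux client stores in wpa_supplicant.conf, so it is the thing you could hand to someone without ever revealing the passphrase itself.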

So, the answer to the original question in this thread, "How do you securely share the wi-fi password with your contacts?" is "You don't, directly. You share the PSK, because that is all that is stored locally on your client." And the reason that works is because the way your computer authenticates locally with a password database is fundamentally different from the way your client authenticates with your AP.

Comment Re:if that's true, (Score 2) 487

Did you read my comment? The key is derived from the passphrase; it is not the passphrase itself. Neither the key nor the passphrase is ever transmitted. There is a handshake protocol (the four-way handshake) in which the AP and the client each demonstrate knowledge of the key, and then a unique session key is derived from it to encrypt the traffic.

Comment Re:if that's true, (Score 4, Informative) 487

I was curious about this too. But the AC below gave a nice hint, so I went looking for a better explanation. Here is the blurb from the Wikipedia article:

Also referred to as WPA-PSK (Pre-shared key) mode, this is designed for home and small office networks and doesn't require an authentication server.[9] Each wireless network device encrypts the network traffic using a 256 bit key. This key may be entered either as a string of 64 hexadecimal digits, or as a passphrase of 8 to 63 printable ASCII characters.[10] If ASCII characters are used, the 256 bit key is calculated by applying the PBKDF2 key derivation function to the passphrase, using the SSID as the salt and 4096 iterations of HMAC-SHA1.[11] WPA-Personal mode is available with both WPA and WPA2.

So it seems the PSK can be passed around without revealing the passphrase. And if I remember correctly, the keys actually used on the wire are rotated per session anyway (or maybe that's a WPA2 thing).

Comment Re:FreeNAS (Score 1) 212

Same here.

zfsonlinux doesn't have built in CIFS export

It's not built in, but it is integrated. Just use
zfs set sharesmb=on pool/srv

If you are having permissions issues, make sure you have ACL support enabled in your kernel and userland, and then use the aclinherit property on your ZFS pool. Samba should handle the translation between NT and POSIX ACLs seamlessly, but you may need Samba 4 for the best results.
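
For reference, here is roughly the property setup I am describing on zfsonlinux. The dataset name is just the example from above, and passthrough is one common aclinherit choice, not the only one:

# enable POSIX ACL support on the dataset
zfs set acltype=posixacl pool/srv
# have new files and directories inherit ACLs from the parent
zfs set aclinherit=passthrough pool/srv
# store xattrs (where the ACLs live) as system attributes for speed
zfs set xattr=sa pool/srv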

Comment Re:FreeNAS (Score 1) 212

I'll go with one of the co-architects of ZFS, Matthew Ahrens,
http://arstechnica.com/civis/v...

There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.

In other words, there is a non-zero chance that you will get silent data corruption on disk if you don't use ECC RAM. It is the same risk with ZFS as with any other filesystem. And yet, personal computers have been running without ECC RAM for decades and it hasn't been a disaster. So yeah, if you are running in the type of situation where you absolutely must ensure the highest level of data integrity, then you must use ECC RAM. If you are running your own personal home media NAS, buying cheaper hardware is probably an acceptable risk. The storage gurus will argue, "Why use ZFS if you don't care about its data integrity features?" My response is that ZFS has a ton of other very useful features that make it a great filesystem.
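
If you do want to experiment with the ZFS_DEBUG_MODIFY flag Ahrens mentions, on zfsonlinux it is exposed as the zfs_flags module parameter. Something like the following should do it (remember, the flag is unsupported, and the modprobe.d filename is just my habit):

# enable in-memory checksum verification for the currently loaded module
echo 0x10 > /sys/module/zfs/parameters/zfs_flags
# make it persist across reboots
echo "options zfs zfs_flags=0x10" >> /etc/modprobe.d/zfs.conf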

BTW, bad vs. good RAM is not the same thing as non-ECC vs. ECC RAM. While ECC RAM will protect you from bit flips, a bad stick of RAM is still a bad stick with or without the extra parity bit. Aaron Toponce has a good (non-sensational) discussion on the topic,
https://pthree.org/2013/12/10/...

Comment Re:OwnCloud (Score 1) 212

OwnCloud is a poor substitute for a fileserver because everything has to be owned by the webserver user at the filesystem level. All of the access controls and versioning are handled by the OwnCloud webapp, so if you need to operate outside of the OwnCloud environment you are screwed: for example, using OwnCloud's Dropbox-like client for easy syncing while at the same time exporting the filesystem via Samba so people can map network drives.

Comment Re:ZFS (Score 2) 212

Also, can you easily find all the snapshots for a single file?

If you export the filesystem via Samba, you can enable the VSS compatibility feature, which gives Windows users the "Previous Versions" tab. There is no equivalent for Mac or Linux clients, or for other network filesystems (NFS, etc.), that I know of. It would be a nice feature to have.
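
For what it's worth, the way I have seen this wired up is with Samba's shadow_copy2 VFS module pointed at the ZFS .zfs/snapshot directory. A rough smb.conf sketch; the share name and path are just examples, and shadow:format has to match however your snapshots are actually named:

[srv]
    path = /pool/srv
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    shadow:format = zfs-auto-snap_hourly-%Y-%m-%d-%H%M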

Comment Re:For people who don't speak buzzwords (Score 2) 54

Well, containers (some variants, at least) do offer the ability to constrain resources. You should be able to prevent your Apache container from using up all of the memory on your system, for example. But the real strength of a container is the ability to just pick it up and move it to another machine, even while it is running. So it helps your situation quite a bit. Instead of needing to reprovision everything every time you move to a newer bigger box, you just configure the base system and drop the container into it, done. Ideally, your users don't even notice and their running jobs don't get interrupted. There are still a few rough spots to work out, but it's getting there.
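
To make the resource-constraint point concrete, with Docker (one of those variants) it looks roughly like this; the image name and the limits are just placeholder values:

# cap the Apache container at 512 MB of RAM and give it a reduced CPU share
docker run -d --name web -m 512m --cpu-shares 512 -p 80:80 httpd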

Another use of containers is isolation. For example, a shell for users to log in to vs. the webserver. Everything can run on a single box, but you can have different security policies for each.

Comment Re:For people who don't speak buzzwords (Score 2) 54

Portability of the container, not portability of the program. Most container variants have the ability to migrate to another node or clone additional instances, but it's usually a bit rough and doesn't always go smoothly. Docker is really making an effort to polish this so that you can, say, configure an instance of your data analysis container, start it up on a single node, quickly expand it to 20 nodes under load, and then bring it back down to 1 node, or have it fail over to different nodes if one crashes, etc. That's something VMs have been able to do for some time, but it hasn't quite been worked out for containers yet.
