Erm, "appending @@<version> to filenames".
DragonFly BSD's HAMMER filesystem does much of this: you can examine the version history of files and directories using the hammer history and undo commands, and reference old versions directly by appending @@<version> to filenames.
You can control how long history is preserved and at what level of detail, and efficiently replicate it all across the network to remote filesystems (which can have their own, different rules). All this on top of the more traditional named-snapshot approach you're limited to with, e.g., ZFS.
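Roughly what that looks like in practice (the path and transaction ID below are made up for illustration; only runs on DragonFly, obviously):

```shell
# List the change history of a file, with transaction IDs
hammer history /home/me/notes.txt

# Reference an old version directly by appending @@<transaction-id>
diff /home/me/notes.txt@@0x000000010a2b3c4d /home/me/notes.txt

# Or use the undo utility to inspect/recover earlier versions
undo /home/me/notes.txt
```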
Their mobile client is open source: https://github.com/SpiderOak/S...
The desktop client is mostly unobfuscated Python bytecode, easily inspected with a bytecode decompiler, docstrings, symbol names and all. Not good enough, but at least a bit more transparent than most.
Guess that has to be my main server; even though it's a few generations older than my desktop, it has more cores, more IO, more memory and more storage. It runs FreeBSD.
Motherboard: Supermicro X8DTN+
CPUs: 2 x 6-core Xeon L5639 @ 2.13GHz
RAM: 144GB - 9 x 16GB DDR3-1333 ECC Reg
Primary Storage: 2 x SanDisk Extreme Pro 960GB, ZFS mirror.
Mass Storage: 6 x 5TB Toshiba MD04ACA5, ZFS 3 x mirror.
Disk controller: IBM M1015, seems to be one of the most favoured HBAs these days.
Keyboard: NTC KB-6153EA with clicky White Alps.
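For the curious, a "3 x mirror" layout is just three two-way mirror vdevs striped together, RAID10-style; device names below are made up:

```shell
# Six 5TB disks as three striped two-way mirrors
zpool create tank \
    mirror /dev/da0 /dev/da1 \
    mirror /dev/da2 /dev/da3 \
    mirror /dev/da4 /dev/da5
```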
I play with search engines and stuff, the memory comes in handy and I got it for a great price.
Desktop is a 32GB ECC quad core Haswell Xeon mumble mumble running Windows 8.1, with a pair of 30" 1600p monitors and a 20" 1600x1200. Nice having space to put stuff. Also nice having memory that doesn't silently corrupt itself every few months, you crazy kids and your non-parity.
It's not being used as a key. Key stretching would be pointless. You stretch to get a longer key if your goal is to derive a strong key
You want a strong key! Key stretching isn't just about making a physically longer key, it's about making a stronger one, such as by iterating your hash function a million times.
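By way of a toy illustration of the "iterate a million times" idea (a naive sketch, not something to use over a real KDF):

```python
import hashlib

def stretch(password: bytes, salt: bytes, iterations: int = 1_000_000) -> bytes:
    """Naive key stretching: feed the hash back into itself many times.
    The output is no *longer* than one SHA-256 digest; the strength
    comes from making each guess cost a million hashes, not from length."""
    digest = hashlib.sha256(salt + password).digest()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

key = stretch(b"hunter2", b"some-salt", iterations=100_000)
print(len(key))  # 32 bytes regardless of iteration count
```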
KDFs are for key derivation. That's why they're called key derivation functions. How is that hard to understand?
This is not in question. What is in question is why it's not exactly what you'd want out of a password hashing function - what difference does it make whether you're going to pass it to AES or to a comparison function?
A better choice is a properly vetted hash that's designed as a hash, such as SHA256
... which you then need to, at a minimum, apply salting and key stretching to. Good work, you've just rewritten most of PBKDF2, minus the peer review and sane defaults, and, for most people, probably in a language where the function-call overhead exceeds the cost of the hashing.
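The vetted version of that roll-your-own loop is a one-liner from the Python standard library (the iteration count here is illustrative, not a recommendation):

```python
import hashlib
import os

# PBKDF2-HMAC-SHA256: salt + stretching, already peer reviewed for you
salt = os.urandom(16)
dk = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
print(len(dk))  # 32-byte derived key
```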
Using a KDF as a hash is like using a butter knife as a screwdriver - it gets the job done, and professionals normally use the tool designed for the job rather than substituting.
Hashes are not designed for password storage; that's the entire reason we're having this conversation in the first place. People use KDFs for password storage because that's what they're made for. Anyone who uses a plain old hash has to make a KDF out of it. How are they different?
Yes, I used "computationally complex" to mean "takes a lot of steps to complete". You and your "words mean stuff", stop evading the point.
Why is a KDF like PBKDF2, bcrypt or scrypt, a poorer option for password storage than rolling your own? Please use words which mean stuff.
You want the hash algorithm to be SLOW, not "well optimized"
How do you make an algorithm that's slow without being computationally complex? Writing it all in PHP doesn't count.
The algorithm has to be slow because it's a lot of work. Your implementation has to be fast to maximise the security benefit of using it in the first place.
You don't care about turning it into an unpredictable number.
What else do I want a hash function to return?
In fact you sometimes enforce O(1) time: you don't want a longer or different password to take longer to hash, because that facilitates timing attacks.
Pad your inputs and use constant time comparison functions, kids.
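In Python that means hmac.compare_digest rather than ==, which can leak how many leading bytes match through timing:

```python
import hashlib
import hmac

stored = hashlib.sha256(b"correct horse").digest()
candidate = hashlib.sha256(b"correct horse").digest()

# compare_digest takes the same time no matter where the inputs differ,
# unlike ==, which bails out at the first mismatching byte.
if hmac.compare_digest(stored, candidate):
    print("match")
```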
Er, not really? You want a well-optimized function to turn a password into a very big unpredictable number in a way that's computationally complex, and that's precisely what KDFs are made to do. The entire crux of your argument against such use seems to boil down to "but they sometimes let you specify how big a number you want", as if this added complexity and risk somehow massively outweighed that created by rolling your own slow crappy little alternative.
I find it odd that the WD drives, at the 5400rpm speed, were able to write data faster than the 7200rpm Seagate drives.
Maybe the Seagates are more sensitive to vibration, either from making more of it when you shove 45 into a cheap metal box, or by being less tolerant to it because they're pushed harder.
I recall reading that the uncorrectable read error rate tends towards the 2TB mark.
12.5TB, assuming the 1-in-10^14-bit uncorrectable-read-error rate specified for most consumer drives is accurate. I certainly don't see rates anywhere near that high with my consumer drives, but I could just be lucky.
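The arithmetic behind that 12.5TB figure, for anyone checking:

```python
URE_RATE_BITS = 1e14              # one uncorrectable error per 10^14 bits read (spec-sheet figure)
bytes_per_error = URE_RATE_BITS / 8
tb_per_error = bytes_per_error / 1e12
print(tb_per_error)               # 12.5 TB read, on average, per uncorrectable error
```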
I still have two or three recent (i.e. from the last four or five years) devices that have problems seeking in VBR files or displaying the proper duration.
Even foobar2000 has issues with seeking in MP3s. From the FAQ:
Why is seeking so slow while playing MP3 files?
The MP3 format doesn't natively support sample-accurate seeking, and sample accurate seeking is absolutely required by some features of foobar2000 (such as
You need weird-ass buggy fb2k plugins, but are only missing format support in WMP? Do you play a lot of ancient tracker music or something?
If you find the fb2k interface so intimidating perhaps you'd be better off with its much simpler cousin, Boom. Not sure if it's got much support for particularly oddball media formats though.
Multi-process architecture... I've not really noticed a problem with the threaded one, and Firefox already sticks flash objects in a separate process. So what's the real draw?
Isolation. The same reason you want different apps to have their own processes instead of having the whole of userspace in one big blob. You can give processes reduced privileges to reduce the scope of exploits, hangs and crashes don't take down more than they have to, and leaks don't force you to restart the entire system to recover resources.
Plus it makes for simpler concurrency. When you've got a stop-the-world garbage collector, it's kind of handy to be able to split the world into many smaller independent units, each able to run at the same time, each with an order of magnitude less work to do and no synchronisation to worry about.
64bit... again, bragging points about how many bits you use, no functional difference to anyone
ASLR is a fuckload more effective when it has a reasonably sized address space to work with, and 2^32 is miles away from being reasonable. It's the difference between an attacker having to guess one of 8 locations and one of 8 billion. Plus, memory mapping things is awesome, and also a fuckload easier with a reasonably sized address space.
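Back-of-the-envelope version of that "8 vs 8 billion" claim (the entropy figures are rough ballpark numbers for illustration, not any particular OS's actual ASLR parameters):

```python
# Suppose ASLR can randomise ~3 bits of placement in a cramped 32-bit
# layout vs ~33 bits in a 64-bit one. The attacker's guessing space
# differs by a factor of 2^30.
guesses_32 = 2 ** 3    # 8 possible placements
guesses_64 = 2 ** 33   # ~8.6 billion
print(guesses_32, guesses_64, guesses_64 // guesses_32)
```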
And hey, some of us actually use our browsers quite a lot. Mine's eating 5.5G right now. So many windows and tabs, and absolutely no fucking reason whatsoever why that should be considered even slightly unreasonable.