Are you familiar with ROS, http://www.ros.org/ ? It's basically a set of libraries for various robotics tasks and sounds like what you're describing.
Well, v4 doesn't require remembering IPv4 addresses either, but it still comes in handy in a pinch to remember the addresses of the gateway, the DNS server, the WLAN access point, etc. For some reason people really like to point out Google's DNS server by IP, not by name.
Though remembering the local gw and dns mostly comes down to remembering the network prefix, which won't be that long, assuming one hasn't used dynamic allocation for services in the network.
Not sure if trolling.. but I looked it up, and on paper it seems interesting, but for use today it has limitations: 2 TB maximum device size, 8 TB maximum volume size. So that's a non-contender. Seems quite advanced for its day, though (introduced in 1998).
Maybe the malware was deeply embedded in all the install images, and now it's guaranteed that all the systems have it, even after re-deploying.
Actually Google already gave the Wave to the Apache foundation, so I guess they're set from that point of view.
That aside, I don't think a company should be forced to provide any level of support for a ten-year-old product. They could even be up-front about it ("this product will not be supported for longer than five years") and people still wouldn't care. Well, until the day came.
I think it would be more effective to time the attacks for when the sums involved are significant. For now they mostly serve as a cautionary example that new exchanges will learn from.
And it's probably better they learn their lessons now rather than later.
A microphone array and some DSP should have no trouble distinguishing between sound sources at distinct points in space, in addition to removing background noise.
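As a rough illustration of the idea, here is a toy delay-and-sum beamformer (pure Python, no real DSP library; the two-microphone setup, frequencies, and delays are all made-up assumptions, chosen so the off-axis interferer cancels exactly):

```python
import math

def delay_and_sum(mics, steer):
    # Shift each channel by its steering delay (in samples) and average;
    # a source whose true inter-mic delays match `steer` adds coherently,
    # while sources arriving from other directions partially cancel.
    n = min(len(m) - d for m, d in zip(mics, steer))
    return [sum(m[i + d] for m, d in zip(mics, steer)) / len(mics)
            for i in range(n)]

def tone(freq, n, delay=0):
    # Sine tone; freq is in cycles per sample.
    return [math.sin(2 * math.pi * freq * (i - delay)) for i in range(n)]

N = 600
target = tone(1 / 20, N)            # arrives broadside: zero delay at both mics
interf0 = tone(1 / 6, N)            # off-axis interferer as seen by mic 0
interf1 = tone(1 / 6, N, delay=3)   # same interferer at mic 1, 3 samples later
                                    # (3 samples = half a period at this freq)

mic0 = [t + i for t, i in zip(target, interf0)]
mic1 = [t + i for t, i in zip(target, interf1)]

out = delay_and_sum([mic0, mic1], [0, 0])  # steer toward broadside
```

Steered broadside, the target passes through unchanged while the half-period-delayed interferer cancels; a real array would use many microphones, fractional delays, and adaptive filtering on top of this.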
I wondered that as well, but the video makes it clear: this way the shelves can queue up for the worker.
If you can reduce the number of candidates you need to evaluate and interview, you save real money. It's more effective to have them do the filtering in a distributed manner.
Of course, you might miss the perfect candidate that way as well. But you cannot really put a price on that.
There is a way to do dedup without having the key.
For example: the client takes a SHA-256 of the data, then encrypts the data with the lowest 128 bits of that hash, and uploads the encrypted data to the server saying "this data has id ". The lowest 128 bits are then encrypted with the client's own key and uploaded as well. This way another party in possession of the same data gets the server to merge their copy (but the key stays hidden from the server). The server makes a SHA-1 sum of the encrypted data so that clients can quickly check that the data they are deduping is properly stored, to thwart data-integrity attacks.
Not sure if this is exactly the way Mega does it, but it was something pretty close.
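A toy sketch of that scheme (purely illustrative, not Mega's actual protocol: the XOR keystream stands in for a real cipher such as AES-128, and I use SHA-256 throughout, where the description above mixes SHA-256 and SHA-1):

```python
import hashlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric stream cipher: keystream = SHA-256(key || counter).
    # Stand-in for a real cipher; applying it twice decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def client_encrypt(plaintext: bytes):
    # Convergent encryption: the key is derived from the content itself,
    # so identical plaintexts from different clients yield identical
    # ciphertexts, and the server can deduplicate without seeing the key.
    content_key = hashlib.sha256(plaintext).digest()[:16]   # lowest 128 bits
    ciphertext = xor_stream(plaintext, content_key)
    dedup_id = hashlib.sha256(ciphertext).hexdigest()       # server-checkable id
    return content_key, ciphertext, dedup_id
```

The `content_key` would then be encrypted under the client's own key before upload, and the server can recompute `dedup_id` from the stored ciphertext so clients can verify that their deduped data is really there.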
Just like your OS vendor can slip in an update that sends all your keys to them. (As Shuttleworth said, they have root.) You basically need to trust someone, as no one person is able to audit everything people typically use daily.
irc: FreeNode #btrfs
But be patient; people probably won't respond immediately. Then there is also the mailing list.
I lose btrfs's ability to repair bad blocks. I'm pretty sure md doesn't notice an error unless it comes from the backing device in the form of an IO error. So if btrfs gets a bad block from md, there's not much to be done, except to guess which of the mirrors has the good copy and manually resync (if btrfs manages multiple devices itself, it can just do that automatically). I don't know how that would even be done. But at least I would know which file has the error, and I could restore it from backups. Metadata is duplicated, so that shouldn't be an issue.
I also lose the ability to change the data layout with ease; in btrfs, going from raid10 to raid1 to raid0, or adding new devices, is just a matter of running btrfs balance (and btrfs device add). Md doesn't even support adjusting the number of devices in a raid10 array, nor conversions from raid1 to raid10 (that can be done manually, but you don't have redundancy during the process).
In future I may lose the forthcoming ability to set redundancy level per subvolume or possibly even per directory/file.
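For reference, a profile conversion of the kind described above looks roughly like this (the mount point and device are placeholders; run as root, and check the btrfs-balance man page for your version, since these flags assume reasonably recent btrfs-progs):

```shell
# Add a new disk to the filesystem mounted at /mnt/data (placeholder paths).
btrfs device add /dev/sde /mnt/data

# Convert data and metadata profiles in place, e.g. to raid1.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data

# Check the resulting allocation profiles.
btrfs filesystem df /mnt/data
```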
That is quite correct, but hardly a week goes by without someone joining #btrfs and asking how to recover. And personally, I restored my / just a week ago.. Technically I shouldn't have had to, because I was using btrfs raid10 and only one of the drives had had issues (it was the old drive I was putting back; btrfs scrub and btrfs balance somehow made all the drives have errors). Btw, the system did not survive the death of the drive either; I had to reboot without it, whereas Linux md survives these things very nicely.
But I'm hopeful about the future. Btrfs IS really a nice thing to have, at least if you have an SSD and fragmentation isn't an issue for you (say, for virtual machine images). I'm planning to put it on some spinning media as well; I shall see how that works out..
Btrfs has tools for doing this. It also comes with find-new, which lets you find exactly which files have changed between snapshots, and it does so basically instantaneously.
Though btrfs might not be the solution for ensuring data integrity at this point.. But setting up hourly snapshots of your drives can be quite nice when you accidentally destroy something you've created since the last backup.
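A minimal sketch of that setup, assuming /mnt/data is a btrfs subvolume (the paths and snapshot naming are my own placeholders):

```shell
# Hourly read-only snapshot (e.g. from cron).
btrfs subvolume snapshot -r /mnt/data "/mnt/data/.snapshots/$(date +%Y-%m-%d-%H)"

# Later: list files changed since a given snapshot. Running find-new on the
# snapshot with a huge generation number prints only its "transid marker"
# line, which we then feed back in as the starting generation.
gen=$(btrfs subvolume find-new /mnt/data/.snapshots/2024-01-01-00 99999999 \
      | awk '{print $NF}')
btrfs subvolume find-new /mnt/data "$gen"
```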