I recently had a "old" (cir 2008) 64gb SSD drive die on me. It's death followed this pattern:
- Inexplicable system slowdowns. In hindsight, this should have been a warning alarm.
- System crash, followed by a failure to boot due to unclean ntfs volume which couldn't be fixed by chkdisk
- Failed to mount r/w under Ubuntu. Debug logs showed that the volume was unclean and all writes failed with a timeout
- Successful r/o mount showed that the filesystem was largely intact
- A dd image of the drive succeeded and allowed a restore to a new drive.
After popping a new disk in and doing a partition resize, my system was back up and running with no data loss. Of all the storage hardware failures I've experienced, this was probably the most pain-free as the failure caused the drive to simply degrade into a read-only device.
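The recovery itself was nothing fancier than a raw block copy that tolerates read errors. Roughly this, sketched in Python (the /dev/sdb path, 1 MiB block size, and error handling are illustrative assumptions; the real work was plain dd with conv=noerror,sync):

    # Block-by-block image of a failing drive, in the spirit of
    # dd if=/dev/sdb of=disk.img conv=noerror,sync
    BLOCK_SIZE = 1024 * 1024  # 1 MiB per read

    def image_drive(source="/dev/sdb", dest="disk.img"):
        with open(source, "rb") as src, open(dest, "wb") as out:
            while True:
                try:
                    block = src.read(BLOCK_SIZE)
                except OSError:
                    # Unreadable region: pad with zeros and skip past it,
                    # mirroring dd's noerror,sync behaviour.
                    pos = src.tell()
                    block = b"\x00" * BLOCK_SIZE
                    src.seek(pos + BLOCK_SIZE)
                if not block:
                    break
                out.write(block)

    if __name__ == "__main__":
        image_drive()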
It's not the schedule. It's the process.
When Chrome updates to a new version, I don't even know about it and everything just works (including all my add-ons). When Firefox updates, I have to wait an additional few seconds while it updates, close out a splash page informing me of all the new features I won't use, and figure out how to update and re-enable all my add-ons, which have now magically been turned off.
When I open a web browser, I want to do something. If you get in the way of my doing that for 30 seconds every few weeks, plus make me spend 5 minutes getting Selenium or other add-ons up and running again, you have failed at your purpose as a web browser.
It's even worse when you have a few dozen Firefox installs across various VMs. I dread FF updates now because they mean either reimaging test machines or going through a bunch of updates.
So who is going to work on developing the Aperture Science tech to improve the efficiency of this method?
Weird. I guess there's a bug in my ROT13 implementation. If I run my text through it twice, I just get the original message back.
Just do what they did with DES... use 3rot13 and you're much more secure than the original implementation.
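In case anyone is tempted to actually "fix" it, a quick Python check of what's going on (the sample message is made up):

    import codecs

    msg = "Attack at dawn"
    once = codecs.encode(msg, "rot13")    # "Nggnpx ng qnja"
    twice = codecs.encode(once, "rot13")  # back where we started

    assert twice == msg                           # ROT13 is its own inverse
    assert codecs.encode(twice, "rot13") == once  # so "3rot13" is just ROT13 again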
Have both parties present documentation of their legal bills. The prevailing party, having also won fees, receives the lesser of the two amounts.
Assume we have Joe vs MegaCorp and Joe's legal bill is $1,500 and MegaCorp's is $400,000.
If Joe wins and is entitled to fees, he gets his entire $1,500 (in addition to any damages). If MegaCorp wins, they get their damages plus the lesser of the two legal bills ($1,500). This promotes efficiency throughout the system.
Corporations will be incentivized to match their legal spending with the size of their "target."
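Stated as code, the award rule is just a min (the function name and numbers below are simply the example above):

    def fee_award(winner_bill, loser_bill):
        # Prevailing party recovers fees, capped at the smaller of the two bills.
        return min(winner_bill, loser_bill)

    joe, megacorp = 1_500, 400_000
    print(fee_award(joe, megacorp))  # Joe wins: his full $1,500
    print(fee_award(megacorp, joe))  # MegaCorp wins: still only $1,500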
Surveys are inherently difficult to present in a neutral fashion, especially when attempting to determine correlation. Take the following (simplified) survey for example:
I like Cheerios:
[Yes] [No] [Sometimes]
Rate your proficiency at math:
[Excellent] [Good] [Average] [Poor]
Now, let's say you found a statistically significant correlation between people who like Cheerios and people who are excellent at math. Congratulations! You did not actually find a correlation related to math proficiency at all.
What you did find is a correlation among people who tend to select the first option in your survey.
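A quick simulation shows how strong that effect can be: if even a fraction of respondents just click the first option on every question, two traits that are generated independently will still look correlated. The 20%/40%/10% rates here are made-up numbers purely for illustration:

    import random

    random.seed(42)
    N = 100_000

    yes_cheerios = yes_math = yes_both = 0
    for _ in range(N):
        lazy = random.random() < 0.20            # always clicks the first option
        likes_cheerios = random.random() < 0.40  # true traits, independent of each other
        excellent_math = random.random() < 0.10
        a = lazy or likes_cheerios   # "Yes" is the first option
        b = lazy or excellent_math   # "Excellent" is the first option
        yes_cheerios += a
        yes_math += b
        yes_both += a and b

    p_a, p_b, p_ab = yes_cheerios / N, yes_math / N, yes_both / N
    print(f"P(Cheerios)={p_a:.3f}  P(Excellent)={p_b:.3f}")
    print(f"P(both)={p_ab:.3f} vs {p_a * p_b:.3f} expected if independent")
    # P(both) comes out well above P(Cheerios) * P(Excellent), even though the
    # underlying traits were independent -- the "correlation" is the
    # first-option clickers, not math or cereal.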
Now, randomizing the answer order is a good start and will resolve the above issue. However, there are hundreds of other things which can affect your results, and there is an entire survey industry formed around these problems. The immediate problems that spring to mind about the survey in TFA are:
-Respondents must have internet access
-Respondents must have signed up for Amazon's Mechanical Turk
-Respondents were paid for the survey
-Respondent proficiency at math/language was self-assessed
-Respondents must be able to comprehend English
Anyway, I could go on, but my point here is this: even though a statistically significant correlation was found, that correlation may not stem from the questions themselves.
...Proceeding to go forth and achieve!