It's the same argument (how do you, a stupid layman, interpret the results?) that got 23andMe knocked off the air.
Thing is, you can already order your own blood tests in most states. It just happens to be expensive and not well known to most people. So Theranos didn't really change anything except improving the price point and increasing availability. For those of us with the God-given common sense to (a) know how to use Google, and (b) not panic when some number is 5% high or 5% low--and note that most blood test reports nowadays include a computer-generated summary of any patterns found, along with bars indicating the usual high/low ranges, so there really is little guessing--lowering the price and increasing availability is a benefit, not a problem.
It's a shame Theranos is having so many problems, because to me it was never about blood testing using small volumes of blood, but about low cost DIY blood testing available at places like Walgreens. The ability to walk in and get a cholesterol test for $3, and a comprehensive metabolic panel for $7 instead of going through a doctor (and paying several hundred dollars for the privilege of having that doctor cluck-cluck at me) is a big deal: it means I could (for example) try different diets and get a blood test monthly to see how those diets affect me.
... is that crystal meth is relatively easy to obtain, and it can be converted to Sudafed. Now all we need is for researchers to simplify it into a practical procedure for the layman.
You're assuming, of course, that those who write the regulations come from this relatively rare species of intelligent people. The problem is, we have no way to guarantee this. And we run the risk of codifying in regulation something remarkably stupid instead.
I'm not suggesting we shouldn't use regulation. I'm suggesting that concluding we should hand things over to technically competent technocrats because technically competent people are rare--especially in a world which seems to discount technical competence--runs the risk of creating single points of failure.
Pilots at those airports simply revert to the rules surrounding uncontrolled airports--which is to coordinate with other pilots at the same airport on the tower frequency in order to work out (according to some well defined rules) who has landing and takeoff priority.
Some information here: FAA: Operations at non-towered airports
It's still a valid question to respond to, if only because for every person who steps up to the plate asking questions to alleviate their ignorance, there are a hundred others out there implementing authentication on various public web sites who remain steeped in their own ignorance.
And programmers are an egotistical lot: when was the last time you ever told a programmer "leave that to the experts" and didn't get "fuck you, asshole; I know what I'm doing!!!" as a response?
"Use bcrypt. Just use bcrypt. Or PBKDF2 if you must. But really bcrypt. General hash (MD, SHA) != Cryptographic hash function. All that extra cleverness that you're doing with UUIDs is superfluous if you just use a proper HASH function (did I mention bcrypt?)."
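For what it's worth, PBKDF2 is right there in Python's standard library, so there's no excuse for rolling something weaker. A minimal sketch (the function names, iteration count, and salt length are my illustrative choices, not anything canonical):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to what your hardware tolerates

def hash_password(password, salt=None):
    """Derive a slow, salted digest with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

You store the salt alongside the digest; bcrypt does essentially the same thing for you, packing the salt and cost factor into a single string.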
The purpose of using a separate per-user token (a salt) is so that when (not "if") someone takes your database, password similarity won't jump out at them. Meaning that if a bunch of users use "123456" as their password, they won't all hash to the same value in the database.
You have to assume if someone steals your database they're not stealing a single user record, but your entire database of 5 million users, and they now have 5 million data points to help them reverse engineer which hashing function was used. And even the best cryptographic one-way hashing function will generate the same output for the same input each time--meaning if 10,000 of your 5 million users used "123456", well, it will show up as 10,000 identical fields, giving the attacker a hint as to how the passwords were hashed.
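You can see the problem in a few lines. This sketch uses SHA-256 purely to stand in for whatever hash the site uses (per the comments above, you'd really use bcrypt or PBKDF2):

```python
import hashlib
import os

# Two different users who both chose "123456" -- without a salt,
# the stored digests are identical, which leaks the pattern.
alice = hashlib.sha256(b"123456").hexdigest()
bob = hashlib.sha256(b"123456").hexdigest()
assert alice == bob

# With a per-user random salt, the same password stores differently.
salt_a, salt_b = os.urandom(16), os.urandom(16)
h_a = hashlib.sha256(salt_a + b"123456").hexdigest()
h_b = hashlib.sha256(salt_b + b"123456").hexdigest()
assert h_a != h_b
```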
If you use a one way hash that has been properly salted (i.e., HASH(SALT + password) ), then you should never be able to retrieve forgotten passwords, ever. If you can retrieve a lost password for a user, then you've screwed it up, because if you can recover a lost password, someone who scraped your database can recover a lost password.
The worst, by the way, are web sites which require you to pick a super-secure password (at least 12 characters long, must contain punctuation, both upper and lower case letters, a number character, an Egyptian hieroglyph, and must not match the last 15 passwords used in the past and must be changed every 30 days)--then store the password and password history as plain text in the user database. Those are the guys I'd love to murder in cold blood.
Personally I've always liked using some element of a user record attribute as part of the SALT--such as having a UUID associated with each user record that becomes part of the salt for the hash (i.e., HASH(SALT + password + UUID) )--because this means if someone does scrape your database it's computationally a little more difficult to reverse engineer the passwords in the database: even if a bunch of people use "123456" as their password, the hashes will be different for each of those users. Of course the UUID must never change or else you'll lock your users out.
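A rough sketch of that scheme, with a made-up SITE_SALT and SHA-256 standing in for the real hash function (again: use bcrypt or PBKDF2 in practice):

```python
import hashlib
import uuid

SITE_SALT = b"example-site-salt"  # assumed site-wide value, for illustration only

def make_user_record(password):
    """Create a user record whose hash mixes in an immutable per-user UUID."""
    user_uuid = str(uuid.uuid4())
    digest = hashlib.sha256(
        SITE_SALT + password.encode() + user_uuid.encode()
    ).hexdigest()
    return {"uuid": user_uuid, "hash": digest}

def check_password(record, password):
    """Recompute HASH(SALT + password + UUID); the UUID must never change."""
    digest = hashlib.sha256(
        SITE_SALT + password.encode() + record["uuid"].encode()
    ).hexdigest()
    return digest == record["hash"]
```

Two users who both pick "123456" end up with different stored hashes, because each record carries a different UUID.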
I'm also a fan of the POP3 protocol's APOP authentication mechanism, where sending credentials over the 'net requires two transactions: (1) obtaining a unique token for that session, then (2) hashing the password against that token to transmit to the back end. Of course this means you wind up hashing the plain text password *twice*: since you don't have the password on the back end (but its hash) you can only compare against HASH(TOKEN + hashed_password), and on the front end you wind up calculating HASH(TOKEN + HASH(SALT + password + UUID) ). But that requires a lot of work in the client.
Simply sending HASH(SALT + password + UUID) rather than hashing the hash with an additional token means you're subject to a replay attack, where a third party could listen in on the conversation and simply replay the login packet you send to connect to the server.
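A sketch of that APOP-style exchange, with hypothetical function names and SHA-256 in place of the real primitives. The point is that only HASH(TOKEN + stored_hash) ever crosses the wire, and the token dies with the session:

```python
import hashlib
import hmac
import os

SITE_SALT = b"example-site-salt"  # assumed, for illustration

def stored_hash(password, user_uuid):
    """What the server keeps: HASH(SALT + password + UUID)."""
    return hashlib.sha256(
        SITE_SALT + password.encode() + user_uuid.encode()
    ).hexdigest()

def issue_token():
    """Server: a fresh one-time token per login attempt."""
    return os.urandom(16).hex()

def client_response(token, password, user_uuid):
    """Client: transmit only HASH(TOKEN + stored_hash), never the hash itself."""
    inner = stored_hash(password, user_uuid)
    return hashlib.sha256((token + inner).encode()).hexdigest()

def server_verify(token, response, db_hash):
    """Server: recompute from the stored hash and compare in constant time."""
    expected = hashlib.sha256((token + db_hash).encode()).hexdigest()
    return hmac.compare_digest(expected, response)
```

A captured response is useless on replay, because the server issues a new token for every attempt.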
And while I know a lot of folks claim that all of this is mitigated by using SSL, it doesn't protect against man-in-the-middle attacks, including incidental man-in-the-middle attacks created by certain proxy gateways which use their own certs in order to decrypt HTTPS traffic to sniff for viruses or enforce corporate guidelines for acceptable use.
Ultimately security won't stop the most determined hackers; you're not stopping the NSA, for example. But you can stop the script kiddies and disgruntled employees by taking some precautions--such as never storing sensitive information in a database (like credit cards) unencrypted, and using one-way hashes to store passwords.
Oh, and as a footnote: unless you have a Ph.D. in cryptography, don't write your own random functions or hash functions. Yes, I've seen it in the field. Instead, use a cryptographically secure hash function. Heck, even MD5 is going to be better than anything you try to roll on your own.
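In Python, for example, the `secrets` module wraps the OS CSPRNG, so there's no reason to hand-roll a generator; the variable names below are just illustrations of where you'd use it:

```python
import secrets

# The OS CSPRNG via the standard library -- never random.random()
# for anything security-sensitive.
session_token = secrets.token_urlsafe(32)  # e.g. session IDs
reset_code = secrets.token_hex(16)         # e.g. password-reset codes
salt = secrets.token_bytes(16)             # e.g. per-user salts
```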
Okay, an internet connected thermostat does add functionality. An internet connected fire detector and an internet connected home security system also make sense. (Though if you're working on a home security system that hooks up to the Internet and you don't think about software security, you're an idiot who needs to be put into protective custody and fed by a nurse so you don't accidentally poke your eyes out while eating with a plastic fork.)
But why do I need an internet connected oven, refrigerator, or toaster? Do I need an internet connected coffee maker? An internet connected microwave? What value do they add, really? Notifications?
Time is also a cost; if it takes me 20 minutes to drive somewhere by car but an hour to get there by mass transit, then the equation makes no sense. If, on the other hand, I lived somewhere where driving is impractical and an hour's drive can be replaced by 20 minutes on mass transit, then clearly I'd take mass transit almost regardless of the cost.
panic: can't find /