
Comment: You're both absolutely, painfully correct. (Score 1) 344

by Vesvvi (#47419613) Attached to: Climate Change Skeptic Group Must Pay Damages To UVA, Michael Mann

This is the saddest part about the AGW debate. From my viewpoint, it looks like the pro-AGW people pushed back against criticism by overwhelming their "opponents" with data and consensus, and tried to extinguish them by marginalizing them.

That was absolutely the worst thing that could have been done, for the reasons you note above. On the other hand, if they had embraced and extended, the whole debate could have been extinguished.

We could be working with alternate energy sources for reasons of dominance in international trade, national security / energy independence, etc. Instead, we've actively been pushed backward by the pro-AGW agenda, and they deserve some of the blame for that.

Comment: Re:and how do they track users across muilt units? (Score 1) 43

by Vesvvi (#46425499) Attached to: Stanford Team Tries For Better Wi-Fi In Crowded Buildings

I assume this is something like Ubiquiti's "zero-handoff" system, where the wireless APs coordinate to form a virtual network that spans all the APs. I'm not a networking pro, but I guess you could say it's like the opposite of a VLAN? Although based on the article, it sounds like there are effectively VLANs running on their mesh to partition it into per-user virtualized segments. In that situation, there is nothing that prevents you from having a static or even public IP.

Comment: Re:Solved. Next? (Score 1) 533

by Vesvvi (#46003059) Attached to: Ask Slashdot: What's the Most Often-Run Piece of Code -- Ever?

You could break it down further to look at the lower-level "operations" being performed.

In order to see if the next amino acid should be a glutamine instead of an asparagine, each of the two is bounced into the active site to see if it "fits": if it fits, it's incorporated, and if not, it bounces out and the next one is tried.

So you could think of this as an if, else repeated really ridiculously quickly: at the speed of molecular diffusion, minus just a tiny bit. That makes it both extremely fast and extremely parallel, as you observe.

As a side note, nearly every molecular interaction operates in this way, unless a substrate is literally "handed off" between two enzymes. That would definitely make a molecular recognition operation the most highly "executed" routine (at least down to that scale) by untold orders of magnitude.
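The trial-and-error selection described above can be caricatured in code. This is a toy sketch, not biochemistry: `select_residue` and its arguments are invented names, and real incorporation is probabilistic rather than an exact match, but it shows the shape of the "if it fits, keep it; else try the next one" loop:

```python
import random

# Toy model: amino-acid selection as a repeated if/else. Diffusion
# "bounces" a candidate into the active site; if it fits it is
# incorporated, otherwise it bounces out and the next one is tried.
def select_residue(active_site, candidates, seed=0):
    rng = random.Random(seed)
    attempts = 0
    while True:
        candidate = rng.choice(candidates)  # diffusion brings one in
        attempts += 1
        if candidate == active_site:        # it "fits"
            return candidate, attempts      # incorporated
        # else: bounces out; the loop tries again
```

In a cell this loop runs at the speed of diffusion, in parallel at every active site at once, which is what makes the aggregate "execution count" so enormous.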

Comment: Re:if you can access it on a website (Score 1) 107

by Vesvvi (#45637187) Attached to: Storing Your Encrypted Passwords Offline On a Dedicated Device

It doesn't specifically solve any of those problems (except forbidden punctuation marks), although it simplifies them a bit.

Required characters (uppercase, punctuation, numbers) can be added post-hash as an insecure suffix to meet site requirements. These don't add any security, so you can carry them around with you, put them on a public website, or leave them on a sticky note on your monitor: "work suffix: #U1_. Github suffix: (#$JHi/."
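As a sketch of that idea (the suffix and password strings here are just the examples from this thread, and `add_suffix` is an invented helper name), appending the non-secret suffix is trivial:

```python
# Hypothetical helper: append a non-secret suffix to the hashed password
# so it satisfies a site's composition rules (uppercase, digit, symbol).
# The suffix adds no real security, so it can live on a sticky note.
def add_suffix(hashed_password, suffix):
    return hashed_password + suffix

print(add_suffix("dee4ea048518f588", "#U1_"))
# -> dee4ea048518f588#U1_
```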

The same thing can be said for length issues, although I've found that most systems these days are happy with 16. Admittedly, with the character set restriction it would be better to keep it long, but I would argue that by avoiding sending plaintext to the servers, we're avoiding the vast majority of vulnerabilities.

Expiration is made more simple by making it easier to remember passwords: changing one isn't a big deal. This continues onto your next point as well: you'll never have an error message that your new password is too similar to your old one.

I think your last point shows another benefit of terminating private passwords as soon as possible. On insecure hardware, your public (hashed) password is exposed, and of course it could be captured for future use. But that will limit exposure to a single service, and it won't reveal any hints about your password trends.

You actually overlooked the most important point: if we're hashing passwords on a secured and user-controlled device, it's very easy (space-, energy-, speed-efficient) to get the public/hashed passwords off (LCD), but it's still a bit annoying to get the private passwords onto the device. UI concerns are a problem: I can do it extremely fast and efficiently if I'm working on a desktop, but it's a bit slower even on a tablet. The further we go towards hardware which can be fully locked down (keyfob with a single chip), the harder it is to get the data onto the device.

Comment: Re:if you can access it on a website (Score 1) 107

by Vesvvi (#45634199) Attached to: Storing Your Encrypted Passwords Offline On a Dedicated Device

I don't understand why there is so much effort placed on storing passwords. We already know what to do with passwords from the perspective of the server: discard them as soon as possible!

The password should be salted and hashed immediately, and it should never be stored in plaintext. So let's not store them at all: let the user remember the risky password, and encrypt it as soon as possible. It's a validated methodology, and it removes many/most of the trust issues of the user/server relationship: I don't care if the server fails to salt my password if it's already encrypted.

Now take this to the next step. The user-side "passwords" can be pretty weak, since they need to be memorable but not high-entropy. We don't want to re-use the same "password" everywhere (different sites/services), since that's a risk, but we can come up with a weak per-site salt that's easy to remember. Combine that with a relatively weak password and we have a winner.

Use-everywhere password: invsqrt
Site: "Salt": modmadness. Full password: invsqrtmodmadness
Password sent to server: "dee4ea048518f588"

Use-everywhere password: invsqrt
Site: "Salt": xyproblem. Full password: invsqrtxyproblem
Password sent to server: "be6065c67f055583"

Yes, I know it's just a hash, but this is a simple example. There's some loss of strength from key vs hash lengths, re-using "passwords" etc, and I've thrown in some complication, but I think the general idea is sound. The most important fact is that insecure, memorable, secret information never leaves my brain. Ok, in practice it does: I enter it onto an offline encryption device, but it never goes anywhere else.
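A minimal sketch of the scheme, using SHA-256 truncated to 16 hex characters as a stand-in for whatever hash the offline device actually runs (the sample outputs above were illustrative, so this won't reproduce them, and `site_password` is an invented name):

```python
import hashlib

def site_password(memorable, site_salt, length=16):
    """Derive a per-site password: memorable passphrase plus an
    easy-to-remember per-site salt, hashed and truncated. Only the
    derived output ever leaves the user's hands."""
    digest = hashlib.sha256((memorable + site_salt).encode("utf-8"))
    return digest.hexdigest()[:length]

# Same secret, different salts -> unrelated per-site passwords.
work = site_password("invsqrt", "modmadness")
github = site_password("invsqrt", "xyproblem")
assert work != github and len(work) == 16
```

In practice you'd run this on the offline device and only ever type the derived string into the site.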

  • There is no private key to lose.
  • I don't have to store private information.
  • The public-side "passwords" are high-entropy and pseudo-random.
  • The user-side "passwords" are highly memorable.
  • An offline encryption device adds security, but it isn't necessary: in an emergency I can generate hashes nearly anywhere, since I carry my secure passphrases around with me in my brain.

You can stack additional levels of complication to make it more robust, but even the crudest implementation puts you in the top 0.01% of hardest-to-crack passwords. For example, your encryption fob can contain a private key: smash the fob and you have securely destroyed the ability to re-create passwords. It also would make the outgoing passwords much more secure.

Comment: Please don't re-invent the wheel. (Score 4, Insightful) 116

by Vesvvi (#45301783) Attached to: A Protocol For Home Automation

Please don't re-invent the wheel unless you need to. By that, I mean to say that automation and interconnection of "gadgets" is a well-established field in industry and tech. For example, vehicle ECU and sensor systems, factory automation, and data acquisition systems are all now decades old, and we should have a really solid idea of how to do these things properly.

Of course these existing systems aren't the same as what we're talking about here, with modules that span different physical link layers, protocols, etc. I just hope that we can take the best lessons from existing "gadget integration" attempts to make forward progress more successful and not just something doomed to rapid obsolescence.

For some fun and background, have a look at the old HPIB/GPIB physical/protocol standard, which was used in many different pieces of scientific equipment. When that somewhat died out it was replaced by CAN. Agilent uses that for their HPLCs (maybe test equipment, too?), and Waters uses the same physical link, but with a different protocol? Other vendors still work with contact-closure, and USB is becoming more popular, but that pushes so much onto the host computer and really enforces lock-in.

I will personally be watching this closely from the perspective of someone who operates a lot of data-acquisition equipment. Could this be the foundation for better interop between different vendors at the more commercial/research level, in addition to the consumer? I hope so.

Comment: what this will look like: (Score 4, Interesting) 27

by Vesvvi (#45217999) Attached to: PubMed Commons Opens Up Scientific Articles To User Comments

I'm going to go out on a limb and predict where this will go first: improved metadata and citation networking. I'm an eligible author with pretty good experience with the system.

The initial comments will not be excessively negative. As I've mentioned before on Slashdot, publications are a summary of findings and never the full story: the authors are always holding back. On average, if it looks like they've overlooked something (from the standpoint of the reader), it's more likely to be an error or oversight on the reader's part than the authors'. I think people generally appreciate this point, so they'll be conservative in their criticism to avoid looking foolish.

Getting cited is a really big deal, and not being cited (when your work is highly relevant to the topic) is considered a serious slight. I've seen nasty phone and email messages bounced around because of this. So in the context of comments, you're going to see a lot of things along the lines of "They should have considered author X, work Y from 2003 because it is highly relevant." This is a safe comment to make, but it can also be used to make a subtle point, drawing attention to competing work the authors chose to ignore, etc.

There won't be a lot of novel observations/data/interpretations being presented. Online comment pages will not be considered a place to stake your claim on an idea, so people won't want to be "scooped" there, and they will reserve key insights for themselves.

There will be a lot of referencing preprint sources as they become more popular. This will be a new form of citation: retroactive citation of "future" (current) works, and it will greatly improve the citation network. This is important because that network is critical (besides in-person networking) to follow the development of a research field.

Comment: Re:Right move (Score 1) 182

by Vesvvi (#45177893) Attached to: DNA Sequence Withheld From New Botulism Paper

There is a chance I'm wrong (I buy proteins/peptides, not DNA), but I doubt it.

Notice on the page you linked that they are always describing "genes" and not generic sequences. Also note that the two categories are "human/mouse/rat" and "other", and that they specify "for ORF genes present in existing NCBI database". This is not a coincidence: they can offer these products because they know the sequence can be cloned out of the host species, after which "mutagenesis is starting from $149/mutation".

To my knowledge there is still no magic bullet for long DNA synthesis, although it appears I was wrong about the scale. Genscript will sell oligos in the range of 15-60, not 5-20, so that will substantially reduce the amount of work to assemble a bunch of them together.

Comment: Re:I know the scientist... (Score 1) 182

by Vesvvi (#45175631) Attached to: DNA Sequence Withheld From New Botulism Paper

BSL-3 labs will attract DHS-type attention when they don't follow the rules carefully. Botulinum of any kind is a "select agent".

On the other hand, there are a lot of "loopholes" (maybe not the best term). I've been surprised to see how simple it was to get samples out of BSL-4 and into an unregulated environment, even while following all the rules to the letter.

Comment: Re:Terrists (Score 2) 182

by Vesvvi (#45175585) Attached to: DNA Sequence Withheld From New Botulism Paper

Sorry, that reference doesn't mean what you think it means. GP wants to know what it takes to go from arbitrary data to protein. The Science paper you linked describes what it takes (more than a decade ago) to take existing proteins and deposit them in an organized pattern onto a surface, which is a completely different topic.

I am not current on the data->protein problem, but to the best of my knowledge the current state of the art, at scale, is to engineer an organism to do it for you. All of the in vitro work ("synthetic" protein production machinery in a test tube, without live cells) will not scale to useful quantities: it's still academic.
