Comment Re:and how do they track users across muilt units? (Score 1) 43

I assume this is something like Ubiquiti's "zero-handoff" system, where the wireless APs coordinate to form a virtual network that spans all the APs. I'm not a networking pro, but I guess you could say it's like the opposite of a VLAN? Although based on the article, it sounds like there are effectively VLANs running on their mesh, partitioning it into per-user virtualized segments. In that situation, there is nothing preventing you from having a static or even public IP.

Comment Re:Solved. Next? (Score 1) 533

You could break it down further to look at the lower-level "operations" being performed.

To see whether the next amino acid should be a glutamine instead of an asparagine, each of the two is bounced into the active site to see if it "fits": if it fits, it's incorporated, and if not, it bounces out and the next one is tried.

So you could think of this as an if/else repeated ridiculously fast: at the speed of molecular diffusion, minus just a tiny bit. That makes it both extremely fast and extremely parallel, as you observe.
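
If it helps to see the analogy spelled out, here's a toy sketch of that loop (pure illustration: the candidate pool, names, and the random-choice "diffusion" are mine, not how the chemistry is actually modeled):

import random

# Candidate residues delivered by diffusion; names are illustrative only.
pool = ["glutamine", "asparagine", "glycine", "serine"]

def fits(candidate, wanted):
    # Stand-in for molecular recognition: does this residue "fit" the site?
    return candidate == wanted

def incorporate_next(wanted, pool):
    attempts = 0
    while True:
        candidate = random.choice(pool)   # diffusion bounces a random candidate in
        attempts += 1
        if fits(candidate, wanted):       # if it fits, it's incorporated...
            return candidate, attempts
        # ...if not, it bounces out and the next one is tried

print(incorporate_next("glutamine", pool))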

As a side note, nearly every molecular interaction operates this way, unless a substrate is literally "handed off" between two enzymes. That would make molecular recognition the most highly "executed" routine (at least down to that scale) by untold orders of magnitude.

Comment Re:if you can access it on a website (Score 1) 107

It doesn't specifically solve any of those problems (except forbidden punctuation marks), although it simplifies them a bit.

Required characters (uppercase, punctuation, numbers) can be added post-hash as an insecure suffix to meet site requirements. These don't add any security, so you can carry them around with you, put them on a public website, or leave them on a sticky note on your monitor: "work suffix: #U1_. Github suffix: (#$JHi/."

The same thing can be said for length issues, although I've found that most systems these days are happy with 16. Admittedly, with the character set restriction it would be better to keep it long, but I would argue that by avoiding sending plaintext to the servers, we're avoiding the vast majority of vulnerabilities.

Expiration is made simpler because passwords become easier to remember: changing one isn't a big deal. The same goes for your next point: you'll never get an error message saying your new password is too similar to your old one.

I think your last point shows another benefit of terminating private passwords as soon as possible. On insecure hardware, your public (hashed) password is exposed, and of course it could be captured for future use. But that limits the exposure to a single service, and it won't reveal any hints about your password habits.

You actually overlooked the most important point: if we're hashing passwords on a secure, user-controlled device, it's very easy (space-, energy-, and speed-efficient) to get the public/hashed passwords off the device (an LCD will do), but it's still a bit annoying to get the private passwords onto it. UI concerns are a problem: I can do it extremely fast and efficiently on a desktop, but it's a bit slower even on a tablet. The further we go toward hardware that can be fully locked down (a keyfob with a single chip), the harder it is to get the data onto the device.

Comment Re:if you can access it on a website (Score 1) 107

I don't understand why there is so much effort placed on storing passwords. We already know what to do with passwords from the perspective of the server: discard them as soon as possible!

The password should be salted and hashed immediately, and it should never be stored in plaintext. So let's not store them at all: let the user remember the risky password, and hash it as soon as possible. It's a validated methodology, and it removes many/most of the trust issues of the user/server relationship: I don't care whether the server salts my password if what it receives is already a hash.

Now take this to the next step. The user-side "passwords" can be pretty weak, since they need to be memorable but not high-entropy. We don't want to re-use the same "password" everywhere (different sites/services), since that's a risk, but we can come up with a weak per-site salt that's easy to remember. Combine that with a relatively weak password and we have a winner:

Use-everywhere password: invsqrt
Site: slashdot.org. "Salt": modmadness. Full password: invsqrtmodmadness
hashlib.sha256(getpass.getpass()).hexdigest()[::2][:16]
Password sent to server: "dee4ea048518f588"

Use-everywhere password: invsqrt
Site: stackexchange.com. "Salt": xyproblem. Full password: invsqrtxyproblem
hashlib.sha256(getpass.getpass()).hexdigest()[::2][:16]
Password sent to server: "be6065c67f055583"
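
For concreteness, here's a minimal runnable version of those two examples (Python 3, where the input has to be encoded to bytes before hashing; the function name and the optional suffix argument are just illustration, the suffix borrowing the "insecure suffix" idea from my other comment in this story):

import hashlib
import getpass

def site_password(secret, site_salt, suffix=""):
    """Derive the public, per-site password that gets sent to the server.

    secret    -- the memorable use-everywhere password (never leaves the device)
    site_salt -- the easy-to-remember per-site salt, e.g. "modmadness"
    suffix    -- optional insecure suffix to satisfy character-class rules
    """
    digest = hashlib.sha256((secret + site_salt).encode()).hexdigest()
    return digest[::2][:16] + suffix

secret = getpass.getpass("use-everywhere password: ")
print(site_password(secret, "modmadness"))          # slashdot.org
print(site_password(secret, "xyproblem", "#U1_"))   # stackexchange.com, with a required-character suffix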

Yes, I know it's just a hash, but this is a simple example. There's some loss of strength from key vs. hash lengths, re-using "passwords", etc., and I've thrown in some arbitrary complication, but I think the general idea is sound. The most important fact is that the low-entropy, memorable, secret information never leaves my brain. OK, in practice it does: I enter it into an offline encryption device, but it never goes anywhere else.

  • There is no private key to lose.
  • I don't have to store private information.
  • The public-side "passwords" are high-entropy and pseudo-random.
  • The user-side "passwords" are highly memorable.
  • An offline encryption device adds security, but it isn't necessary: in an emergency I can generate hashes nearly anywhere, since I carry my secure passphrases around with me in my brain.

You can stack additional levels of complication to make it more robust, but even the crudest implementation puts you in the top 0.01% of hardest-to-crack passwords. For example, your encryption fob can contain a private key: smash the fob and you have securely destroyed the ability to re-create your passwords. It would also make the outgoing passwords much more secure.
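
For what it's worth, here's a rough sketch of what that fob variant might look like using a keyed hash (HMAC); the key handling and names are my own assumptions, just to show the shape of it:

import os
import hmac
import hashlib

# Hypothetical fob key: generated once when the fob is provisioned and
# stored nowhere else. Smash the fob and every derived password is
# unrecoverable, even with the memorable password and salt in hand.
fob_key = os.urandom(32)

def site_password(secret, site_salt, key):
    # Keyed hash instead of a bare hash: the output now depends on the
    # fob's private key as well as the memorable password and per-site salt.
    mac = hmac.new(key, (secret + site_salt).encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

print(site_password("invsqrt", "modmadness", fob_key))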

Comment Please don't re-invent the wheel. (Score 4, Insightful) 116

Please don't re-invent the wheel unless you need to. By that, I mean to say that automation and interconnection of "gadgets" is a well-established field in industry and tech. For example, vehicle ECU and sensor systems, factory automation, and data acquisition systems are all now decades old, and we should have a really solid idea of how to do these things properly.

Of course these existing systems aren't the same as what we're talking about here, with modules that span different physical link layers, protocols, etc. I just hope we can take the best lessons from existing "gadget integration" attempts so that this effort is actually successful and not just doomed to rapid obsolescence.

For some fun and background, have a look at the old HPIB/GPIB physical/protocol standard (http://en.wikipedia.org/wiki/IEEE-488), which was used in many different pieces of scientific equipment. When that largely died out, it was replaced by CAN (http://www.team-cag.com/support/theory/chroma/hplc_bas_at/system/cableConnections.html). Agilent uses that for their HPLCs (maybe test equipment, too?), and Waters uses the same physical link but, I believe, a different protocol. Other vendors still work with contact closure, and USB is becoming more popular, but that pushes so much onto the host computer and really enforces lock-in.

I will personally be watching this closely from the perspective of someone who operates a lot of data-acquisition equipment. Could this be the foundation for better interop between different vendors at the more commercial/research level, in addition to the consumer? I hope so.

Comment what this will look like: (Score 4, Interesting) 27

I'm going to go out on a limb and predict where this will go first: improved metadata and citation networking. I'm an eligible author with pretty good experience with the system.

The initial comments will not be excessively negative. As I've mentioned before on Slashdot, publications are a summary of findings and never the full story: the authors are always holding back. On average, if it looks like they've overlooked something (from the standpoint of the reader), it's more likely to be an error or oversight on the reader's part than the authors'. I think people generally appreciate this point, so they'll be conservative in their criticism to avoid looking foolish.

Getting cited is a really big deal, and not being cited (when your work is highly relevant to the topic) is considered a serious slight. I've seen nasty phone and email messages bounced around because of this. So in the context of comments, you're going to see a lot of things along the lines of "They should have considered author X, work Y from 2003 because it is highly relevant." This is a safe comment to make, but it can also be used to make a subtle point, drawing attention to competing work the authors chose to ignore, etc.

There won't be a lot of novel observations/data/interpretations presented. Online comment pages will not be considered a place to stake your claim on an idea, so people won't want to be "scooped" and will reserve key insights for themselves.

There will be a lot of referencing of preprint sources as they become more popular. This will be a new form of citation: retroactive citation of "future" (current) works, and it will greatly improve the citation network. That matters because the citation network is critical (besides in-person networking) for following the development of a research field.

Comment Re:Right move (Score 1) 182

There is a chance I'm wrong (I buy proteins/peptides, not DNA), but I doubt it.

Notice on the page you linked that they are always describing "genes" and not generic sequences. Also note that the two categories are "human/mouse/rat" and "other", and that they specify "for ORF genes present in existing NCBI database". This is not a coincidence: they can offer these products because they know the gene can be cloned out of the host species, after which "mutagenesis is starting from $149/mutation".

To my knowledge there is still no magic bullet for long DNA synthesis, although it appears I was wrong about the scale. Genscript will sell oligos in the range of 15-60 bases, not 5-20, which substantially reduces the amount of work needed to assemble a bunch of them together.

Comment Re:I know the scientist... (Score 1) 182

BSL-3 labs will attract DHS-type attention when they don't follow the rules carefully. Botulinum of any kind is a "select agent": http://www.selectagents.gov/Select%20Agents%20and%20Toxins%20List.html

On the other hand, there are a lot of "loopholes" (maybe not the best term). I've been surprised to see how simple it was to get samples out of BSL-4 and into an unregulated environment, even while following all the rules to the letter.

Comment Re:Terrists (Score 2) 182

Sorry, that reference doesn't mean what you think it means. GP wants to know what it takes to go from arbitrary data to protein. The Science paper you linked describes what it took (more than a decade ago) to take existing proteins and deposit them in an organized pattern onto a surface, which is a completely different topic.

I am not current on the data->protein problem, but to the best of my knowledge the current state of the art, at scale, is to engineer an organism to do it for you. All of the in vitro work ("synthetic" protein production machinery in a test tube, without live cells) will not scale to useful quantities: it's still academic.

Comment Re:Right move (Score 4, Informative) 182

You and the previous few generations of comments are each partly right and partly wrong.

The comment 3-up is wrong that anyone can do it: even with the sequence, it would be extremely difficult even for top-level professionals to do it from scratch.

The comment 2-up is wrong to say that it's hard, because if you can get the DNA construct then it's extremely easy. This deserves clarification: nearly everyone here (Slashdot audience, not molecular biologists) is going to assume that there's a magic black box that will turn a sequence into a real physical DNA construct, and they are mistaken. Data/sequence to DNA construct, absent anything else, is extremely hard.

You are correct about nearly everything, except that it is not simple to just buy big sections of DNA. If you want 5-20 bases, that's not a problem. But the gene for this protein is ~450 bases long. You can't just order something like that, and "stitching it together" is possible but would probably take years to get right, even for a pro.

But the idea behind your comment is still valid, because this gene will not be a from-scratch, random sequence. It's going to be 95+% identical to existing sequences, so instead of splicing together 60 synthetic sequences (purchased from a company), you only need to splice together maybe 2-4 big pieces. Those pieces could be purchased, or possibly isolated if you can get the bacteria.

Comment Re: Is this the right move? (Score 1) 182

No, it will slow down professionals as well.

Without the sequence, what can you do? It's pretty much guaranteed that the new strain produces a toxin with extremely high sequence homology to existing strains, so you know that to make the new toxin you just have to add/delete/exchange a few amino acids, or maybe add an insertion.

But there is no way to know or guess what should be altered. There are ways to create libraries of mutants, but then they will need to be screened, and that will not be a fast, simple, or safe process.

Without access to the original strain, there's not much you can do, and the few things you can attempt are no better than starting from scratch.
