Comment what this will look like: (Score 4, Interesting) 27

I'm going to go out on a limb and predict where this will go first: improved metadata and citation networking. I'm an eligible author with pretty good experience with the system.

The initial comments will not be excessively negative. As I've mentioned before on Slashdot, publications are a summary of findings and never the full story: the authors are always holding back. On average, if it looks like they've overlooked something (from the reader's standpoint), it's more likely an error or oversight on the reader's part than on the authors'. I think people generally appreciate this point, so they'll be conservative in their criticism to avoid looking foolish.

Getting cited is a really big deal, and not being cited (when your work is highly relevant to the topic) is considered a serious slight. I've seen nasty phone and email messages bounced around because of this. So in the context of comments, you're going to see a lot of things along the lines of "They should have considered author X, work Y from 2003 because it is highly relevant." This is a safe comment to make, but it can also be used to make a subtle point, drawing attention to competing work the authors chose to ignore, etc.

There won't be a lot of novel observations/data/interpretations being presented. Online comment pages will not be considered a place to stake your claim on an idea, so posting a key insight there just risks getting "scooped"; people will reserve those insights for themselves.

There will be a lot of referencing of preprint sources as they become more popular. This will be a new form of citation: retroactive citation of "future" (current) works, and it will greatly improve the citation network. This matters because that network is, alongside in-person networking, the main way to follow the development of a research field.
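For the graph-minded, here's a minimal sketch of what a retroactive citation does to the network, in plain Python with hypothetical paper IDs (nothing here is a real database or citation format):

from collections import defaultdict

# Citation network as a directed graph: citations[paper] = works it cites.
# Paper IDs below are hypothetical, purely for illustration.
citations = defaultdict(set)

# Ordinary forward-in-time citations captured at publication.
citations["smith2003"].update({"jones1998", "lee2001"})
citations["doe2015"].add("smith2003")

# A comment on the 2003 paper points readers to a relevant later preprint --
# a "retroactive" edge the original reference list could never contain.
citations["smith2003"].add("preprint2016")

def cited_by(target, network):
    """Every work in the network that cites `target`."""
    return {paper for paper, refs in network.items() if target in refs}

print(cited_by("smith2003", citations))  # {'doe2015'}
print(sorted(citations["smith2003"]))    # now includes the later preprint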

Comment Re:Right move (Score 1) 182

There is a chance I'm wrong (I buy proteins/peptides, not DNA), but I doubt it.

Notice on the page you linked that they are always describing "genes" and not generic sequences. Also note that the two categories are "human/mouse/rat" and "other", and that they specify "for ORF genes present in existing NCBI database". This is not a coincidence: they can offer these products because they know the gene can be cloned out of the host species, after which "mutagenesis is starting from $149/mutation".

To my knowledge there is still no magic bullet for long DNA synthesis, although it appears I was wrong about the scale. Genscript will sell oligos in the range of 15-60 bases, not 5-20, so that will substantially reduce the amount of work needed to assemble a bunch of them together.

Comment Re:I know the scientist... (Score 1) 182

BSL-3 labs will attract DHS-type attention when they don't follow the rules carefully. Botulinum of any kind is a "select agent": http://www.selectagents.gov/Select%20Agents%20and%20Toxins%20List.html

On the other hand, there are a lot of "loopholes" (maybe not the best term). I've been surprised to see how simple it was to get samples out of BSL-4 and into an unregulated environment, even while following all the rules to the letter.

Comment Re:Terrists (Score 2) 182

Sorry, that reference doesn't mean what you think it means. GP wants to know what it takes to go from arbitrary data to protein. The Science paper you linked describes what it takes (more than a decade ago) to take existing proteins and deposit them in an organized pattern onto a surface, which is a completely different topic.

I am not current on the data->protein problem, but to the best of my knowledge the current state of the art, at scale, is to engineer an organism to do it for you. All of the in vitro work ("synthetic" protein production machinery in a test tube, without live cells) will not scale to useful quantities: it's still academic.

Comment Re:Right move (Score 4, Informative) 182

You and the previous few generations of comments are each partly correct and partly wrong.

The comment 3-up is wrong that anyone can do it: even with the sequence, it would be extremely difficult for even top-level professionals to do it from scratch.

The comment 2-up is wrong to say that it's hard, because if you can get the DNA construct then it's extremely easy. This deserves clarification: nearly everyone here (Slashdot audience, not molecular biologists) is going to assume that there's a magic black box that will turn a sequence into a real, physical DNA construct, and they are mistaken. Going from data/sequence to a DNA construct, absent anything else, is extremely hard.

You are correct about nearly everything, except that it is not simple to just buy big sections of DNA. If you want 5-20 bases, that's not a problem. But the gene for this protein is ~450 bases long. You can't just order something like that, and "stitching it together" is possible but would probably take years to get right, even for a pro.

But the idea behind your comment is still valid, because this gene will not be a from-scratch, random sequence. It's going to be 95+% identical to existing sequences, so instead of splicing together 60 synthetic sequences (purchased from a company), you only need to splice together maybe 2-4 big pieces. Those pieces could be purchased, or possibly isolated if you can get the bacteria.
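To put rough numbers on that (plain Python; the ~450-base target and the overlap size are assumptions for illustration, not a protocol):

import math

def pieces_needed(target_len, piece_len, overlap):
    """Pieces of length `piece_len`, overlapping by `overlap` bases, needed to
    cover a target; assumes simple end-to-end overlap assembly."""
    step = piece_len - overlap              # new sequence each piece contributes
    return max(1, math.ceil((target_len - piece_len) / step) + 1)

target = 450   # assumed gene length from the discussion above

for piece_len in (20, 60, 200):
    print(piece_len, "base pieces:", pieces_needed(target, piece_len, overlap=10))
# 20-base pieces: 44  -- dozens of joints to splice and verify
# 60-base pieces: 9   -- far fewer joints to get right
# 200-base pieces: 3  -- comparable to "2-4 big pieces"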

Comment Re: Is this the right move? (Score 1) 182

No, it will slow down professionals as well.

Without the sequence, what can you do? It's pretty much guaranteed that the new strain produces a toxin with extremely high sequence homology to existing strains, so you know that to make the new toxin you just have to add/delete/exchange a few amino acids, or maybe add an insertion.

But there is no way to know or guess what should be altered. There are ways to create libraries of mutants, but then they will need to be screened, and that will not be a fast, simple, or safe process.
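To get a feel for why blind library-and-screen blows up, here is a quick combinatorial count (plain Python; the protein length is an assumed ballpark, not a figure from the article):

from math import comb

protein_len = 1300   # assumption: rough size of a large toxin
alternatives = 19    # other standard amino acids possible at each position

for k in (1, 2, 3):  # number of simultaneous substitutions
    variants = comb(protein_len, k) * alternatives ** k
    print(f"{k} substitution(s): ~{variants:.1e} variants to screen")
# 1 substitution(s): ~2.5e+04
# 2 substitution(s): ~3.0e+08
# 3 substitution(s): ~2.5e+12 -- and that ignores insertions and deletions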

Without access to the original strain, there's not much you can do, and the few things you can attempt are no better than starting from scratch.

Comment Re:The other issue with much of modern science (Score 3, Insightful) 316

It's resource intensive, but also just plain difficult. For example, publications are never a full description of an experiment, just the highlights. It takes a skilled researcher to fill in the gaps and then a second level of skill to accurately carry it out.

Looking at it from another perspective, and ignoring scientific developments that are the result of inspired genius (which I would argue are rare), every new publication represents the most novel and difficult work that has been conducted to date. If it weren't, it would have been done already.

So how can you expect someone else (who wasn't able or motivated to carry out the work themselves) to immediately duplicate cutting-edge work from an incomplete description? It's a bit amazing that up to 50% of publications could be replicated at all.

Comment Re:Lord Forgive me, but (Score 2) 316

Scientists and researchers are not hamstrung by NDAs. If anything, things are going in the other direction: university libraries are setting up self-publishing, open-access projects to disseminate the work being conducted by their researchers.

I've only seen NDAs and the like come up in one situation: when a researcher employed by the university is a guest or collaborator at a private company. The company might try to introduce such things, but university legal is very hostile to that. I can't think of a single situation in which I've seen a university require an NDA, even when dealing with inventions and IP. They do require disclosure to the university, especially after the Stanford case ("will assign" vs "hereby assigns").

I would have expected this to change substantially with the America Invents Act, but to date no lawyers I've talked to have indicated that anything has or will change, which I think is a bit odd.

Comment We are the ones in need of a network (Score 5, Insightful) 107

I like some of the more subtle details in the title and summary: new math "techniques", "researchers need new mathematical tools", etc.

I find it hard to believe that our sciences are driving the math fields, as mature and well-developed as the math community is. But it is true that existing knowledge and tools from mathematics drive huge advances in the sciences when they are brought to bear. The sad truth is that scientists just don't play terribly well with others (maybe no one does): interdisciplinary work is rare and difficult, and so we end up re-inventing the wheel over and over again. The reality is that the "wheel" being created by the biologist in order to interpret their data is a poor copy of the one already understood by the physicist across campus.

What can we do about this? I'm not sure, but I think it's safe to say that our greatest scientific advances in the next few decades will be the result of novel collaborations, and not novel math or (strictly speaking) novel science.

Comment Re:Sorry, this is a botched study. (Score 1) 67

I don't understand the point of your comments.

They averaged birds together by location, and compared that to song.

We already know that "location->song" probably has some causal component. We think that "PCB->song" may as well. So why would you try to stack "PCB->location->song"? There is no question that it introduces biases.

Can we remove those biases? Maybe, if we're careful. Can we remove those biases if we discard variability within location via averaging? NO!

Should the direct "PCB->song" relationship be presented? YES!!
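If it helps, here is a toy simulation of the objection, in plain Python with made-up numbers (it only shows how per-location averaging throws away the within-location signal; it says nothing about the actual dataset):

import random
from statistics import correlation, mean   # Pearson's r, Python 3.10+

random.seed(0)

# 5 locations, 20 birds each; each bird's song score depends weakly on its own
# (made-up) PCB burden, plus a location effect and individual noise.
birds = []
for loc in range(5):
    loc_pcb, loc_effect = random.uniform(1, 10), random.uniform(-2, 2)
    for _ in range(20):
        pcb = loc_pcb + random.gauss(0, 2)
        song = -0.3 * pcb + loc_effect + random.gauss(0, 1)
        birds.append((loc, pcb, song))

# Direct per-bird PCB->song relationship: 100 data points.
print(correlation([b[1] for b in birds], [b[2] for b in birds]))

# Location-averaged relationship: only 5 data points, the within-location
# variability is gone, and location-level confounds dominate what is left.
by_loc = {}
for loc, pcb, song in birds:
    by_loc.setdefault(loc, []).append((pcb, song))
print(correlation([mean(p for p, _ in v) for v in by_loc.values()],
                  [mean(s for _, s in v) for v in by_loc.values()]))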

Comment Re:Sorry, this is a botched study. (Score 1) 67

You're partially right, I did overlook their "analysis". Table 1 gives us their conclusions, but there is no data. There are regression plots and PCA for other comparisons, but they left out everything relevant to PCB-vs-song.

So they didn't show the song data per bird. They did describe how they reduced song data to a high/low binary value ("trill performance"). They didn't show "trill performance" per bird. They didn't show the models. They didn't show any evaluation of the models. They did show the relative evaluation of the models.

I just don't understand why they left so much out! In the end they used a continuous variable (PCB) to predict a binary high/low song value, when they could have just kept and used the original song data. Maybe that's what they did? It would make sense, but it's not what they described in the paper.
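Here's a small sketch of what that dichotomization costs, again with made-up numbers in plain Python (not their data or their model):

import random
from statistics import correlation, median   # Pearson's r, Python 3.10+

random.seed(1)
pcb = [random.uniform(0, 10) for _ in range(200)]
song = [-0.4 * x + random.gauss(0, 1.5) for x in pcb]    # graded relationship

cutoff = median(song)
song_binary = [1.0 if s > cutoff else 0.0 for s in song]  # "trill performance"

print("continuous song:", round(correlation(pcb, song), 2))
print("dichotomized   :", round(correlation(pcb, song_binary), 2))
# The dichotomized association is attenuated: how much song changes with PCB
# has been thrown away before the model ever sees it.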

Furthermore, there are all kinds of oddities in the Supplemental Table 1. It's presented as averaged per-region, but the data is filtered according to their individual-bird LOD/LOQ: filtering should be at the bird level, not after averaging. The error in their quantitation just so happens to always top out at 100%, which shows they've massaged that as well. They used the LOQ to arbitrarily set values to zero: at minimum these need to be treated as exceptions in the analysis. The values below LOQ have errors set to zero, while these values should have the largest relative error of all.
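And the order-of-operations point about LOQ handling, sketched with invented numbers (the LOQ/2 substitution is just one common convention, not what they did):

LOQ = 0.5
location_birds = {
    "siteA": [0.2, 1.4, 0.9],   # first bird is below LOQ
    "siteB": [2.1, 1.8, 2.6],
}

def handle_bird(x, loq=LOQ):
    """Per-bird handling: substitute LOQ/2 for a non-detect and flag it."""
    return (loq / 2, True) if x < loq else (x, False)

for site, values in location_birds.items():
    handled = [handle_bird(v) for v in values]
    site_mean = sum(v for v, _ in handled) / len(handled)
    n_below = sum(flag for _, flag in handled)
    print(site, round(site_mean, 2), f"({n_below} bird(s) below LOQ)")
# Substitution and flagging happen per bird, before averaging, so below-LOQ
# values can still be treated as exceptions downstream.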

None of this directly invalidates the analysis, but it's bizarre and sloppy. Considering that this analysis is the cornerstone of their hypothesis, I still think it implies poor work or deception.
