
Comment Not Antibodies (Score 5, Informative) 149

Cathelicidin-AM is an antimicrobial peptide, not an antibody.

I just skimmed the paper (abstract: http://www.ncbi.nlm.nih.gov/pubmed/22101189), but it appears the group was the first to show that pandas produce this type of antimicrobial peptide (other mammals produce them too, and the sequence seems similar to that of dogs). The peptide appears to be effective against multiple types of bacteria (Gram-positive and Gram-negative) and a couple of strains of fungi. The researchers only tested the peptide in vitro, so it isn't yet known whether the purified peptide will be effective in vivo (though they did report that it showed little lysis of human red blood cells).

TL/DR: Don't pressure your doctor into giving you panda blood when you get sick.

Comment Re:Any immunologists about? (Score 3, Informative) 50

I only glanced through the paper and I have a fellowship application to finish, so I'll be quick with this response.

The process the researchers are trying to take advantage of is immune tolerance (https://en.wikipedia.org/wiki/Immune_tolerance). The authors state that the decrease in symptoms is partially due to the activity of regulatory T cells (https://en.wikipedia.org/wiki/Regulatory_T_cell). Regulatory T cells are a type of T cell that inhibits the immune response to certain antigens (foreign things that aren't harmful, or parts of yourself that your immune system shouldn't have responded to in the first place).

Viruses and bacteria (as well as cancers) can and do take advantage of immune tolerance (though I'm not sure about this specific mechanism) in an attempt to avoid immune destruction, and this is also thought to be a possible mechanism for the induction of autoimmune disease.

Comment Surprising (Score 2) 73

Something that surprised me was that "75% of all published papers appear in the journal to which they are first submitted."

I would be very interested in seeing how this rate differs between junior faculty and senior faculty. With my limited sample size (and the personal bias that comes with it), it seems this number would be much lower for junior faculty. Junior faculty may be too eager to swing for the fences (Science and Nature) and miss (going down the ranks to PLOS ONE), while senior faculty already have favorite field-specific journals (where they may know the editors) in which their papers will likely be accepted with revisions.

Comment Re:ReadCube Cost (Score 2) 74

Easier said than done. Keep in mind that research articles rarely have only one author (at least I haven't seen a single-author paper recently in my field). Assistant professors, graduate students, post-docs, and even tenured professors (given the funding situation these days) do not always have the luxury (guaranteed funding and job opportunities/security) of choosing to publish in a lower-impact open-access journal, even if they would prefer to.

Personally, I try to encourage others to favor open-access journals, and I sometimes make articles available to people who don't have access (other scientists, and even non-scientists who are simply interested in primary research). That being said, I think going full RMS is a little too extreme at the moment. Thankfully, the quality of open-access journals is improving, power is slipping away from the non-free publishers, and that is something they can't stop.

Comment ReadCube Cost (Score 1) 74

"The library is charged under $6 for articles researchers decide to rent for a limited time and $11 or less (depending on the publication) for articles they buy. Researchers cannot yet print out the articles, and much like with iTunes, they cannot share the content with colleagues."

It is sad that renting articles and not being able to share them with colleagues/students almost seems like a deal compared to the current system. It is sickening to me that the publishing system gets in the way of scientific progress and selectively holds back faculty and students at smaller universities that can't afford access to high-impact journals.

Comment Re:The numbers (Score 2) 123

The first figure of the PNAS paper shows that less than 0.01% (maybe 0.008%) of all published papers are retracted for fraud or suspected fraud, a rate that has been increasing since 1975 (when it was maybe around 0.001%). The authors state that the current number is probably an under-report, because not all fraud is detected and retracted. It is also possible that the 1975 numbers are less representative, since fraud may have been harder to detect back then (at least for duplicate publication and plagiarism).
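To put percentages that small in concrete terms, here is a quick sketch; the counts below are illustrative round numbers, not figures taken from the PNAS paper:

```python
# Illustrative arithmetic only: convert retraction counts into the
# kinds of percentages discussed above (counts are hypothetical).
def retraction_rate(retracted, total_published):
    """Fraction of published papers retracted, as a percentage."""
    return 100.0 * retracted / total_published

# 8 fraud-related retractions per 100,000 papers -> 0.008%
print(retraction_rate(8, 100_000))   # 0.008
# 1 per 100,000 (a 1975-like rate) -> 0.001%
print(retraction_rate(1, 100_000))   # 0.001
```

Even the current rate works out to fewer than one retraction per ten thousand published papers.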

Comment From the Study's Abstract (Score 3, Interesting) 114

They define spin as "specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment." They also mention: "We considered 'spin' as being a focus on statistically significant results ... an interpretation of statistically nonsignificant results for the primary outcomes as showing treatment equivalence or comparable effectiveness; or any inadequate claim of safety or emphasis of the beneficial effect of the treatment." (emphasis added)

I understand the last two, but the first point doesn't make sense to me. You can't really draw conclusions from statistically insignificant results (well, you can, but other scientists won't believe them).

"Spin" can even be good in some cases (though maybe not at all in clinical research): a research group that studies DNA repair might state, "Our findings on the function of the yeast homolog of SLHDT in dsDNA break recognition may represent a novel target for cancer therapeutics." In this case, the research group doesn't study cancer at all and has no business (based on their results) mentioning it, but the statement might convince a cancer researcher to read the paper and perhaps try a quick/cheap experiment targeting SLHDT to test the claim.
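The equivalence point above can be made concrete with a toy example. This is a minimal pure-Python Welch's t-test on hypothetical numbers (none of this comes from the study): a real difference between groups can produce a nonsignificant result simply because the study is underpowered, so "not significant" never demonstrates equivalence.

```python
import math

# Hypothetical data: a small, underpowered trial where the treatment
# group really does score higher, yet the test comes back nonsignificant.
treatment = [5.0, 7.2, 4.1, 6.5]
control = [4.0, 5.8, 3.2, 5.4]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    """Unbiased sample variance (divides by n - 1)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(sample_var(a) / len(a) + sample_var(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(treatment, control)
# The mean difference is 1.1, yet |t| is only about 1.19, well below the
# ~2.45 critical value for roughly 6 degrees of freedom. The study is
# simply too small; the treatments are not thereby shown "equivalent".
print(round(t, 2))   # ~1.19
```

Claiming equivalence properly would require something like an equivalence test (e.g., TOST) with a pre-specified margin, not a failed significance test.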

Comment Re:Ratios (Score 1) 74

First, I'd like to clarify what I meant when I said risky. I think "unprecedented" would have been a better word.

Peer review does a pretty good job (depending on the journal) of making sure a paper is internally consistent and, as long as the data isn't faked, valid enough to base future hypotheses on. That being said, many papers overstate their findings and make conclusions in their discussion sections (where it is perfectly fine to put this stuff) that aren't entirely supported by their data. Scientists are expected to evaluate results critically and often don't agree with a paper's conclusions, but the results (limited to the experimental system) are reliable for the most part. I would assume that most of the "landmark" papers fit this description: the results are reliable, but the conclusions could be crap.

As for slowing down scientific progress: if the standard for what is acceptable to publish (in disease-focused research) becomes that it has to work in human patients, then progress will slow down. I could be wrong, but here is how I read what the study concluded. A "landmark" paper is published that identifies Compound X, which inhibits a certain signalling pathway in a particular type of tumor (derived from a human cancer cell line) and prevents an inbred strain of mice from dying (within a certain time frame) after a certain amount of the cancer cells is injected in a particular place. The authors then conclude that the compound cures cancer. Compound X is then used in a clinical trial involving human patients whose tumors are made up of heterogeneous cell populations (each with a unique tumor micro-environment) and is found not to significantly alter the disease outcome (which could be tumor size rather than survival). Compound X is considered a failure, and the "landmark" paper is considered crap.

Comment Re:Statistical confirmation (Score 1) 74

Wakefield's study wouldn't have been fixed by independent statistical analysis, because the results were faked. I do agree that many scientists could use some help with statistics, and it would probably be a good idea if certain journals had a statistician on staff who could re-analyze raw data as part of the review process.

Comment Re:Ratios (Score 1) 74

I think the word "crap" is a little harsh. "It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics" These might have been "landmark" papers (whatever that means), but that doesn't mean that the conclusions will hold up in every model or in every application. A finding in one type of cancer cell (or inbred strain of worm, fly, mouse, rabbit, ape, etc.) will not necessarily directly lead to an effective therapy for humans. If scientists are afraid to publish risky results that have never been observed before, then scientific progress will slow down.

Comment Re:So much for "peer reviewed" papers from academi (Score 1) 74

I think you are looking at this the wrong way. If anything, it is more difficult to publish results that are consistent with other studies, because there isn't much interest in them (unless the topic is controversial). Studies have a better chance of being published in high-impact (widely read) journals if they report something new that changes the way the scientific field thinks.

Comment Who will pay for this? (Score 4, Insightful) 74

The article says that the "authors will pay for validation studies themselves" at first. This is a nice idea, but it is not practical in an academic setting. Academic labs would rather spend money on more people or supplies than pay an independent lab to replicate data for them. New ideas are barely able to get funding these days, so why would extra grant money be spent doing the exact same studies over again? There could be a use for this in industry, but companies would probably pay their own employees to do it instead, if it is worth doing.

Comment Bacterial Lobster Traps (Score 4, Interesting) 73

If you think this is cool, then you should look up the work of Dr. Jason Shear at the University of Texas (http://jshear.cm.utexas.edu/jshear/). His laboratory designs cages/houses/traps for bacteria. One of his papers that I am familiar with is "Probing Prokaryotic Social Behaviors with Bacterial 'Lobster Traps'" (http://mbio.asm.org/content/1/4/e00202-10.full).

Comment Re:Going to wait for other labs to confirm this. (Score 2) 249

I can't find the reference, but there was a paper that studied the stability of microRNAs against RNases and found that they were more resistant than longer RNA species. A paper published earlier this year reported an estimated average miRNA half-life of 119 hours, with some over 200 hours, inside cells (Gantier, M.P. et al. Analysis of microRNA turnover in mammalian cells following Dicer1 ablation. Nucleic Acids Research (2011)). It is possible that the study underestimated the half-life, since other groups have reported that microRNAs have enhanced stability in the presence of Dicer, and the study I mentioned calculated half-lives in its absence. I haven't had a chance to read the paper referenced in TFA, so I can't speak to how believable it is yet. I'm sure that many labs (and companies) will start looking into this and its impact on their favorite disease, or will try to modify their favorite food to knock out harmful microRNAs or express helpful ones.
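For a feel of what a 119-hour half-life means, here is a quick sketch assuming simple first-order (exponential) decay, which is the usual model behind these half-life estimates; the time points are just examples:

```python
# Fraction of an RNA pool remaining after `hours`, assuming
# first-order decay with the given half-life.
def fraction_remaining(hours, half_life_hours):
    return 0.5 ** (hours / half_life_hours)

# With the reported ~119 h average half-life, roughly 87% of a miRNA
# pool would still be present a full day later.
print(round(fraction_remaining(24, 119), 2))   # 0.87
# And by definition, exactly half remains at the half-life itself.
print(fraction_remaining(119, 119))            # 0.5
```

That kind of persistence is why a dietary miRNA surviving long enough to act in a consumer is at least not absurd on its face.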
