Comment Re:Good luck keeping the genie in the bottle (Score 1) 215

Those who say that genetically modified products are safe are not necessarily saying that all GMOs are safe.

Genetic modification is a process which leads to a food with a different genetic profile than the original stock it came from.
It's quite possible to introduce a toxin this way, or an allergen; it's also possible to increase production of a vitamin, or to make a change that has no effect on the food portion. And it may be possible to reproduce the genetic code of a different species (which is what most de-extinction efforts are attempting).
And since it is a process, there is not necessarily any genetic or phenotypic characteristic in common between two GMOs.
So the obvious answer is to test everything and approve what is found to be safe.

Now, to finish the point, a genetically modified product has already been tested. Those who say that it is safe are not stating that it never should have been tested; they are saying that the testing was sufficient.

Comment Re:seems a bit strange (Score 1) 341

That said, why not make the agro businesses that make huge profits pay for unbiased testing in order to license the product?

The problem is that if they fund it, how do you ensure that the "third party" is unbiased?
And how likely are opponents of GMOs to consider it unbiased? I suspect that even if it did reduce the level of bias, you would hear as many people complaining that it can't be trusted. And perceptions may be as important as facts when it comes to getting the regulations changed.

...some pro-GMO person claims "Well our vitamin A rice".. but they neglect the "Terminating seeds" which reap huge profits for these companies.

There are a couple of things I'd like to point out:
1: If someone objects to all GMOs, they object to even the most beneficial ones. Vitamin A rice is a reasonable argument against those who want to ban GMOs. It's not a good argument against testing, but I've not seen it used that way myself.
2: If you are referring to the "terminator" traits where F2 is infertile rather than male-sterile lines, those have not been included in many seeds. In fact, the USDA currently does not list a deregulated corn or soybean terminator trait.
My understanding is that Monsanto had developed such a trait, which they intended to use to prevent accidental cross-pollination; but when people objected to it, they dropped it.

Male-sterile is quite different from the "terminator" trait; it prevents production of fertile pollen, so that a hybrid seed breeder does not need to hire people to go through the whole field and remove the male flowers from every plant that's supposed to be a female parent in the cross. It does not influence fertility of seeds.

But the reason for not saving and replanting seeds is that almost all seed is hybrid. This means that the second generation is likely to give you a level of variability that renders mechanized harvest impractical, as well as having lower productivity. And hand-harvesting corn is not something that pays off.

The FDA is swamped, sure. They don't need to be the testing company, they could be the gatekeepers for smaller independent companies to do testing. In other areas, like pharmaceuticals the cost of testing is assumed in the product. The same thing should be done with GMO foods, because the majority of the purposes are not altruistic but profit driven.

I did not mention cost as an issue because I'm well aware that there's quite a bit of testing in development of any crop.
I interned at Pioneer one summer collecting soil moisture measurements for drought stress trials, and they mentioned the scale of the testing.
A crop is usually tested for at least five years. Trials run about $2,000 per acre per year for corn, and there are always several evaluations (resistance to pests, drought tolerance, nitrogen use efficiency, and so on), each replicated at 4-5 sites.
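To get a feel for what those figures add up to, here's a back-of-envelope sketch. The per-acre cost, trial length, and site count come from the numbers above; the number of evaluations and the plot size are my own illustrative assumptions, not Pioneer's actual figures.

```python
# Back-of-envelope field-trial cost, using the figures mentioned above.
# Number of evaluations and acres per trial are assumed for illustration.
cost_per_acre_year = 2000   # USD per acre per year, corn trials
years = 5                   # minimum trial length
sites = 4                   # low end of the 4-5 replicate sites
evaluations = 3             # e.g. pests, drought, nitrogen use (assumed)
acres_per_trial = 1         # assumed plot size per evaluation per site

total = cost_per_acre_year * years * sites * evaluations * acres_per_trial
print(f"Rough field-trial cost: ${total:,}")  # Rough field-trial cost: $120,000
```

Even with these deliberately low-end assumptions, the field trials alone run into six figures before any lab work is counted.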

In pharmaceuticals, you still hear people claiming that there is bias, and once in a while you hear about trials that were tampered with.

Comment Re:'no definitive conclusions can be reached' (Score 1) 341

Don't forget about Lenape potatoes. Even if the study was correct, the same sort of problem has happened with conventional breeding.

"Plant-incorporated pesticides," to use the ag term, are not new pesticides. They are old ones in a new place.
For example: Bt corn. It gets its name, and its effectiveness, from Bacillus thuringiensis, a bacterium that is a selective insect killer (different strains target different insects).
B. thuringiensis has long been used as an organic pesticide.
Pesticide resistance and tolerance are also not new traits; they come from species that were exposed to the pesticide and turned out to be resistant or tolerant.

The reason for the focus is that a farmer can lose most of his crop to certain major pests and diseases. It makes more sense to prevent crop loss while keeping yield potential constant than to increase yield potential 20% while still risking 80% of the crop.
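A quick expected-yield comparison makes the point concrete. All of the numbers below (base yield, outbreak probability, loss fraction) are hypothetical, chosen only to illustrate the trade-off:

```python
# Hypothetical expected-yield comparison (illustrative numbers only):
# option A protects against a pest that destroys 80% of the crop in an
# outbreak year; option B raises yield potential 20% but stays vulnerable.
base_yield = 100.0            # arbitrary units
outbreak_probability = 0.25   # assumed chance of a serious outbreak

protected = base_yield                                    # loss prevented
vulnerable = 1.2 * base_yield * (1 - 0.8 * outbreak_probability)

print(protected, vulnerable)  # the protected crop wins in expectation
```

Under these assumptions, the protected crop out-yields the higher-potential one on average (100 vs 96 units), even though the vulnerable variety looks better in a good year.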

Besides, pest and herbicide resistance aren't all that GMOs are developed for, though they account for most of them. Drought tolerance research has been in progress for a while, and at least one such variety has been approved.
And there's high lysine corn, high oleic acid soybeans, soybeans modified for improved yield, soybeans modified to produce stearidonic acid or have a better fatty acid profile, reduced nicotine tobacco, and reduced lignin alfalfa.

Comment Re:seems a bit strange (Score 1) 341

And I think you did not read that paper thoroughly, or have no clue how it applies to biological research as conducted today.

What Meehl describes is a two-part issue.

First, there's a problem with using the point null hypothesis (two numbers are equal) instead of a more general null hypothesis (two sub-populations are within natural variation of each other).
The problem is that the point null is always false in fact, so a sufficiently precise test is guaranteed to reject it, and is thus likely to support a directional theory about half the time.

The second issue is that of taking support for a statistical hypothesis as support for a larger non-mathematical theory.
Other theories may well predict the same outcome, so a favorable result does not prove your own theory.
The two combine to make a scenario where, given precise enough measurements, half the time you will find support for your pet theory.

Now, if you don't use statistical analysis, you are essentially setting p=.99 and using a point null hypothesis.

In agricultural and biological research, standard practice is to use the null hypothesis that the two groups are within a certain amount of variation of each other.
And this is not necessarily false, so problem #1 goes away.
Problem #2 is a psychological problem you can always run into.
But ignoring p-values will not solve anything.
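To make Meehl's first problem concrete, here's a minimal simulation sketch. All numbers are invented for illustration: two groups whose true means differ by a trivial 0.05 standard deviations. A point null ("the means are exactly equal") gets rejected once the sample is large, while an interval null with a practically meaningful margin does not.

```python
import math
import random
import statistics

random.seed(42)

# Two large samples whose true means differ by a trivial 0.05 sd.
# (All numbers are illustrative, not from any real trial.)
n = 50_000
a = [random.gauss(100.0, 10.0) for _ in range(n)]
b = [random.gauss(100.5, 10.0) for _ in range(n)]

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
z = (mean_b - mean_a) / se

# Point null (means exactly equal): with enough data, even a trivial
# difference is "statistically significant" -- Meehl's problem #1.
point_null_rejected = abs(z) > 1.96

# Interval null: are the groups separated by more than a practically
# meaningful margin (here 2 units)?  The trivial difference is not flagged.
margin = 2.0
meaningfully_different = abs(mean_b - mean_a) - 1.96 * se > margin

print(point_null_rejected, meaningfully_different)  # True False
```

Same data, two very different conclusions, which is exactly why the choice of null hypothesis matters more than the p-value itself.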

Comment Re:"Even though they had 200 rats" (Score 1) 341

mosb1000 answered most of this, but I wanted to address this bit:

And, lastly, they tested a longer time. That means any effect will be noted in a smaller group.

...assuming that the population characteristics of 2-year-old rats are similar to those of 3-month-old rats.
Which is not necessarily the case.

And if you actually read the graphs in the paper, you might notice a couple of things:
1: There's no indication of a dose-dependent response.
If you have control and three treatments given increasing quantities of a toxin, the effects of the toxin should increase with dose.
If the effects just fluctuate, you didn't have enough numbers.

2. There's something missing on the graphs: error bars.
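As a sketch of what "dose-dependent" means here: with a control plus three increasing doses of a real toxin, the measured effect should rise with dose rather than bounce around. The effect values below are hypothetical illustrations, not data from the paper.

```python
# Minimal sanity check for dose dependence across treatment groups,
# ordered control -> low -> medium -> high dose.
def monotone_increasing(effects):
    """True if each group's effect is >= the previous one's (dose-dependent)."""
    return all(b >= a for a, b in zip(effects, effects[1:]))

dose_dependent = [2, 5, 9, 14]   # rises with dose: plausible toxicity
fluctuating    = [5, 2, 8, 3]    # bounces around: looks like noise / small n

print(monotone_increasing(dose_dependent))  # True
print(monotone_increasing(fluctuating))     # False
```

A real analysis would use a trend test with confidence intervals rather than a strict ordering check, but the fluctuating pattern is the signature of a sample too small for the effect being claimed.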

Comment Re:seems a bit strange (Score 1) 341

Likewise, would you be in favor of retracting any that reached a very shaky conclusion?

Except that the conclusion was not shaky. The number and type of rats was what was complained about, not the actual experiment or the results.

Number and type of experimental subjects is a part of an experiment.
And a conclusion is "shaky" if it is not adequately supported by the experiment. Any factors that reduce the statistical confidence in the results should be considered when evaluating whether it's adequately supported by the results.

If you read the paper, you'll see that there are sizeable differences between doses that do not fit the response patterns of toxicity; if a treatment is toxic, higher doses are more toxic.
So they didn't have enough numbers to check it.
While Seralini et al. used the same number as would have been used in conventional tests, their experiment ran about 4-8 times longer (they finished at 2 years with many rats dying before then; standard experiments are 3 months or less). And a much older population is likely to not have the same consistency as a younger population.

Now, whether a paper should be/have been retracted for shaky conclusions is a different question. And I can see arguments both ways.

And a third question is how we can actually fund an adequate and unbiased test.
Make the USDA or FDA do it?
They're swamped, and aren't likely to have the funding.
Have them charge a fee?
Now you just moved the bias into the bureaucracy.
Hand it over to existing nonprofits?
No, because they get funding from somewhere and usually have a position one way or the other.
It might be possible to have something that comes out unbiased if you can get both sides to fund it.
Maybe a 3-way RR/conventional/organic test could be funded by Monsanto and the folks who like organics.

Comment Re:Non-starter for me. (Score 2) 95

x86 has more OSs available.
The vendor supports DOS, Linux, and purportedly Windows. From what I understand, "Windows" would be "XP or older", since a Vortex86EX appears to be 586-level or so.
Coincidentally, that's the same ISA as Galileo.

It's an option if you have some 16-bit code that you need to keep going...which is especially likely on any sort of continuation of an older hardware project.

The other aspect is that you can compile on your PC without setting up a cross-compiling environment. On the one hand, that's easier. On the other hand, you don't learn to cross-compile. And on the gripping hand, these processors are the sort where you don't want to compile natively.

Comment Re:vi (Score 1) 204

50% right.
The one true editor is vi (including alternate implementations such as nvi, vim, and busybox vi).
But bbcode? WRONG.
Troff is the right solution for multiformat documents. Including ones that need to be readable in word processors.

Half joking, half serious. I wrote my papers for Philosophy and Intro to Unix in troff. For Philosophy I converted them to RTF before submitting, which worked fairly well.
For Intro to Unix, I used -Thtml and -Tps. Again, it worked pretty well.

I can use Markdown, and have written a couple manpages.
(My favorite is for "segfault", a quick hack I threw together because someone was asking about example programs for a debugging presentation.)

By now you're probably thinking "Neckbeard!"...nope, I majored in agriculture, and those papers were for GE courses in the last couple of years.
I used Ted for editing my longer papers, and found it to be generally satisfactory. Files are guaranteed to be readable on just about any computer, being RTF written properly. And the document actually ends up displaying the same in Word.
Ted runs quite happily on an 800-MHz processor, like the old PIII I used for a month or two after losing my laptop.

Submission + - Scientific American censors blog post for not being scientific enough 2

rogue-girl writes: The popular science magazine Scientific American is getting a hard time after it removed a blog post by contributor DNLee, who blogs at Urban Scientist. DNLee's post discussed integrity in science and misconduct by science communicators. DNLee had been approached by Biology Online staffer Ofek, who invited her to contribute. When DNLee asked for compensation details and learned she'd be writing for free, she politely turned down the offer. In response, Ofek called her a "whore". DNLee wrote about the exchange on her Scientific American blog, but the post was removed. Biology Online is also a SciAm partner, but SciAm editor-in-chief Mariette DiChristina claimed the partnership had nothing to do with the removal, and that the post was pulled for insufficient scientific content. DNLee's original post has been reposted here, and a Storify with (outraged) reactions is also available.

Submission + - JavaScript-Based OpenRISC Emulator Can Run Linux, GCC, Wayland

An anonymous reader writes: The jor1k is an interesting open-source toy emulator project that emulates a 32-bit OpenRISC OR1000 processor, 63MB of RAM, an ocfb frame-buffer, and an ATA hard drive... all in JavaScript. Though JavaScript-based, it uses asm.js optimizations, and performance seems to be quite decent in modern web browsers. The jor1k OpenRISC emulator can handle running the Linux kernel, the GCC compiler, ScummVM's Monkey Island, and the Wayland/Weston compositor, all from within the web browser.
