
Comment An example (Score 2) 182

Having quickly skimmed the paper, I'll give an example of the problem.
I couldn't quickly find a real data set that was easy to interpret, so I'm going to make up some data.
Chance to die before reaching this age:

Age    Woman   Man
 80    0.54    0.65
 85    0.74    0.83
 90    0.88    0.96
 95    0.94    0.98

We have a person who is 90 years old. Taking the null hypothesis to be that this person is a man, we can reject the hypothesis that this is a man with greater than 95 percent confidence (p=0.04). However, if we do a Bayesian analysis assuming prior probabilities of 50 percent for the person being a man or a woman, we find that there is a 25 percent chance that the person is a man after all (as women are 3 times more likely to reach age 90 than men are.)
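
Checking that arithmetic with a quick Python sketch (using the made-up table above):

```python
# Posterior probability that a 90-year-old is a man, given the made-up
# survival table above and a 50/50 prior.
p_reach_90_woman = 1 - 0.88   # 0.12
p_reach_90_man   = 1 - 0.96   # 0.04 -- this is also the p-value above

prior_man = 0.5
posterior_man = (p_reach_90_man * prior_man) / (
    p_reach_90_man * prior_man + p_reach_90_woman * (1 - prior_man))
print(posterior_man)          # 0.25
```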


Comment Interpretation of the 0.05 threshold (Score 5, Insightful) 182

Personally, I've considered results with p values between 0.01 and 0.05 as merely 'suggestive': "It may be worth looking into this more closely to find out if this effect is real." Between 0.001 and 0.01 I'd take the result as tentatively true - I'll accept it until someone refutes it.

If you take p=0.04 as demonstrating a result is true, you're being foolish and statistically naive. However, unless you're a compulsive citation follower (which I'm not), you are somewhat at the mercy of other authors. If Alice says "In Bob (1998) it was shown that ...", I'll tend to accept it without realizing that Bob (1998) was a p=0.04 result.

Obligatory XKCD

Comment Selection effects (Score 3, Informative) 110

A great many of the known exoplanets are large, close to their star, or both. It should be noted that this does not directly reflect how common large, close-in planets actually are.

We find exoplanets mainly in two ways: by the Doppler shift of the star, or by transits.

When a planet orbits a star, the star also orbits their common center of mass, so it wobbles slightly. By looking for subtle Doppler shifts in its spectral lines, we can try to detect this wobble. The more massive the planet, the further the star wobbles, and the larger the Doppler shift. Similarly, the closer the planet, the faster (and so the more detectable) the wobble. (Even though the star has less distance to travel, this is more than compensated for by how much shorter the orbital period is.)
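
To put rough numbers on both effects, here's a sketch using the standard radial-velocity semi-amplitude formula (circular orbit, seen edge-on; the planets are illustrative):

```python
import math

G       = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_SUN   = 1.989e30     # kg
M_JUP   = 1.898e27     # kg
M_EARTH = 5.972e24     # kg

def rv_semi_amplitude(m_planet, period_s, m_star=M_SUN):
    """Stellar wobble speed in m/s (circular orbit, seen edge-on)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet / (m_star + m_planet) ** (2 / 3))

day, year = 86400.0, 3.156e7
print(rv_semi_amplitude(M_JUP, 3 * day))   # hot Jupiter: ~140 m/s
print(rv_semi_amplitude(M_EARTH, year))    # Earth analogue: ~0.09 m/s
```

A factor of more than a thousand in detectability between the two cases.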

When a planet transits its star (moves between the star and us), we can detect a decrease in the received light, as some is blocked by the planet. The larger the planet's radius, the greater the decrease, and so the more likely we'll be able to detect it. The closer the planet, the more likely that chance alignment will allow us to observe a transit. Also, the closer the planet, the more frequent the transits, and so the greater the chance one will happen while we're observing the star.
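
The corresponding transit numbers for the same illustrative systems - the dip scales as the radius ratio squared, and the chance of a transiting alignment scales roughly as R_star/a:

```python
import math

G = 6.674e-11                       # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.96e8     # kg, m
R_JUP, R_EARTH = 6.99e7, 6.37e6     # m

def transit_depth(r_planet, r_star=R_SUN):
    """Fraction of the star's light blocked during a transit."""
    return (r_planet / r_star) ** 2

def transit_probability(period_s, m_star=M_SUN, r_star=R_SUN):
    """Chance that a randomly oriented orbit transits, ~ R_star / a."""
    a = (G * m_star * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    return r_star / a

day, year = 86400.0, 3.156e7
print(transit_depth(R_JUP), transit_probability(3 * day))  # ~1% dip, ~11% chance
print(transit_depth(R_EARTH), transit_probability(year))   # ~0.008% dip, ~0.5% chance
```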

So this weird planet was quite possibly thousands of times easier to detect than an Earth-like planet in an Earth-like orbit. (In this case, discovery was by transit; targeted follow-up observations then measured the Doppler shift. The combination allowed an estimate of its density.)
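
Combining the two is then simple arithmetic (illustrative numbers, not this planet's actual measurements):

```python
import math

m_planet = 1.9e27    # mass from the Doppler amplitude, kg (illustrative)
r_planet = 7.0e7     # radius from the transit depth, m (illustrative)
density = m_planet / (4 / 3 * math.pi * r_planet ** 3)
print(density)       # ~1300 kg/m^3, roughly Jupiter-like
```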

Comment Re:Source material is unreliable (Score 3, Insightful) 61

The article does comment on this.

If you're using a maximum likelihood analysis, your model can allow for unreliable data. E.g. you could assign a 10% chance that the paternity is not as recorded. Then your probability calculations would look like:

P(child inherited gene from father) = 0.9 × P('father' (according to the genealogy) had the gene) + 0.1 × P(a random male in the population had the gene).

You can even make the 'false paternity rate' a parameter in your model, so the data itself will tell you what value is best. However, if the data is too unreliable, all that your maximum likelihood analysis will tell you is "we can't conclude anything from this data". (Assuming you correctly model the unreliability. If you don't, your analysis is liable to give false results.)
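
As a sketch of what making the false-paternity rate a model parameter might look like, with entirely hypothetical data (say, scoring whether each son's Y-chromosome marker matches his recorded father's):

```python
import math

# Hypothetical data: for each recorded father-son pair, whether the son's
# Y-marker matches the father's. pop_freq is the assumed frequency of the
# father's marker among unrelated males.
matches = [True] * 92 + [False] * 8   # made-up: 92 matches, 8 mismatches
pop_freq = 0.05

def log_likelihood(eps):
    """Log-likelihood of the data given false-paternity rate eps."""
    p_match = (1 - eps) + eps * pop_freq
    return sum(math.log(p_match if m else eps * (1 - pop_freq))
               for m in matches)

# Grid search for the maximum-likelihood false-paternity rate.
best_eps = max((e / 1000 for e in range(1, 1000)), key=log_likelihood)
print(best_eps)   # ~0.084 with the data above
```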

Maximum likelihood is not always computationally feasible, depending on the model you're trying to fit and how much data you have.

Comment I choose to act as if I had free will (Score 1) 401

I don't believe in free will, but I choose to behave as if I have it.

"If you answer Yes to questions 1,2,3, and No to question 4, then you are likely to believe that you have free will,” says Lloyd.
I answered like that, yet I confound his determinism by not believing in free will. (Although question 1 lacks definition of "decider". I decided I probably was one.)

Comment Re:A grander plan (Score 1) 141

Thanks. I work with genetics at the applied maths/algorithms/programming end, so the biological complexities can escape my notice.

I'm not so sure about your example. While GUU and GUA are synonymous, GAU and GAA are not. The genetic code almost entirely treats third-position A/G as synonymous, and U/C as synonymous. (The only exceptions in the 'standard' code are UGA = stop vs UGG = tryptophan, and AUA = isoleucine vs AUG = methionine.)
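
This is easy to verify against the standard codon table. A small sketch (using DNA letters, so T stands in for U):

```python
# Standard genetic code (NCBI translation table 1), codons in TCAG order.
bases = "TCAG"
aa = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
code = {b1 + b2 + b3: aa[16 * i + 4 * j + k]
        for i, b1 in enumerate(bases)
        for j, b2 in enumerate(bases)
        for k, b3 in enumerate(bases)}

# Find every codon pair differing only by A/G (or T/C) in the third position
# that is NOT synonymous.
for third in [("A", "G"), ("T", "C")]:
    for b1 in bases:
        for b2 in bases:
            c1, c2 = b1 + b2 + third[0], b1 + b2 + third[1]
            if code[c1] != code[c2]:
                print(c1, code[c1], "vs", c2, code[c2])
# prints: TGA * vs TGG W, and ATA I vs ATG M -- the only A/G exceptions;
# the T/C (i.e. U/C) pass prints nothing.
```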

If we treat the third position as having only two distinguishable letters, we can still make the process work, but with more generations. (I'll call these 'reduced codons'.) We have 32 reduced codons (4 x 4 x 2) and 21 meanings (20 amino acids plus 'stop'), leaving 11 to spare. So we remove 11 reduced codons from the genome by replacing them with synonymous codons, and make new tRNA for the removed codons. Breed a few generations. Recode the genome to use the 11 new reduced codons (so the genome is now running on 11 new codons and 10 old ones). Recode 11 other reduced codons to synonymous ones (the new ones from the previous step) and give them new tRNA. Breed a few generations. Finally, recode the remaining 10 old codons to the second set of new codons, having made new tRNA for those 10. I.e. the same process, but it takes three recoding steps instead of two, because we can't change as many codons at once.

Also, we'll be keeping our bacteria in a very friendly environment. They don't need to outcompete other bacteria, just do well enough to reproduce. We can tolerate the occasional dud protein, but if the error rate is several amino acids per protein, the bacterium won't be viable.

Comment A grander plan (Score 4, Interesting) 141

Some years ago, when Venter's synthetic genome bacteria was created, I came up with a plan to do this on a more extensive scale.
(1) Sequence the genome of a bacterium, and edit the genome (on computer) to use no codons ending in 'T' or 'A'. (The redundancy of the genetic code allows this.)
(2) Also edit the genome so that it has tRNA for the codons ending in T or A which entirely change their meanings (but still using the standard amino acids). (Transfer RNA - tRNA - is the mechanism by which codons are deciphered into amino acids.)
(3) Synthesize the edited genome, and replace the genome of a living bacterium with it. Breed for a few generations, to check that all is well, and to eliminate any of the old tRNA.
(4) Edit the genome to use entirely the new codons. Also edit in replacement tRNA for the remaining codons, those ending in G or C.
(5) Replace the genome of one of our modified bacteria with this one.

Result: a bacterium which has an entirely rewritten genetic code, and is incapable of reading the old code.
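
Step (1) is mechanical, since every amino acid (and 'stop') has at least one codon ending in C or G, so a synonymous swap always exists. A toy sketch of that recoding step, assuming the standard code:

```python
# Standard genetic code (NCBI translation table 1), codons in TCAG order.
bases = "TCAG"
aa = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
code = {b1 + b2 + b3: aa[16 * i + 4 * j + k]
        for i, b1 in enumerate(bases)
        for j, b2 in enumerate(bases)
        for k, b3 in enumerate(bases)}

def recode(seq):
    """Replace every codon ending in T or A with a synonym ending in C or G."""
    out = []
    for pos in range(0, len(seq), 3):
        codon = seq[pos:pos + 3]
        if codon[2] in "TA":
            codon = next(c for c in code
                         if code[c] == code[codon] and c[2] in "CG")
        out.append(codon)
    return "".join(out)

print(recode("ATGATATTAGGTTGA"))  # ATGATCTTGGGCTAG - same protein, new codons
```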

However, I don't think I was the first to think this all up. In any case, Science didn't accept my letter proposing it.

Comment Three levels of break-even (Score 4, Interesting) 429

There are several levels of break-even.

Scientific break-even means the energy you've provided to the fuel's environment is less than the energy the reaction liberates. This is what is claimed here, although even then they're squinting a bit by counting only the light absorbed by the fuel pellet.

Engineering break-even accounts for the inefficiency in providing energy to the reaction (losses in laser beam generation and transmission, in this case) and inefficiency in converting the reaction energy into electricity (or other useful form.) Once you've reached engineering break-even, you have a facility which, provided with fuel, will provide you with electricity.

Economic break-even is when the amount of electricity generated is sufficient to pay for the capital, consumables and maintenance (and perhaps waste disposal and decommissioning) cost of the facility.
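
In very rough numbers (all illustrative assumptions for the sketch, not the facility's published figures), the gap between the first two levels looks like this:

```python
# Illustrative only -- every number here is an assumption.
E_wall_plug  = 300e6   # energy drawn from the grid to fire the laser, J
E_absorbed   = 10e3    # energy actually absorbed by the fuel, J
E_fusion     = 14e3    # fusion energy released, J
eta_electric = 0.40    # assumed conversion efficiency to electricity

print(E_fusion / E_absorbed)                  # scientific gain: > 1
print(E_fusion * eta_electric / E_wall_plug)  # engineering gain: << 1
```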

Incidentally, I thought magnetic confinement fusion reactors had reached scientific break-even a decade or two ago. I haven't found any support for this belief in a quick web search, so maybe I'm delusional.

Comment Changing the US voting system (Score 4, Interesting) 330

I think a large part of the problem is the primary voting system. A would-be presidential candidate first has to appeal to the extremists in their own party before they have a chance to try to appeal to the general public.

I have a proposal to fix this.
Step 1: To be on the presidential ballot, you must have reached some threshold number of votes in the primaries. This threshold should be set so that there will be about 4 to 6 presidential candidates. (Primaries are not party-based. All presidential hopefuls appear on the one ballot.)

Step 2: Voters rank the presidential candidates in their order of preference. These preferences are processed by a Condorcet method. This ensures that if one candidate would win a two-candidate election against every other candidate, they are elected.
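
A Condorcet winner is simple to compute from ranked ballots. A minimal sketch with hypothetical candidates (note that a winner needn't exist; cycles require a completion rule such as Schulze or ranked pairs):

```python
from itertools import combinations

def condorcet_winner(ballots):
    """Return the candidate who beats every other head-to-head, or None."""
    candidates = set(ballots[0])
    wins = {c: set() for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for ranking in ballots
                       if ranking.index(a) < ranking.index(b))
        if a_over_b * 2 > len(ballots):
            wins[a].add(b)
        elif a_over_b * 2 < len(ballots):
            wins[b].add(a)
    for c in candidates:
        if len(wins[c]) == len(candidates) - 1:
            return c
    return None   # a cycle: no Condorcet winner

# Three voters ranking hypothetical candidates A (left), B (moderate), C (right):
ballots = [["A", "B", "C"], ["C", "B", "A"], ["B", "A", "C"]]
print(condorcet_winner(ballots))   # "B" -- the moderate beats both head-to-head
```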

With 4 to 6 candidates, there is room for at least two from each main party, plus the occasional independent/minor party candidate. The Condorcet voting encourages moderates rather than extremists. (In turn, this will encourage the selection of moderates in the primaries.) It also gives independents a decent chance.

(Note: I am not a US citizen, nor am I living there.)
