
Comment Been there, done that. (Score 5, Interesting) 52

I'll just leave this here:

http://science.slashdot.org/co...

Noontime clear-sky sun measures 9500, blue light through an office window with indirect daylight measures 250, a desk lamp measures 45, and an LCD TV up close measures 7 uW/cm^2, all in the band around 480 nm sensed by the retinal ganglion cells, which are thought to be the part of the eye that senses daily cycles. (Mammalian Eye [wikipedia.org] on Wikipedia.)

So far as I can tell, laptops and related devices don't emit an appreciable amount of energy in this range; the bigger contributor is the artificial indoor lighting.
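
For a sense of scale, here's a quick sketch in Python of the ratios between those readings. The numbers are just the measurements quoted above; nothing new was measured.

    # Ratios of the quoted 480 nm irradiance figures (uW/cm^2).
    sources = {
        "noon clear-sky sun": 9500,
        "office window, indirect daylight": 250,
        "desk lamp": 45,
        "LCD TV up close": 7,
    }
    tv = sources["LCD TV up close"]
    for name, irradiance in sources.items():
        print(f"{name}: {irradiance} uW/cm^2 (~{irradiance / tv:.0f}x the TV)")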

As an experiment, I've started wearing red-tinted wrap-around sunglasses 2 hours before bedtime. I can still work, read, watch TV and all that, but the glasses mask off the blue frequencies, telling the brain that the sun has gone down.

It had an almost immediate effect. I'm a long-time sufferer of insomnia who has tried everything, but wearing the glasses fixed the problem in the first week.

I'm also a lot more "peppy" during the day, and I wonder if long-term exposure to late-night artificial lighting (and low-level lighting during the day) is a cause of depression. Depression meds take about 6 weeks to have an effect, so I'm guessing that it would take about 6 weeks for the glasses to have an anti-depressive effect as well. I'm on week 3 with the glasses.

Comment Re: What is wrong with SCTP and DCCP? (Score 1) 84

Having seen the result of design-by-committee (i.e. design by politics instead of designing to fit a functional need), I can say that it doesn't work.

The outcome is almost always better when the protocol has actually been implemented, the kinks worked out, and only then do you ask others to use it. ...You know, useful is a necessary component of reusable...
But if you're interested in FUD and a lack of progress instead of something that actually works, by all means go with design-by-committee and get nearly useless protocols that implementers ignore.

Comment Re:vs. a Falcon 9 (Score 1) 75

They can carry about 110 kg to LEO, compared to the Falcon 9's 13,150 kg. That's 0.84% of the payload capacity. A launch is estimated to cost $4,900,000, compared to the Falcon 9's $61,200,000. That's 8.01%. That means the cost per unit mass to orbit is nearly an order of magnitude worse.
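
If you want to check the arithmetic, here's the cost-per-kilogram comparison spelled out in Python, using only the figures above:

    # Cost-per-kilogram comparison, using the payload and price figures above.
    small_payload_kg, small_cost = 110, 4_900_000
    falcon9_payload_kg, falcon9_cost = 13_150, 61_200_000

    small_per_kg = small_cost / small_payload_kg        # ~ $44,500 per kg
    falcon9_per_kg = falcon9_cost / falcon9_payload_kg  # ~ $4,650 per kg
    print(small_per_kg, falcon9_per_kg, small_per_kg / falcon9_per_kg)  # ratio ~9.6x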

Yes, this is a really small rocket. If you are a government or some other entity that needs to put something small in orbit right away, the $5 million price might not deter you, even though you could potentially launch a lot of small satellites on a Falcon 9 for less.

And it's a missile affordable by most small countries, if your payload can handle the re-entry on its own. Uh-oh. :-)

Comment Re:Three puzzles (Score 1) 208

He writes his paper and submits for publication: "Rats prefer to turn left", P<0.05, the effect is real, and all is good.

There's no realistic way that a reviewer can spot the flaw in this paper.

Actually, let's pose this as a puzzle to the readers. Can *you* spot the flaw in the methodology? And if so, can you describe it in a way that makes it obvious to other readers?

I guess I don't see it. While P<0.05 isn't all that compelling, it does seem like prima facie evidence that the rats used in the sample prefer to turn left at that intersection for some reason. There's no hypothesis as to why, and thus no way to generalize and no testable prediction of how often rats turn left in different circumstances, but it's still an interesting measurement.

Another poster got this correct: with dozens of measurements, the chance that at least one of them will be unusual by chance alone is very high.
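
A minimal sketch of why that is, assuming each of the recorded measurements is tested independently at the usual 0.05 threshold:

    # Chance of at least one "significant" result by luck alone, given n
    # independent tests each run at alpha = 0.05 (independence assumed).
    alpha = 0.05
    for n_tests in (1, 5, 20, 50):
        p_at_least_one = 1 - (1 - alpha) ** n_tests
        print(n_tests, round(p_at_least_one, 2))
    # prints: 1 0.05, 5 0.23, 20 0.64, 50 0.92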

A proper study states the hypothesis *before* taking the data specifically to avoid this. If you have an anomaly in the data, you must state the hypothesis and do another study to make certain.

You have a null hypothesis and some data with a very low probability. Let's say it's P<0.01. This is such a good P-value that we can reject the null hypothesis and accept the alternative explanation. ...

Can you point out the flaw in this reasoning?

You have evidence that the null hypothesis is flawed, but none that the alternative hypothesis is the correct explanation?

The scientific method centers on making testable predictions that differ from the null hypothesis, then finding new data to see if the new hypothesis made correct predictions, or was falsified. Statistical methods can only support the new hypothesis once you have new data to evaluate.

The flaw is called the "fallacy of the reversed conditional".

The researcher has "probability of data, given hypothesis" and assumes this implies "probability of hypothesis, given data". These are two very different things, and the reversal is not always valid.

Case 1: Probability that a person is a woman, given that they're carrying a pocketbook (high); probability that a person is carrying a pocketbook, given that they are a woman (also high).

Case 2: Probability that John is dead, given that he was executed (high); probability that John was executed, given that he is dead (low).

In case 1 it's OK to reverse the conditional, but in case 2 it's not. The difference stems from the relative populations, which are about equal in case 1 (women and pocketbooks) and vastly unequal in case 2 (dead people versus executed people).

A low P value (P of data, given hypothesis) does not in general indicate that the probability of the null hypothesis is also low (P of hypothesis, given data).
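
Here's a small sketch of case 2 with made-up population numbers, just to show how the base rates drive the difference; the two conditionals are linked by Bayes' rule, P(A|B) = P(B|A) * P(A) / P(B).

    # Illustrative (made-up) numbers for case 2: everyone executed is dead,
    # but almost nobody who is dead was executed.
    population = 1_000_000
    dead = 10_000                   # hypothetical deaths in the population
    executed = 10                   # hypothetical executions
    p_dead_given_executed = 1.0     # high: execution implies death

    # Bayes' rule: P(executed | dead) = P(dead | executed) * P(executed) / P(dead)
    p_executed = executed / population
    p_dead = dead / population
    p_executed_given_dead = p_dead_given_executed * p_executed / p_dead
    print(p_executed_given_dead)    # 0.001 -- low, despite P(dead | executed) = 1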

Comment Three puzzles (Score 4, Interesting) 208

It is the job of the reviewer to check that the statistic was used in the proper context: not to check the result, but the methodology. It sounds like social journals simply either have bad reviewers or suck at methodology.

That's a good sentiment, but it won't work in practice. Here's an example:

Suppose a researcher is running rats in a maze. He measures many things, including the direction that first-run rats turn in their first choice.

He rummages around in the data and finds that more rats (by a lot) turn left on their first attempt. It's highly unlikely that this number of rats would turn left on their first choice based on chance (an easy calculation), so this seems like an interesting effect.

He writes his paper and submits for publication: "Rats prefer to turn left", P<0.05, the effect is real, and all is good.

There's no realistic way that a reviewer can spot the flaw in this paper.

Actually, let's pose this as a puzzle to the readers. Can *you* spot the flaw in the methodology? And if so, can you describe it in a way that makes it obvious to other readers?

(Note that this is a flaw in statistical reasoning, not methodology. It's not because of latent scent trails in the maze or anything else about the setup.)

====

Add to this the number of misunderstandings that people have about the statistical process, and it becomes clear that... what?

Where does the 0.05 number come from? It comes from Pearson himself, of course - any textbook will tell you that. If P<0.05, then the results are significant and worthy of publication.

Except that Pearson didn't *say* that - he said something vaguely similar and it was misinterpreted by many people. Can you describe the difference between what he said and what the textbooks claim he said?

====

You have a null hypothesis and some data with a very low probability. Let's say it's P<0.01. This is such a good P-value that we can reject the null hypothesis and accept the alternative explanation.

P<0.01 is the probability of the data, given the (null) hypothesis. Thus we assume that the probability of the hypothesis is low, given the data.

Can you point out the flaw in this reasoning? Can you do it in a way that other readers will immediately see the problem?

There is a further calculation/formula that will fix the flawed reasoning and allow you to make a correct inference. It's very well-known, the formula has a name, and probably everyone reading this has at least heard of the name. Can you describe how to fix the inference in a way that will make it obvious to the reader?

Comment Re:You Can See (Score 1) 113

Microminiature accelerometers are really cheap and very, very light, and unlike mechanical gyros you don't have to wait for them to spin up or deal with their mechanical issues. I doubt you will see a spinning gyro used as a sensor much longer.

Similarly, computers make good active stabilization possible, and steering your engine to stabilize is a lot lighter than having to add a big rotating mass.
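
To make the idea concrete, here's a toy single-axis sketch of active stabilization by engine steering; the gains, limits, and control rate are purely illustrative, not anything a real flight computer uses.

    # Toy PD attitude loop: command a small engine gimbal angle to null out
    # tilt, instead of relying on a big spinning mass for passive stability.
    KP, KD = 2.0, 0.8          # illustrative proportional/derivative gains
    GIMBAL_LIMIT = 0.1         # rad, illustrative actuator limit

    def gimbal_command(tilt, tilt_rate):
        """Gimbal angle (rad) that pushes the vehicle back toward upright."""
        cmd = -(KP * tilt + KD * tilt_rate)
        return max(-GIMBAL_LIMIT, min(GIMBAL_LIMIT, cmd))

    # Each control cycle the accelerometer/rate sensors supply tilt and
    # tilt_rate, and the engine is re-pointed accordingly.
    print(gimbal_command(0.05, 0.0))   # small corrective gimbal deflection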

Comment Re:New product (Score 1) 342

A video from the barge is now online here. If you step through the final frames, you can see that the camera mount ends up knocked over and pointing at the ocean, but the lens and its cover are unbroken, and all we see flying appears to be small debris. So it was not a really high-pressure event.

Comment Re:incredibly close to target is far from success (Score 1) 342

It's very tempting to think this should work like an airplane. Lots of people wrote that it was "too hot", etc. But it isn't an airplane. The plan was really to approach at a quarter of a kilometer per second, then brake at the very last second.
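
A back-of-the-envelope check on what "brake at the very last second" implies, with an assumed braking distance that is purely illustrative:

    # Constant-deceleration estimate: a = v^2 / (2 * d).
    v = 250.0      # m/s, roughly the quarter-km/s approach speed quoted above
    d = 2_000.0    # m, assumed braking distance (illustrative only)
    a = v ** 2 / (2 * d)
    print(a, a / 9.81)   # ~15.6 m/s^2, i.e. roughly 1.6 g of deceleration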

Obviously Crew Dragon, which carries people, will approach differently. But it's a lot lighter.
