Comment F.Lux helps with that (Score 2) 52

I've used F.Lux and it does everything it says. It's a polite program, and I've got no problem with it per se, but I removed it from my system.

For one thing, the sunset transition happens over just a couple of seconds, and it's quite noticeable. Neither the speed nor the noticing is a problem in itself, but I suspect a slower, more gradual sunset would be more effective.

The bigger issue was "length of day". F.Lux synchronizes to the local length of day (based on your latitude and the current date), so in the winter you're still seeing short days and sunset at 5:00 PM. If you're subject to seasonal affective disorder (SAD), F.Lux won't help with that.

(But, granted, it does feel good on the eyes when it kicks in.)
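(For the curious, here's roughly the kind of day-length calculation a program like F.Lux presumably does from latitude and date. This is just my own back-of-the-envelope sketch using the standard declination approximation, not F.Lux's actual code:)

    import math

    def day_length_hours(latitude_deg, day_of_year):
        """Approximate hours of daylight for a given latitude and day of year."""
        # Rough solar declination in degrees (standard approximation)
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        # Hour angle at sunrise/sunset; clamp to handle polar day/night
        cos_h = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
        cos_h = max(-1.0, min(1.0, cos_h))
        return 2.0 * math.degrees(math.acos(cos_h)) / 15.0  # 15 degrees of hour angle per hour

    print(day_length_hours(42.0, 355))  # mid-latitude winter: roughly 9 hours
    print(day_length_hours(42.0, 172))  # mid-latitude summer: roughly 15 hours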

Part of the problem with light therapy is that it doesn't always work, or only works a little, or doesn't work for everyone. As a scientific result, this fairly shouts "not the complete explanation", so I've played around with this a bit to see what's really happening.

I'm convinced that "length of day" plays a big part in our internal clocks, and that things like blue-heavy video have an effect. For example, "The Daily Show with Jon Stewart" has a lot of blue and is shown late at night. Watch it with red sunglasses and see if you feel more tired/ready to sleep afterward.

In terms of scientific discoveries, I think there's some low-hanging fruit here. Straightforward hypotheses and studies could be done that would completely characterize the issue, and that would point to simple, inexpensive, and drug-free cures for a handful of problems.

Comment Been there, done that. (Score 5, Interesting) 52

I'll just leave this here:

http://science.slashdot.org/co...

Noontime clear-sky sun measures 9500 µW/cm², blue light through an office window with indirect daylight is 250, a desk lamp measures 45, and an LCD TV up close measures 7, all in the wavelength band (around 480 nm) sensed by the retinal ganglion cells, which are thought to be the part of the eye that tracks daily cycles. (Mammalian Eye [wikipedia.org] on Wikipedia.)

So far as I can tell, laptops and related devices don't generate an appreciable amount of energy in this range; it's more the artificial indoor lighting.

As an experiment, I've started wearing red-tinted wrap-around sunglasses two hours before bedtime. I can still work, read, watch TV and all that, but the glasses mask off the blue wavelengths, telling the brain that the sun has gone down.

It had an almost immediate effect. I'm a long-time sufferer of insomnia who has tried everything, but wearing the glasses fixed the problem in the first week.

I'm also a lot more "peppy" during the day, and I wonder whether long-term exposure to late-night artificial lighting (and low levels during the day) is a cause of depression. Depression meds take about 6 weeks to have an effect, so I'm guessing it would take about 6 weeks for the glasses to have an anti-depressive effect as well. I'm on week 3 with the glasses.

Comment Re:Three puzzles (Score 1) 208

He writes his paper and submits for publication: "Rats prefer to turn left", P<0.05, the effect is real, and all is good.

There's no realistic way that a reviewer can spot the flaw in this paper.

Actually, let's pose this as a puzzle to the readers. Can *you* spot the flaw in the methodology? And if so, can you describe it in a way that makes it obvious to other readers?

I guess I don't see it. While P<0.05 isn't all that compelling, it does seem like prima facie evidence that the rats in the sample prefer to turn left at that intersection for some reason. There's no hypothesis as to why, and thus no way to generalize and no testable prediction of how often rats turn left in different circumstances, but it's still an interesting measurement.

Another poster got this correct: with dozens of measurements, the chance that at least one of them will be unusual by chance alone is very high.
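(To put numbers on it - the measurement counts below are my own illustration, not the grandparent's:)

    # With m independent measurements each tested at alpha = 0.05, the chance of
    # at least one "significant" result arising purely by chance grows quickly.
    alpha = 0.05
    for m in (1, 5, 20, 50):
        p_any = 1 - (1 - alpha) ** m
        print(f"{m:3d} measurements -> P(at least one false positive) = {p_any:.2f}")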

A proper study states the hypothesis *before* taking the data specifically to avoid this. If you have an anomaly in the data, you must state the hypothesis and do another study to make certain.

You have a null hypothesis and some data with a very low probability. Let's say it's P<0.01. This is such a good P-value that we can reject the null hypothesis and accept the alternative explanation. ...

Can you point out the flaw in this reasoning?

You have evidence that the null hypothesis is flawed, but none that the alternative hypothesis is the correct explanation?

The scientific method centers on making testable predictions that differ from the null hypothesis, then finding new data to see if the new hypothesis made correct predictions, or was falsified. Statistical methods can only support the new hypothesis once you have new data to evaluate.

The flaw is called the "fallacy of the reversed conditional".

The researcher has "probability of data, given hypothesis" and assumes this implies "probability of hypothesis, given data". These are two very different quantities, and one does not in general tell you the other.

Case 1: the probability that a person is a woman, given that they're carrying a pocketbook (high), versus the probability that a person is carrying a pocketbook, given that they are a woman (also high).

Case 2: the probability that John is dead, given that he was executed (high), versus the probability that John was executed, given that he is dead (low).

In case 1 it's OK to reverse the conditional, but in case 2 it's not. The difference stems from the relative sizes of the populations, which are about equal in case 1 (women and pocketbook carriers) and vastly unequal in case 2 (dead people versus executed people).

A low P-value (the probability of the data, given the hypothesis) does not in general mean that the probability of the null hypothesis, given the data, is also low.
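(Here's the same point as a quick Bayes' theorem calculation; the base rates are invented purely for illustration:)

    def posterior(p_data_given_h, p_h, p_data_given_not_h):
        """P(hypothesis | data) via Bayes' theorem."""
        p_data = p_data_given_h * p_h + p_data_given_not_h * (1 - p_h)
        return p_data_given_h * p_h / p_data

    # Case 2 above: P(dead | executed) is essentially 1, but executions are rare,
    # so P(executed | dead) comes out tiny despite the "strong" conditional.
    print(posterior(p_data_given_h=0.999,      # P(dead | executed)
                    p_h=1e-5,                  # P(executed) -- invented base rate
                    p_data_given_not_h=0.01))  # P(dead | not executed) -- invented
    # prints roughly 0.001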

Comment Three puzzles (Score 4, Interesting) 208

It is the job of the reviewer to check that the statistic was used in the proper context: not to check the result, but the methodology. It sounds like the social-science journals either have bad reviewers or are bad at methodology.

That's a good sentiment, but it won't work in practice. Here's an example:

Suppose a researcher is running rats in a maze. He measures many things, including the direction that first-run rats turn in their first choice.

He rummages around in the data and finds that more rats (by a lot) turn left on their first attempt. It's highly unlikely that this number of rats would turn left on their first choice based on chance (an easy calculation), so this seems like an interesting effect.
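(For concreteness, the "easy calculation" would look something like this; the counts are invented for illustration:)

    # Probability that k or more of n first-run rats turn left if each rat's
    # choice were a fair 50/50 coin flip (one-sided binomial tail).
    from scipy.stats import binom

    n, k = 30, 22                      # hypothetical: 22 of 30 rats turned left
    p_value = binom.sf(k - 1, n, 0.5)  # P(X >= k) under the no-preference null
    print(f"P(>= {k} lefts out of {n} by chance) = {p_value:.4f}")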

He writes his paper and submits for publication: "Rats prefer to turn left", P<0.05, the effect is real, and all is good.

There's no realistic way that a reviewer can spot the flaw in this paper.

Actually, let's pose this as a puzzle to the readers. Can *you* spot the flaw in the methodology? And if so, can you describe it in a way that makes it obvious to other readers?

(Note that this is a flaw in statistical reasoning, not methodology. It's not because of latent scent trails in the maze or anything else about the setup.)

====

Add to this the number of misunderstandings that people have about the statistical process, and it becomes clear that... what?

Where does the 0.05 number come from? It comes from Pearson himself, of course - any textbook will tell you that. If P<0.05, then the results are significant and worthy of publication.

Except that Pearson didn't *say* that - he said something vaguely similar and it was misinterpreted by many people. Can you describe the difference between what he said and what the textbooks claim he said?

====

You have a null hypothesis and some data with a very low probability. Let's say it's P<0.01. This is such a good P-value that we can reject the null hypothesis and accept the alternative explanation.

P<0.01 is the probability of the data, given the (null) hypothesis. Thus we assume that the probability of the hypothesis is low, given the data.

Can you point out the flaw in this reasoning? Can you do it in a way that other readers will immediately see the problem?

There is a further calculation/formula that will fix the flawed reasoning and allow you to make a correct inference. It's very well known; the formula has a name, and probably everyone reading this has at least heard of it. Can you describe how to fix the inference in a way that will make it obvious to the reader?

Comment Re:WTF (Score 2) 114

the binge-y sprawl of the Netflix format

The fucking what, man?

[Aside: I'll have a pint of what he's been snorting]

Netflix sometimes releases the entire season of a show all at once, allowing people to download the entire season and binge-watch. Hence "binge-y".

IIRC, they started with "House of Cards" as an experiment, and found that a lot of people liked the ability to watch it all in a weekend, or 2-3 episodes per night for a week, or whatever.

Having to wait a week to see the next episode allows people's interest to wane. Also, for complex plotlines (see: "Lost"), people tend to forget important events that happened weeks prior and have trouble keeping up with the plot. If the event from 5 episodes ago happened last night (or the night before), people have an easier time staying immersed in the plot.

Comment Re:Out of curiosity (Score 2) 297

if you don't understand the concepts involved, do not comment on a topic you don't understand

You stated - quite plainly - that this was "no coaxing, no suggestion", obviously using some strange legal definition of "no coaxing, no suggestion" of which I am unaware.

And of course, this is only coming from the complaint, which is the FBI's version of events.

If the FBI's version is this sketchy, what do you imagine the real situation was?

Or are you one of those people with "relatives in law enforcement", who have inside information about all officers being honest, forthright persons?

(Except for the ones caught on video, of course!)

Comment Out of curiosity (Score 1) 297

it's not entrapment

it really isn't

entrapment is getting you to do something you don't want to do

if the guy expresses his sincere, original desire to do something, no coaxing, no suggestion, that's 100% on him

i don't know why so many people don't understand what entrapment is

Huh. You don't say. And here I was reading some excerpts from the original complaint:

[The FBI supplied what Booker understood to be the explosives (actually inert material) needed for the bomb, then:]

CHS 1(*) provided Booker with a list of supplies that they needed to purchase in order to build the bomb.

Booker understood that CHS 1 and CHS 2 would build the VBIED

CHS 2 explained the function of the inert VBIED to Booker and demonstrated how to arm the device.

Out of curiosity, does this look like "no coaxing, no suggestion, that's 100% on him" to you?

Because, it doesn't to me...

(*) CHS stands for "Confidential Human Source", and means "FBI undercover agent"

Comment Privacy implications (Score 4, Interesting) 37

Lest we forget our current state of affairs wrt privacy, note:

If the police can access the data, they can use it to determine lots of things about you. For example, they can probably detect if there's a meth lab upstream from the current location, and use this as a guide for the placement of more sensors. Eventually they'll narrow it down to a single household, and know where the meth lab is.

They could do this with drug use as well. They could find evidence of, say, cocaine use in the stream and use this to place more sensors, then narrow it down to an individual household. Then see if the household member is in a critical job, such as ambulance driver or surgeon.

...or any job, really. They could just alert your employer to the fact that "someone in your household" uses drugs.

They could determine the ethnic profile of individual homes from the food eaten.

They could determine the health of the people living in individual homes in several ways - detecting diabetes, obesity, or diet, for example. Insurance companies would probably want this information.

And legally, their response would probably be "you have no right to privacy for anything that you flush into the public sewers", or "just as with driving or flying, you can choose not to do it" or some such.

I can see a lot of benefit from doing this (sewer monitoring in India is being used to show that polio has been eradicated), but we really need to get a handle on the privacy implications from the start, before the big abuses begin.

This will be like video cameras: expensive at first, then ubiquitous. Look to see a sensor at the outlet from each home in a couple of decades.

Comment So do I... (Score 1) 185

Firstly, not all manhole covers are round. I've seen triangular ones in Nashua and Japan, and there are a lot of rectangular ones in Italy.

Secondly, the reason manhole covers are generally round is that during the industrial age the four major machining operations were casting, cutting, turning, and drilling; since the covers had to be reasonably accurate while being mass-produced, they were made by turning (i.e., on a lathe).

Thirdly, this is a variation on a "Fermi problem", after Enrico Fermi, who famously used such questions to determine whether an interview candidate could think logically and make back-of-the-envelope estimates. However, this question in particular is famous, available to anyone who can look it up on the internet. Along with the answer.

That kinda defeats the purpose, doesn't it?

Since the question and answer are so readily available, I have to assume that you, the interviewer, didn't actually make up your own question. But it looks like *you* happen to enjoy these sorts of questions, and I'm sure you had to answer your share of them when you interviewed for the company.

That being said, I'm also interviewing your company, to see if I actually want to work here. Since you like questions like this, here's one for you...

(NB: I don't like working for idiots.)

Comment Even worse. (Score 5, Insightful) 289

What in the actual fuck? It is now illegal to donate to fund someone who has not been convicted of anything, and who has done a great service exposing criminal things our government has been up to?

It's much worse than that.

The president, by himself, created and enacted a law which carries a criminal penalty.

(My outrage meter is pretty much pegged, and I had a polemic about secret laws, secret courts, ordering US citizens killed, and such... but I think that one statement above stands by itself. The US is well and truly fucked.)

Comment Re:Wow, a whole 1%? (Score 4, Informative) 163

Check out the actual bump.

Anecdotal of course, but it sure seems like the announcement caused a massive spike in trading.

Also note that TSLA is up $4 over yesterday's close, so that's a total of 3%.

This is not nothing, given how little effort it took (a simple blog post and Twitter announcement).

Submission + - April Fool's Joke spoofs market algorithms (zerohedge.com)

Okian Warrior writes: Yesterday, Tesla's Twitter feed and blog announced the new Model "W" - "W" meaning "Watch" (as in "wristwatch"). The announcement included a photo of a watch sporting a cumbersome "Big Ben" glued to the face, along with this text:

"This incredible new device from Tesla doesn't just tell the time, it also tells the date. What's more, it is infinitely adjustable, able to tell the time no matter where you are on Earth. Japan, Timbuktu, California, anywhere! This will change your life. Reality as you know it will never be the same."

Clearly, this was an April Fool's joke, as anyone who read more than just the headline would immediately guess. The problem is that Bloomberg's fast-response team did not. The algos, on massive volume, spiked TSLA stock higher by nearly 1%....

Comment Poor quality of courses (Score 4, Interesting) 145

The extremely low pass rate for free online courses provides some evidence for this.

This is what's known as a "rationalization". Pick the one explanation you like, and then find some evidence to support it.

To really choose the best answer without experimentation, you write down *all* the possible explanations, and then pick the one that seems most likely.

(If you can do experiments you can eliminate explanations directly - but when you can't do this, the best course is to list all explanations and pick the simplest one.)

A simpler explanation of the low pass rate is that the online courses are of poor quality.

And indeed, many of the online courses are very low quality - especially the ones from high-end players.

The "Probabalistic Graphical Models" course by Stanford is known as a weeder (students get caught off guard with the difficulty), and the online version demonstrates this: the video shows Daphne Koller standing at a lectern droning on and on(*) with no vocal variety, reading the text of the online slides to the viewer... completely uninteresting and making a simple course boring as hell. (sample video.)

I thumbed through the edX course listing and hit on a course I liked - and the introductory video contained absolutely *no* information about the course! The full text of the course description read something like: "Join me as we explore the boundaries of $subject". (Is it a difficult course? Is it introductory or advanced? What level of math is required? What's the syllabus?)

I mentioned it to the head of edX in a private E-mail, and he responded by saying "that's an affiliate course [ie - from an affiliate institution] and we don't have control of the quality or content".

(WTF? You're running a startup and you don't have control over the quality? And he seemed to intimate that he was more interested in building the scope of their selection than the quality.)

Khan Academy is trying to get feedback from students to improve their presentation and make their lectures more effective, but I don't see any other players doing this.

Everyone's just taping their lectures and putting them online(**). The situation won't change until everyone burns through all the seed money and has to start making a profit based on results. For example, edX got $60 million in seed money, and they're burning through it with no viable business plan.

(*) Keep in mind that I'm critiquing the course, and not Professor Koller.

(**) For a counterpoint example, consider Donald Sadoway's Introduction to Solid State Chemistry, which is *not* a MOOC lecture series but is free to view online. Light years ahead of any MOOC course and well worth watching.
