Comment: Bayes prior (Score 5, Interesting) 398

It doesn't look like he was under the influence at the time, but the term "driving out of his lane" does kind of give reasonable cause for drug use, but maybe that's profiling.

The problem with this logic is that it fails the "prior probability" test.

Suppose a policeman searches and finds the suspect carrying a large amount of cash, say $4000. That's consistent with a (supposed) drug purchase, so the cash can be confiscated under asset forfeiture laws (assets used in the commission of a crime).

Suppose a policeman notes a youtube video of a chemistry experiment showing a balance scale, some beakers, and jars of chemicals. Those are consistent with "meth lab", so the policeman can search and confiscate all the equipment in the poster's house (this has happened).

The problem with each of these, and your position, is that there is significant prior probability that the behaviour in question is *not* indicative of criminal activity. You are reversing the conditional probabilities.

To put it in words, you are equating "probability of driving out-of-lane, given that he's on drugs" (quite high), with "probability that he's using drugs, given out-of-lane driving" (actually, quite low).

People temporarily drive out-of-lane a great deal to avoid animals and small obstacles, and people temporarily drive out-of-lane because they're distracted. The number of people out-of-lane because they're on drugs is vanishingly small.
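To put some (made-up, purely illustrative) numbers on this, here's a quick Bayes' theorem sketch. The base rates are assumptions for the sake of the example, not real statistics:

```python
# Hypothetical numbers to illustrate the reversed conditional.
# Assume: 1 in 1,000 drivers is on drugs; a drugged driver drifts
# out of lane 80% of the time; a sober driver does so 2% of the time
# (avoiding animals, momentary distraction, etc.).
p_drugs = 0.001
p_lane_given_drugs = 0.80
p_lane_given_sober = 0.02

# Total probability of observing out-of-lane driving.
p_lane = p_lane_given_drugs * p_drugs + p_lane_given_sober * (1 - p_drugs)

# Bayes' theorem: P(on drugs | out-of-lane)
p_drugs_given_lane = p_lane_given_drugs * p_drugs / p_lane
print(f"P(drugs | out-of-lane) = {p_drugs_given_lane:.3f}")
```

Even though a drugged driver is very likely to drift out of lane, the posterior probability that an out-of-lane driver is on drugs comes out under 4%, because sober drivers vastly outnumber drugged ones.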

Taken to extremes (and we know the police will do this), pretty-much *any* behaviour can be considered consistent with drug use.

In the case of the home lab above, it doesn't matter that the poster is missing key components, nor that he only has some of the ingredients. "Meth makers use glassware, he's got glassware, therefore he's a meth maker".

You see where this leads?

If a policeman observes a crime, take the appropriate action - that's fine. If he *observes* another crime while dealing with it, that's fine too.

But that's not a justification to rummage around in a person's rights just to see what can be pinned on the suspect.

If he doesn't observe a crime, he shouldn't go looking for one.

Comment: F.Lux helps with that (Score 2) 51

by Okian Warrior (#49513415) Attached to: Colors Help Set Body's Internal Clock

I've used F.Lux and it does everything it says. It's a polite program, and I've got no problem with it per se, but I removed it from my system.

For one, the sunset transition happens in a couple of seconds, and it's quite noticeable. The speed isn't a problem, nor is the "noticing", but I think a slower sunset might be more effective.

The bigger issue was "length of day". F.Lux synchronizes to the local length of day (based on your latitude and the current date), so in the winter you're still seeing short days and sunset at 5:00 PM. If you're subject to SAD, then F.Lux won't help with that.

(But, granted, it does feel good on the eyes when it kicks in.)

Part of the problem with light therapy is that it doesn't always work, or only works a little, or doesn't work for everyone. As a scientific result, this fairly shouts "not the complete explanation", so I've played around with this a bit to see what's really happening.

I'm convinced that "length of day" plays a big part in our internal clocks, and that things like heavy-blue video have an effect. For example, "The Daily Show with Jon Stewart" has a lot of blue and is shown late at night. Watch it with red sunglasses and see if you feel more tired/ready to sleep after watching.

In terms of scientific discoveries, I think there's some low-hanging fruit here. Straightforward hypotheses and studies could be done which would completely characterize the issue, and would point to simple, inexpensive, and drug-free cures for a handful of issues.

Comment: Been there, done that. (Score 5, Interesting) 51

by Okian Warrior (#49512889) Attached to: Colors Help Set Body's Internal Clock

I'll just leave this here:

http://science.slashdot.org/co...

Noontime clear-sky sun measures 9500, blue light through an office window with indirect daylight is 250, a desk lamp measures 45, and an LCD TV up close measures 7 uW/cm^2 in the frequency range of the retinal ganglion cells (480 nm), which are thought to be the part of the eye that senses daily cycles. (Mammalian Eye [wikipedia.org] on Wikipedia.)

So far as I can tell, laptops and related devices don't generate an appreciable amount of energy in this range; it's more the artificial indoor lighting.

As an experiment, I've started wearing red-tinted wrap-around sun glasses 2 hours before bedtime. I can still work, read, watch TV and all that, but the glasses mask off the blue frequencies, telling the brain that the sun has gone down.

It had an almost immediate effect. I'm a long-time sufferer of insomnia who has tried everything, but wearing the glasses fixed the problem in the first week.

I'm also a lot more "peppy" during the day, and I wonder if long term exposure to late-night artificial lighting (and low level during the day) is a cause of depression. Depression meds take about 6 weeks to have an effect, so I'm guessing that it would take about 6 weeks for the glasses to have an anti-depressive effect as well. I'm on week 3 with the glasses.

Comment: Re:Three puzzles (Score 1) 208

by Okian Warrior (#49498733) Attached to: Social Science Journal 'Bans' Use of p-values

He writes his paper and submits for publication: "Rats prefer to turn left", P&lt;0.05, the effect is real, and all is good.

There's no realistic way that a reviewer can spot the flaw in this paper.

Actually, let's pose this as a puzzle to the readers. Can *you* spot the flaw in the methodology? And if so, can you describe it in a way that makes it obvious to other readers?

I guess I don't see it. While P&lt;0.05 isn't all that compelling, it does seem like prima facie evidence that the rats used in the sample prefer to turn left at that intersection for some reason. There's no hypothesis as to why, and thus no way to generalize and no testable prediction of how often rats would turn left in different circumstances, but it's still an interesting measurement.

Another poster got this correct: with dozens of measurements, the chance that at least one of them will be unusual by chance alone is very high.

A proper study states the hypothesis *before* taking the data specifically to avoid this. If you have an anomaly in the data, you must state the hypothesis and do another study to make certain.
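A quick simulation makes the multiple-comparisons point concrete. The numbers here (50 measurements, 2000 simulated experiments) are arbitrary choices for illustration:

```python
import random

# Sketch: a researcher records 50 unrelated measurements and tests
# each at alpha = 0.05. Even when no real effects exist, the chance
# that at least one test comes up "significant" is very high.
random.seed(1)
alpha, n_tests, n_experiments = 0.05, 50, 2000

false_alarms = 0
for _ in range(n_experiments):
    # Under the null hypothesis, each p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(n_tests)]
    if min(p_values) < alpha:
        false_alarms += 1

rate = false_alarms / n_experiments
analytic = 1 - (1 - alpha) ** n_tests  # about 0.92 for n = 50
print(f"simulated: {rate:.2f}, analytic: {analytic:.2f}")
```

With 50 looks at the data, finding *something* with P&lt;0.05 is the expected outcome, not a discovery.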

You have a null hypothesis and some data with a very low probability. Let's say it's P&lt;0.01. This is such a good P-value that we can reject the null hypothesis and accept the alternative explanation. ...

Can you point out the flaw in this reasoning?

You have evidence that the null hypothesis is flawed, but none that the alternative hypothesis is the correct explanation?

The scientific method centers on making testable predictions that differ from the null hypothesis, then finding new data to see if the new hypothesis made correct predictions, or was falsified. Statistical methods can only support the new hypothesis once you have new data to evaluate.

The flaw is called the "fallacy of the reversed conditional".

The researcher has "probability of data, given hypothesis" and assumes this implies "probability of hypothesis, given data". These are two very different quantities, and reversing them is not always valid.

Case 1: Probability that person is woman, given that they're carrying a pocketbook (high), Probability that person is carrying a pocketbook, given that they are a woman (also high).

Case 2: Probability that John is dead, given that he was executed (high), Probability that John was executed, given that he is dead (low).

In case 1 it's OK to reverse the conditional, but in case 2 it's not. The difference stems from the relative populations, which are about equal in case 1 (women and pocketbooks), and vastly unequal in case 2 (dead people versus executed people).
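Case 2 can be made concrete with toy counts (the figures below are invented for illustration, not real statistics):

```python
# Toy numbers for Case 2: out of 1,000,000 people who died in some
# period, suppose exactly 50 were executed, and every execution
# was fatal.
dead = 1_000_000
executed = 50
executed_and_dead = 50  # everyone executed died

# P(dead | executed): high -- executions are fatal.
p_dead_given_executed = executed_and_dead / executed

# P(executed | dead): low -- almost all deaths have other causes.
p_executed_given_dead = executed_and_dead / dead

print(p_dead_given_executed, p_executed_given_dead)
```

Same joint count in the numerator, wildly different denominators: that asymmetry is exactly what gets lost when the conditional is reversed.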

Given a low P value (P of data, given hypothesis) does not in general indicate that the probability of the null hypothesis is also low (P of hypothesis, given data).

Comment: Three puzzles (Score 4, Interesting) 208

by Okian Warrior (#49495277) Attached to: Social Science Journal 'Bans' Use of p-values

It is the job of the reviewer to check that the statistic was used in the proper context: not to check the result, but the methodology. It sounds like social journals simply either have bad reviewers or suck at methodology.

That's a good sentiment, but it won't work in practice. Here's an example:

Suppose a researcher is running rats in a maze. He measures many things, including the direction that first-run rats turn in their first choice.

He rummages around in the data and finds that more rats (by a lot) turn left on their first attempt. It's highly unlikely that this number of rats would turn left on their first choice based on chance (an easy calculation), so this seems like an interesting effect.
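The "easy calculation" is a binomial tail sum. With hypothetical numbers (say 15 of 20 rats turning left, which the source doesn't specify), it looks like this:

```python
from math import comb

# Sketch of the "easy calculation": if 20 rats each turn left or
# right with probability 1/2 under the null hypothesis, the chance
# that 15 or more turn left is a binomial tail sum.
n, k = 20, 15
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k} of {n} turn left | chance) = {p_value:.4f}")
```

That comes out around 0.02, comfortably under the 0.05 threshold, which is exactly why the result looks publishable at first glance.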

He writes his paper and submits for publication: "Rats prefer to turn left", P<0.05, the effect is real, and all is good.

There's no realistic way that a reviewer can spot the flaw in this paper.

Actually, let's pose this as a puzzle to the readers. Can *you* spot the flaw in the methodology? And if so, can you describe it in a way that makes it obvious to other readers?

(Note that this is a flaw in statistical reasoning, not methodology. It's not because of latent scent trails in the maze or anything else about the setup.)

====

Add to this the number of misunderstandings that people have about the statistical process, and it becomes clear that... what?

Where does the 0.05 number come from? It comes from Fisher himself, of course - any textbook will tell you that. If P&lt;0.05, then the results are significant and worthy of publication.

Except that Fisher didn't *say* that - he said something vaguely similar, and it was misinterpreted by many people. Can you describe the difference between what he said and what the textbooks claim he said?

====

You have a null hypothesis and some data with a very low probability. Let's say it's P<0.01. This is such a good P-value that we can reject the null hypothesis and accept the alternative explanation.

P<0.01 is the probability of the data, given the (null) hypothesis. Thus we assume that the probability of the hypothesis is low, given the data.

Can you point out the flaw in this reasoning? Can you do it in a way that other readers will immediately see the problem?

There is a further calculation/formula that will fix the flawed reasoning and allow you to make a correct inference. It's very well-known, the formula has a name, and probably everyone reading this has at least heard of the name. Can you describe how to fix the inference in a way that will make it obvious to the reader?

Comment: Re:WTF (Score 2) 114

by Okian Warrior (#49455445) Attached to: Daredevil TV Show Debuts; Early Reviews Positive

the binge-y sprawl of the Netflix format

The fucking what, man?

[Aside: I'll have a pint of what he's been snorting]

Netflix sometimes releases the entire season of a show all at once, allowing people to download the entire season and binge-watch. Hence "binge-y".

IIRC, they first started with "House Of Cards" as an experiment, and found that a lot of people liked the ability to watch it all in a weekend, or 2-3 episodes per night for a week, or whatever.

Having to wait a week to see the next episode allows peoples' interest to wane. Also, for complex plotlines (see: "Lost"), people tend to forget important events that happened weeks prior and have trouble keeping up. If the event from 5 episodes ago was last night (or the night before), people have an easier time staying immersed in the story.

Comment: Re:Out of curiosity (Score 2) 297

if you don't understand the concepts involved, do not comment on a topic you don't understand

You stated - quite plainly - that this was "no coaxing, no suggestion", obviously some strange legal definition of "no coaxing, no suggestion", of which I am unaware.

And of course, this is only coming from the complaint, which is the FBI's version of events.

If the FBI's version is this sketchy, what do you imagine the real situation was?

Or are you one of those people with "relatives in law enforcement", who have inside information about all officers being honest, forthright persons?

(Except for the ones caught on video, of course!)

Comment: Out of curiosity (Score 1) 297

it's not entrapment

it really isn't

entrapment is getting you to do something you don't want to do

if the guy expresses his sincere, original desire to do something, no coaxing, no suggestion, that's 100% on him

i don't know why so many people don't understand what entrapment is

Huh. You don't say. And here I was reading some excerpts from the original complaint:

[The FBI supplied, what Booker understood was, the explosives (actually inert material) needed for the bomb, then:]

CHS 1(*) provided Booker with a list of supplies that they needed to purchase in order to build the bomb.

Booker understood that CHS 1 and CHS 2 would build the VBIED

CHS 2 explained the function of the inert VBIED to Booker and demonstrated how to arm the device.

Out of curiosity, does this look like "no coaxing, no suggestion, that's 100% on him" to you?

Because, it doesn't to me...

(*) CHS stands for "Confidential Human Source", and means "FBI undercover agent"

Comment: Privacy implications (Score 4, Interesting) 37

by Okian Warrior (#49450069) Attached to: 'Smart Sewer' Project Will Reveal a City's Microbiome

Lest we forget our current state of affairs wrt privacy, note:

If the police can access the data, they can use it to determine lots of things about you. For example, they can probably detect if there's a meth lab upstream from the current location, and use this as a guide for the placement of more sensors. Eventually they'll narrow it down to a single household, and know where the meth lab is.

They could do this with drug use as well. They could find evidence of, say, cocaine use in the stream and use this to place more sensors, then narrow it down to an individual household. Then see if the household member is in a critical job, such as ambulance driver or surgeon.

...or any job, really. They could just alert your employer to the fact that "someone in your household" uses drugs.

They could determine the ethnic profile of individual homes from the food eaten.

They could determine the health of individuals living in individual homes in several ways - detecting diabetes, or obesity, or diet for example. Insurance companies would probably want this information.

And legally, their response would probably be "you have no right to privacy for anything that you flush into the public sewers", or "just as with driving or flying, you can choose not to do it" or some such.

I can see a lot of benefit from doing this (sewer monitoring in India is being used to show that polio has been eradicated), but we really need to get a handle on the privacy implications from the start, before the big abuses begin.

This will be like video cameras: expensive at first, then ubiquitous. Look to see a sensor at the outlet from each home in a couple of decades.

Comment: So do I... (Score 1) 185

by Okian Warrior (#49435009) Attached to: The Key To Interviewing At Google

Firstly, not all manhole covers are round. I've seen triangular ones in Nashua and Japan, and there are a lot of rectangular ones in Italy.

Secondly, the general reason manhole covers are round is that during the industrial age the four major machining operations were casting, cutting, turning, and drilling, and since the covers had to be reasonably accurate while being mass-produced, they were made by turning (i.e., on a lathe).

Thirdly, this is a variation of a "Fermi problem", after Enrico Fermi, who famously used such questions to determine whether a candidate could think logically and make back-of-the-envelope calculations. However, this question in particular is famous, available to anyone who can look it up on the internet. Along with the answer.

That kinda defeats the purpose, doesn't it?

Since the question and answer are so readily available, I have to assume that you, the interviewer, didn't actually make up your own question. But it looks like *you* happen to enjoy these sorts of questions, and I'm sure that you had to answer your share of these when you interviewed for the company.

That being said, I'm also interviewing your company, to see if I actually want to work here. Since you like questions like this, here's one for you...

(NB: I don't like working for idiots.)

Comment: Even worse. (Score 5, Insightful) 289

What in the actual fuck? It is now illegal to donate to fund someone that has not been convicted of anything, and who has done great justice exposing criminal things our government has been up to?

It's much worse than that.

The president, by himself, created and enacted a law which carries a criminal penalty.

(My outrage meter is pretty much pegged, and I had a polemic about secret laws, secret courts, ordering US citizens killed, and such... but I think that one statement above stands by itself. The US is well and truly fucked.)

Comment: Re:Wow, a whole 1%? (Score 4, Informative) 163

by Okian Warrior (#49392819) Attached to: Tesla's April Fool's Joke Spoofs Market Algorithms

Check out the actual bump.

Anecdotal of course, but it sure seems like the announcement caused a massive spike in trading.

Also note that TSLA is up $4 over yesterday's close, so that's a total of 3%.

This is not nothing, given the scope of effort they made (a simple blog post and twitter announcement).
