Comment: Can't say I love it *yet*. (Score 1) 106

by jcr (#49515131) Attached to: Swift Tops List of Most-Loved Languages and Tech

Coming from many years of Obj-C development, I can acknowledge several ways in which Swift is superior, but the learning curve is somewhat steeper than the transition from C to Objective-C was.

Aside from the language itself, Swift playgrounds are wonderful. We're getting closer all the time to a Smalltalk way of writing code.

-jcr

Comment: p-value research is misleading almost always (Score 5, Interesting) 184

by SteveWoz (#49495363) Attached to: Social Science Journal 'Bans' Use of p-values

I studied and tutored experimental design and this use of inferential statistics. I even came up with a formula that took 1/5 the calculator keystrokes when learning to calculate the p-value manually: take the standard deviation and mean for each group, then calculate the standard deviation of these means (how different the groups are), divide it by the mean of these standard deviations (how wide the groups of data are), and multiply by the square root of n (the sample size for each group).

But that's beside the point. We had 5 papers in our class for psychology majors (I almost graduated in that instead of engineering) that discussed why controlled experiments (using the p-value) should not be published. In each case my knee-jerk reaction was that the authors didn't like math, or didn't understand it, and just wanted to 'suppose' answers. But each article attacked the abuse of the math, and they were written by proficient academics at universities who did this sort of research. I came around too.

The math is established for random environments, but the scientists control every bit of the environment, not to get better results but to detect things so tiny that they really don't matter. The math lets them misuse the word 'significant' as though there were a strong connection between cause and effect. Yet every environmental restriction (same living arrangements, same diet, same genetic strain of rats, etc.) invalidates the result. This is the difference between internal validity (finding an effect within the experiment) and external validity (the effect applying in real life). You can also detect effects that are weaker (by the square root of n) simply by using larger groups. A study can be set up so as to likely find 'something' tiny and earn the research prestige, while another study with different controls turns out the opposite result. And none of it applies to real life the way reading the results of an entire population living normal lives would.

You have to study and think for quite a while, as I did (even walking the streets around Berkeley to find books on the subject going back 40 years), to see that the words "99 percent significance level" mean not a strong effect but more likely one so tiny, maybe a part in a million, that you'd never see it in real life.
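For the curious, the keystroke shortcut described in the comment above can be sketched in a few lines of Python. This is only a sketch of the verbal recipe (sd of the group means, divided by the mean of the group sds, times sqrt(n)); the function name `quick_stat` and the equal-group-size assumption are mine, not the original poster's:

```python
import math


def sample_sd(xs):
    """Sample standard deviation (n-1 in the denominator)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))


def quick_stat(groups):
    """The shortcut described above: sd of the group means,
    divided by the mean of the group sds, times sqrt(n).
    Assumes every group has the same size n."""
    n = len(groups[0])
    means = [sum(g) / n for g in groups]
    sds = [sample_sd(g) for g in groups]
    return sample_sd(means) / (sum(sds) / len(sds)) * math.sqrt(n)


# Example: two small groups of n = 3 observations each.
print(quick_stat([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]))
```

For two groups this reduces to the familiar pooled two-sample t statistic (up to sign), which is presumably why it saved so many keystrokes.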

Comment: Re:Nope, now it's breast implants (Score 1) 173

by jcr (#49460537) Attached to: Microsoft Pushes For Public Education Funding While Avoiding State Taxes

The gov't saw the problem and reacted to it. Problem solved.

Yes, they scrambled their Potemkin squad and rounded up some TP to show to foreign journalists. Great job, eh comrade?

Are they doing something about the rest of these?

http://en.wikipedia.org/wiki/S...

waiting for some phantom invisible hand to solve it

Funny, but that invisible hand seems to be delivering a far better standard of living in countries that aren't ruled by commie rat bastards.

-jcr

