## Comment Hey (Score 1) 153

That's 1507 systems to you and he's 11! ;)

But Prof. Andrew Ng says in his video for the machine learning class, "If you successfully complete this class you also get from me a signed statement of accomplishment stating how you did on the class that you can put on your resume." That's got to mean something to some people, given his reputation in the field, especially those who are trying to scoop up as many machine learners as possible in this whole Big Data rush.

Except wasn't Tesla the victim of such tactics courtesy of Edison?

obliv!on writes: *"As previously reported the White House has started to reply to petitions on the "We the people" website. They recently replied to the "Promoting Innovation and Competitive Markets through Quality Patents" petition.
The response mentions the **America Invents Act** and encourages the use of the USPTO's open implementation website. "There's a lot we can do through the new law to improve patent quality and to ensure that only true inventions are given patent protection. But it's important to note that the executive branch doesn't set the boundaries of what is patentable all by itself. Congress has set forth broad categories of inventions that are eligible for patent protection. The courts, including the U.S. Supreme Court, have interpreted the statute to include some software-related inventions."
The response goes on to note some open source and open data initiatives in government. While it is nice to hear that the administration understands "concerns that overly broad patents on software-based inventions may stifle the very innovative and creative open source software development community," the overall response redirects action to the petitioners, through participating in the open implementation site and contacting Congress, instead of promising that the administration will prepare additional legislative measures for Congress to consider on the petitioners' behalf to address the software patent issue."*

wiredmikey writes: *Anyone who argues that their website is too small or obscure for anyone to test for flaws isn’t paying attention to the fact that everyone’s website is being tested, all the time. If it’s accessible on the Internet, it’s a target.*

There are thousands of script kiddies, launching hundreds of thousands of automated attacks all the time.

There are, in fact, an amazingly large number of script kiddies in the world, each running automated vulnerability tools against blocks of IP addresses. These blocks are chosen for coverage, not potential. Note that script kiddies are scanning arbitrary IP addresses, not specific websites or ‘visible’ web applications; any website that is Internet accessible is a target.

Another argument for "Security through Obscurity" goes along the lines that most website owners don’t believe their site has any value to a hacker. This, unfortunately, misses the mentality of a script kiddie: they are not out for specific information, nor are they targeting a specific company. The script kiddie is just looking for an easy target, often just for the sake of finding and exploiting security flaws because he or she can.

Even if your site has no commercial value, it can be used for attacks on other sites, or defaced because it was on someone’s mindless scanning list.

jfruhlinger writes: *"HP declared that it was going to dump its PC business, then changed its mind, leading fans of the webOS operating system it acquired from Palm hoping for a similar reversal. A close reading of HP VP Todd Bradley's comments on the subject hints that, while webOS tablets are not in the cards, the company seems interested in embedding the operating system in printers and similar devices."*

Did MS have similar restrictions placed on it when it bought Farecast?

You have no idea what you're talking about. The abstract clearly illustrates their type I probability as **at most** .05 which is pretty standard.

Using other figures from the abstract I approximate their type II probability using the following

Approximate critical t value: 1.96 for 46 df (from the t-table in Wackerly's Mathematical Statistics)

t confidence interval

4.2 = 2.435 + 1.96*se

implies se = .9005

since se = std.dev./sqrt(n) (recall n=47)

std. dev. = 6.1736 (approximately)

The effect size (d) is approximately .3934 under equal variance

This leads to the following approximate power

For a One-Tailed (Directional) Hypothesis

Observed Power: 0.778

For a Two-Tailed (Non-Directional) Hypothesis

Observed Power: 0.671
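For anyone who wants to check the arithmetic, here's a minimal Python sketch of those back-of-the-envelope steps. The se, sd, and effect size follow directly from the numbers above; the power formula is my own assumption (a normal approximation to the noncentral t, one-sample/paired design at alpha = .05), so the power figures land in the same neighborhood as the ones quoted rather than exactly on them:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

n = 47             # sample size from the abstract
t_crit = 1.96      # approximate critical t for 46 df
mean_diff = 2.435  # observed mean difference
ci_upper = 4.2     # upper confidence limit

# 4.2 = 2.435 + 1.96*se  =>  se = (4.2 - 2.435) / 1.96
se = (ci_upper - mean_diff) / t_crit  # ~0.9005
sd = se * math.sqrt(n)                # ~6.1736
d = mean_diff / sd                    # effect size, ~0.394

# Power via a normal approximation to the noncentral t
# (my assumed test variant; the original calculator isn't stated)
ncp = d * math.sqrt(n)
power_one_tailed = phi(ncp - 1.645)
power_two_tailed = phi(ncp - 1.96)

print(f"se={se:.4f} sd={sd:.4f} d={d:.4f}")
print(f"one-tailed power ~{power_one_tailed:.3f}, "
      f"two-tailed ~{power_two_tailed:.3f}")
```

Depending on which test variant and calculator you assume, the resulting power lands in roughly the .67 to .86 range, bracketing the figures quoted above.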

They had a directional hypothesis (mu phone on greater than mu phone off), and since I'm only using approximate values and numbers from their abstract, and I assumed by their use of ANOVA that equal variance was satisfied (so I used the same value for both), that .778 is pretty close to the standard .8. So I'm willing to bet their experimental design was such that their power is actually at least .8, and that the difference comes from my rounding and approximation, and from not actually using their recorded standard deviations (which I didn't see in the abstract, and I don't have access to the JAMA article itself).

If any methodological challenge can be made from just reading an abstract (which seems unlikely, since all of their methods are not expressed in it), it's that they don't explain which version of ANOVA they used (repeated measures seems most appropriate, and a quick review of other studies suggests they likely used it, but since they don't say so explicitly it's possible they used another ANOVA). One might also question why a regression model with mixed effects wasn't used, given the repeated measures and the potential for significant mixed effects in a longitudinal study, but given the relationships between ANOVA and regression it isn't really a fatal flaw.

Now, posting that ridiculous (see *) Science News article could only have one of a few purposes: (1) You're a Bayesian and you're rejecting a frequentist approach to this study. (2) You're against the use of all statistics based on their uncertainty versus deterministic/certain models from mathematics. (3) You think compounding errors affect this study. If I'm missing your real point, please feel free to elaborate.

If (1): I'm not going to dive into the pros and cons of Bayesian vs. frequentist, but I will say Bayesian models are built off of frequentist ones; it's not like they were independently developed in a vacuum. As such, while Bayesian methods generally act under different assumptions, they do inherit, and are confined by, some of the same restrictions as frequentist models. A real problem is that Bayesian models work well when you can effectively incorporate prior knowledge (as in domain-specific knowledge), which a statistician isn't likely to have. And how realistic do you really think it is to teach scientists Bayesian methods, when those all require some understanding of the frequentist models that scientists have already shown they don't fully understand?

If (2): if you have a neat proof that P = NP tucked away, you might want to get on with submitting it and claiming your million-dollar prize and possible Fields Medal, if not other accolades. Or if you have many previously unpublished exact methods, please publish them; we're at a point where computing power could actually do the necessary calculations, and that too could possibly net you a Fields Medal and more. Since most methods used are based on maximum likelihood, most powerful tests, or some other "at least this good" type of method, you don't need to fear the uncertainty. Without a way to map probabilistic methods to deterministic ones meaningfully, and with the infeasibility of conducting a census to actually obtain population parameters for most studies, I believe these are the best tools available for the job. If that's not good enough, try developing your own methods and see how well they measure up; maybe you're onto something, or maybe you have no appreciation for how sophisticated these methods actually are.

If (3): well, there is a difference between sensitivity and specificity. With that in mind, once you've collected the data for the sample, no one is stopping you from implementing a quality control method to verify the validity of each entry. Also, many statistical procedures (including most implementations of ANOVA, because really, who is calculating this by hand?) automatically adjust for changes in errors and minimize error effects. Mistakes and errors can and do happen, but no one said there aren't ways to deal with such things. Statisticians, quality control engineers, and even artificial intelligence / machine learning researchers have been happily using such methods to deal with this problem with confidence. ;)

So either you gravely misunderstand statistics or you're trying to fool other people for some reason. Either way you need a better statistics background badly!

(*) e.g. "Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions." -- No, they aren't inconsistent or mutually inconsistent, and the author [who, as a journalist and at best an undergraduate-level chemist/physicist, is hardly qualified to formally judge the merits of statistical methods or their application in the sciences] shows his own misunderstanding of statistics. Stupidity such as "Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials," as if there were clinical trials **before** statistics. Something akin to "damn statistics muddled my flawless clinical trial design." Uhm... *no*, you'd have no clinical trial designs without statistics. Statistics can be misused by someone who doesn't understand the underlying model assumptions, in such ways as the article describes. The only "truth" (if such a thing even exists) in that article is that many people (who aren't statisticians) use statistics without understanding model assumptions, frequently misunderstand what type I and type II errors even are, don't fully grasp statistical versus practical significance, and as a result frequently misinterpret their findings. So I'd take all that to mean that scientists would benefit from handy access to actual statisticians more than from a single-semester course in statistical methods that leaves them hopelessly lost and unprepared to do statistical analysis. Which I can agree with, but that doesn't mean the problem is with statistics; it's with its use by non-statisticians.

Do any Neurologists read /. ?

Would a 2.4 micromole per minute change in glucose metabolism in the orbitofrontal cortex and temporal pole region be of any practical significance? What is the expected value and what is considered "normal" (perhaps not in the statistical sense) variation for glucose metabolism you'd see in a PET scan in this part of the brain in general?

I get that the study shows the effect is statistically significant (they use a two-sample t-test and some version of ANOVA for multiple comparisons; hopefully they used repeated-measures ANOVA, since that's more appropriate (though maybe regression with mixed effects would be even more appropriate still), given that at least some subjects are in both groups by their randomized crossover design).

I'm just curious if this is a case of a result that is statistically significant, but not really of any practical significance or if 2.4 micromoles per minute change from expectation would be something that alarmed a Neurologist after looking at a PET scan for this region.

I'm pretty sure a large and dense neural network, even using something as well known and "simple" as backpropagation, can generate a net that's not comprehensible to the humans that programmed it.
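To illustrate the point, here's a toy backpropagation sketch in Python (the network shape, seed, and learning rate are arbitrary choices of mine). Even for something as small as XOR, the trained weights that come out are a soup of real numbers with no obvious human-readable meaning, and that opacity only gets worse with scale:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-x))

# XOR training data: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units (arbitrary)
# w1: input->hidden weights, row 2 holds the hidden biases
w1 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(3)]
# w2: hidden->output weights, last entry is the output bias
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    h = [sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + w1[2][j])
         for j in range(H)]
    o = sigmoid(sum(h[j] * w2[j] for j in range(H)) + w2[H])
    return h, o

def epoch_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

lr = 0.5
initial = epoch_loss()
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # backprop: delta at output (squared error, sigmoid derivative)
        do = (o - y) * o * (1 - o)
        # deltas at hidden units, using pre-update w2
        dh = [do * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= lr * do * h[j]
        w2[H] -= lr * do
        for j in range(H):
            w1[0][j] -= lr * dh[j] * x[0]
            w1[1][j] -= lr * dh[j] * x[1]
            w1[2][j] -= lr * dh[j]
final = epoch_loss()

print("loss went from", round(initial, 4), "to", round(final, 4))
print("trained weights (try reading these):", w1, w2)
```

The training works, but good luck explaining from the raw weights *why* the net computes XOR; that's the comprehensibility gap, at four hidden units.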

This will probably burn karma points, but I have a strong feeling this needs to be said.

On some level your statement that "If you can't do (or follow) the math, then you don't know the physics" is true, but only as a reduction to the absurd. I'm talking about someone who literally has no grasp of any basic number skill, geometry concept, or ability to observe naturally occurring macroscopic phenomena. In that case, though, I think there needs to be a reasonable explanation as to why such a person would even be interested in or exposed to the topic to begin with.

"Physics IS mathematics" - more like physics is EXPLAINED with mathematics. I fixed that for you.

You don't need to know how to compute an integral, let alone the details of the underlying logical statement represented by an integral expression or how to prove one mathematically (as opposed to evaluate one), to appreciate the generalization that integrals measure areas, volumes, and higher-dimensional analogues.

You don't need to know the chain rule, the general power rule, or the product or quotient rule, let alone the terribly complicated logical statements underpinning the notation, to appreciate the generalization that derivatives are rates of change.

You don't need to know how to work with vector-valued functions to understand that they can represent objects and their attributes, both macro and micro, in a wide variety of circumstances.

You don't need to know how to express mathematics as provable logical statements in order to appreciate that it works. Being able to evaluate an expression is not the same thing as proving the fundamental underpinnings that expression rests on.

Physicists aren't generally sitting around proving new theorems (unless you're Witten, Werner, maybe a couple of others, or some of their grad students) and publishing them in math journals. Usually physicists are implementing an instance of some already established area of mathematics that seems to suit their theoretical needs, so they can predict something and go look for it in a lab. Or they are just asking mathematicians for assistance, like Weber and Gauss, which is neither the first nor the last such academic coupling.

The point is that most physics models are implementations of existing maths, under the assumption that the math is logically sound and meaningfully applies to what they want to know. Such as using an integral to measure the energy stored in a capacitor. That's a task much more akin to computer programming than to the activity of mathematicians.
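To make the capacitor example concrete, here's a short Python sketch (the component values are made up for illustration): numerically integrate v(q) = q/C from 0 to Q, and check it against the closed form E = Q²/(2C) = ½CV². This is exactly the "implement an existing piece of math" flavor of work described above:

```python
C = 1e-6   # capacitance in farads (illustrative value)
V = 5.0    # final voltage in volts
Q = C * V  # final charge in coulombs

# Energy is the integral of v(q) = q/C dq from 0 to Q.
# Trapezoidal rule; exact here since the integrand is linear in q.
n = 10000
dq = Q / n
E_numeric = sum((i * dq / C + (i + 1) * dq / C) / 2 * dq
                for i in range(n))

E_closed = 0.5 * C * V ** 2  # equivalently Q**2 / (2*C)

print(E_numeric, E_closed)
```

Nothing here requires proving the fundamental theorem of calculus; you just need to know that the integral of voltage over charge gives energy and let the machine sum it up.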

If someone can't adequately break down what they are doing to help someone who knows very little about the subject matter "get it," or at least appreciate its marvel on some level, without walking through the mechanics of some mathematical algorithm, then I'd be worried that the person doesn't really know what they're doing and is resorting to "blue box" strategies.

Thought experiments (or summaries of scientific research and problems; how do you expect to attract young scientists who aren't reading academic journals because they don't have access or are too young?) have been crucial to the development of the sciences. So the outright dismissal of their merits, on the sentiment that some technical threshold must be met before one can contribute, shows a complete lack of understanding or appreciation of how much such discourse has progressed our understanding of mathematical and scientific principles. Even if it means addressing silly questions seriously sometimes.

So how does Wolfram's own creation Wolfram Alpha do in comparison against the other search giants?

c0lo writes *"Not only did China decline to attend the upcoming Nobel peace prize ceremony, but urged diplomats in Oslo to stay away from the event warning of 'consequences' if they go. Possibly as a result of this (or on their own decisions), 18 other countries turned down the invitation: Pakistan, Iran, Sudan, Russia, Kazakhstan, Colombia, Tunisia, Saudi Arabia, Serbia, Iraq, Vietnam, Afghanistan, Venezuela, the Philippines, Egypt, Ukraine, Cuba and Morocco. Reuters seems to think the 'consequences' are of an economic nature, pointing out that half of the countries with economies that gained global influence during recent times are boycotting the ceremony (with Brazil and India still attending)."*

An anonymous reader writes *"Remember, about a month ago, when a researcher claimed he had a proof that P != NP? Well, the proof hasn't held up. But blogs and news sites helped spur a massive, open, collaborative effort on the Internet to understand the paper and to see if its ideas could be extended. This article explains what happened, how the proof was supposed to work, and why it failed."*

Well, they are not one and the same, sure, but the Swedish Pirate Party also hosts The Pirate Bay itself, so you can't completely separate them from each other either.
The Pirate Party Becomes The Pirate Bay’s New Host

Obviously both sites and the Swedish Pirate Party are betting (pretty hard) on next month's election, a successful outcome of which would, as previously posted, put TPB and perhaps now WikiLeaks inside the Swedish Parliament.

Backed up the system lately?