As a result, [a] pure statistician is not very useful: generic analysis can be performed by software, while in-depth analysis requires specific knowledge.
In-depth analysis requires a real understanding of statistics as well as of the domain. CS knowledge, at least as commonly taught, is not a substitute for the statistics requirement.
This is not unlike complaining that assembly coding is dying. Well, yes, we now have less need to code everything that way because we have better tools.
This is not a valid analogy. HLLs automated some of the rote, mechanical aspects of implementing algorithms. They do not automate away the need for a higher-level understanding of what you are doing.
You don't need to ask for permission to test your car with simulations.
Agreed. Google is being misleading in its arguments, which raises the question of whether it is being dumb or acting dumb. I have my opinion as to which it is, but neither inspires confidence in Google's judgement and motives, and confidence is of the essence when it comes to getting self-driving cars accepted.
Simulations can only test for what the simulation programmers have accounted for.
And they are also based on assumptions about the response of the cars' sensors to the real world.
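The coverage gap described above can be made concrete with a toy sketch in Python (all names and numbers here are hypothetical, not drawn from any real simulator): a simulated sensor only degrades in the ways its author wrote down, so any unmodelled real-world condition is silently "handled" and can never fail a simulated test.

```python
# Toy sketch: a simulated lidar that only models the conditions its
# author thought of. Conditions absent from the model (glare, dust,
# sensor saturation) cannot cause a failure in simulation.

def simulated_lidar(scene):
    """Return a 'measured' distance; only fog and rain are modelled."""
    distance = scene["true_distance"]
    if scene.get("weather") == "fog":
        distance *= 1.05   # modelled: fog slightly inflates readings
    elif scene.get("weather") == "rain":
        distance *= 1.02   # modelled: rain adds a mild bias
    # Anything else falls through unchanged: the simulation is
    # "correct" for every condition nobody considered.
    return distance

def brake_decision(measured_distance, threshold=10.0):
    """Brake if the obstacle appears closer than the threshold."""
    return measured_distance < threshold

# In simulation, the car brakes correctly in every scenario we wrote down.
for weather in (None, "fog", "rain"):
    scene = {"true_distance": 8.0, "weather": weather}
    assert brake_decision(simulated_lidar(scene))
```

Every test passes, yet nothing here says anything about how the real sensor behaves under conditions the programmers never enumerated, which is precisely the point.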
Furthermore, good programmers often anticipate problems that lesser ones are oblivious to. As a result, the former may show signs of stress (which is actually concentration) early, while the latter don't realize things are going wrong until they see tests failing in ways they don't understand; only then do the stress levels reflect actual competence.
Two areas where this is particularly prevalent are concurrency and security - though often, in the case of security, the problems are not found until after deployment.
In the 21st century, people are screaming for the government to regulate their lives in order to protect them, to provide "security", and to "make people feel safe". It's the fag end of the smoldering socialist experiment.
It has nothing to do with socialism. There are a lot of self-described conservatives in favor of restrictive and intrusive regulation in the name of security.
The paper makes it clear that this is about remote sensing, and more about getting the response back from the remote location than getting the probe beam to it.
The list of other potential uses seems to have been added by the linked article's author, who does not seem to have asked himself why, if you are sending guide beams to the destination, can't you just modulate them?
The word 'weapon' does not appear in the paper, and the researchers do not seem to have attempted to guide powerful beams by this method. Given that the guide beams can create this channel, perhaps attempting to send an equally or more powerful beam through that channel would dissipate it.
From the last paragraph of the article:
"The study reveals nothing about the nature of the link between socialism and dishonesty. It might be a function of the relative poverty of East Germans, for example."
In other words, the study failed to control for the value each participant placed on the monetary gain from cheating, rendering it of little value.
Nevertheless, this caveat didn't stop the Economist from ignoring it in the very next sentence:
"All the same, when it comes to ethics, a capitalist upbringing appears to trump a socialist one."
It is the mirror that attracted my attention. Someone who cannot keep his attention on the road while he is driving shouldn't be driving, let alone raising kids.
It is far from clear that studying the arts in college will improve your creativity, let alone whether it will do so to a greater extent than some other field. On the other hand, studying can definitely expand your knowledge, and the right sort of knowledge will allow you to apply your creativity. For example, an understanding of technology will not necessarily guarantee a lifetime job in engineering, but if we assume that technology will be important in the foreseeable future, then that knowledge will, in general (and other things being equal), put you in a better position than someone whose education consisted of watching and discussing old movies.
Two rules of thumb (and nothing more): study things that are important, and not too narrowly (at least to start with).
The apparent discrepancy of the total volume of large boulders being greater than that of the visible craters they have supposedly come from is not resolved by the BNE. In the paper, this paradox is only mentioned in passing, and no definite resolution is offered. No-one seems to have ruled out the possibility that there are additional craters beneath the rubble, or that the excess includes remnants of the impactors. Perhaps there is an assumption that, absent the BNE, the boulders formed by early impacts should now be buried.
I believe the tradeoff of a CLI is between working more efficiently (typing commands without having to reach for the mouse and interrupt your flow) and a steeper learning curve (learning commands and their parameters, config file locations and their syntax, etc.).
For me, the primary benefit of a CLI, when presented by a decent shell, is the flexibility and power of being able to write and run tiny programs whenever it helps.
A CLI not backed by a decent shell is miserable, as MS-DOS demonstrated.
See, this is what I thought as well. The Higgs was well predicted and made sense in the standard model, and our measurements at the LHC seem to back up what physicists were speculating. On the other hand, BICEP2 is a much newer result and there's considerable controversy about whether it's a real result or a mistake.
So why would you automatically jump to the conclusion that the Higgs was the problem?
The last paragraph of the Royal Astronomical Society press release seems to be agreeing with you, suggesting that an error in the BICEP2 result (or, rather, its interpretation) is the most likely explanation:
"If BICEP2 is shown to be correct, it tells us that there has to be interesting new particle physics beyond the standard model" Hogan said.
IIRC, the BICEP2 result, if interpreted as resulting from inflation, indicates a surprisingly strong inflation event. The above quote suggests that inflation with the strength suggested by other measurements (e.g. the level of inhomogeneity in the CMB?) would not create this problem.
"The most ambitious aim of the project is to create a feature that would efficiently highlight the most relevant and pertinent reader comments on an article, perhaps through word-recognition software."
The object of the game is to get a complete load of bollocks accepted as the most relevant and pertinent reader comment on as many articles as possible. Extra points for the front page and headline articles.
Web browser maker decides to create a Disqus competitor, instead of working on their web browser.
It probably has something to do with the money:
"The two-year development project will be funded by a $3.89 million grant from the John S. and James L. Knight Foundation, the Miami-based philanthropic organization that specializes in media and the arts."
First, that the "natural language" requirement was gamed. It deliberately simulated someone for whom English is not their first language, in order to cover its inability to actually hold a good English conversation. Fail.
Agreed. It is easier to trick someone when he wants to believe, and the organizer of this event comes across as a gullible media whore in his eagerness to claim that the Turing test had been passed.
Second, that we have learned over time that the Turing test doesn't really mean much of anything. We are capable of creating a machine that holds its own in limited conversation, but in the process we have learned that it has little to do with "AI".
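The "limited conversation" point is easy to demonstrate: an ELIZA-style program a few lines long can sustain an exchange through keyword matching and canned deflections, with nothing resembling understanding behind it. A minimal sketch (the rules and responses are invented for illustration):

```python
# Minimal ELIZA-style responder: pattern matching plus canned
# deflections. No model of meaning is involved anywhere.
import re

RULES = [
    (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bi feel (\w+)\b", re.I), "Why do you feel {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]
DEFLECTIONS = ["I see.", "Please go on.", "Interesting."]

def reply(utterance, turn=0):
    """Echo a keyword back if a rule matches, else deflect."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # No rule matched: cycle through the canned deflections.
    return DEFLECTIONS[turn % len(DEFLECTIONS)]

print(reply("I feel tired"))    # → Why do you feel tired?
print(reply("my job is hard"))  # → Tell me more about your job.
```

The replies look responsive in short exchanges, which is exactly why holding a limited conversation tells us so little about intelligence.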
For its time, it was a pretty good stab at the issue, and one that implicitly recognized that intelligence is a generalized skill. It is a better measure than using chess-playing or mathematical theorem-generating. The fundamental problem with these alternative measures, and others like them, is the fallacy that, because humans use their intelligence to perform such tasks, the tasks necessarily require intelligence.
As there was nothing remotely resembling AI when Turing formulated the test, it is not surprising that he overlooked the degree to which ordinary conversation can be manipulated, and also the amount of effort people would put into doing so. I imagine he thought of his test as a scientific experiment, not a competition.