Here is a quick summary of the main ideas in the article:
Every time a customer purchases ice at Burning Man, a volunteer must walk to the ice truck, retrieve the ice bags, and bring them to the customer. This wastes time because each customer must wait for his or her ice to be retrieved from the truck. Transactions that require returning change to the customer also take extra time. Therefore, the ice purchasing process would be faster if a) the ice were already at the counter so the customer could pick it up immediately, and b) there were a “turbo line” for people who don't need change. Some nonexperts that BH talked to thought that Nevada health regulations might prohibit a), but it turns out they do not.
That's just over 100 words. Does using 1700+ words to communicate these relatively simple ideas really help anyone understand them better?
Hackers have shaken the free-software movement that once symbolized the Web’s idealism.
And then fails to provide any real evidence that this is true. It should take strong evidence to reach the conclusion that an entire "movement" has been "shaken" to the point that it has lost its symbolic meaning. I skimmed the rest of the article, but the authors pretty much lost me after that bit of nonsense.
People (both good and bad) have been finding flaws in open source software for decades. No one in the "movement" was surprised or "shaken" to hear about a few new discoveries. These bugs earned extra attention because of the ubiquity of the software, but still -- nobody has ever said that open source software is somehow, magically, bug free. The "idealism" is that a) people can actually find the bugs by looking at the source rather than reverse engineering; and b) once a bug is found, anyone is free to modify the code to fix it, rather than waiting on a business to decide that it merits patching, perhaps weeks or months later. And, as far as I could tell, this all worked very well with the "Shellshock" vulnerabilities. The bugs were found, and the patches were written and released not long after.
Thanks for posting that. I had the same thought. After writing at length about the dangers of making logical errors in argumentation, Haselton ends with this bizarre, irrational outburst. So, if a woman dresses modestly, she 1) is not a "real woman", 2) is "a moron", and 3) subscribes to some fringe, ultra right wing version of Christianity. Methinks he is violating "the rules of consistency and logic". Perhaps he thought this was a joke, but if so, it falls pretty flat given the tone of the rest of his essay.
Then, there's this nugget. Haselton claims that an objective cost/benefit analysis "is, in fact, the only rational defense of any action, ever." No. Doing something because it's the ethical or moral course of action can be perfectly rational, even if it would fail a straightforward cost/benefit analysis. I'd be suspicious of anyone who believes that the only way to make every decision is by approaching it strictly as an economics problem.
The limiting factor, it would seem to me, is that the ideal course to minimize speed has not been constructed.
As a starting point, I'd suggest making the entire course uphill, covered either with loose scree or extremely dense vegetation (machete not allowed), and have it include at least a few river crossings.
It hurts the palm of my hand to hold it the same way all the time.
Anyway, if this "improved" mouse ever becomes widespread, your face might be hurting, too, because you will probably be holding the palm of your hand there most of the time.
The patterns in data are not data. The data is not the analysis of the data which would be a pattern in the data.
Okay. Against what, exactly, are you arguing? When, at any point, have I claimed that "the data are the analysis of the data" or any such nonsense?
Let me remind you of one of your original claims:
Correlative statistics are not evidence.
Do you not understand that "patterns in data" includes correlative statistics? If not, let me make this clear: You originally claimed that neither data, nor the patterns in data, are "evidence". I've tried to explain why, to scientists, patterns in data, including correlative statistics, most certainly are evidence. That is all. For some reason, instead of responding to any of that, you want to keep arguing about the definition of "data", which, as I've also explained, was never in dispute.
Your lack of basic reading comprehension...
Cheap insults aren't necessary. Look -- your original post simply reflected a misunderstanding of how a particular bit of terminology is used by scientists. I thought it might be helpful to explain why. I apologize if I inadvertently offended you somewhere along the way.
No, I don't have a different definition of "data". My point was that your original post repeatedly confuses "evidence" and "proof". As I said, data, and more specifically, the patterns in data (correlative statistics are one example), are used as evidence all of the time in science. That is, in a nutshell, how science works. Data provide evidence, not proof, for or against alternative hypotheses. The strength of the evidence depends on the strength of the data, which encompasses all of the potential data problems you discussed in your original post. None of this has anything to do with disputing the definition of "data". Data are pieces of information (just as your Wikipedia article says), and collectively, they can provide evidence for or against scientific hypotheses. Another way to state it is to say that in science, evidence comes from data.
Your blanket statements that "data isn't evidence" and that "correlative statistics are not evidence" are not supported by the way real scientists actually use data. Scientists frequently use the results of statistical analyses, including correlative statistics, as evidence. Evidence does not imply proof of causality or any other underlying explanation, and that is where your original post seemed to get things mixed up. Evidence for a particular hypothesis simply means that the patterns observed in some data are consistent with the hypothesis.
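To make the terminology concrete, here is a minimal sketch of what "correlative statistics are patterns in data" means in practice. The dataset (fertilizer dose vs. plant height) and the numbers are invented purely for illustration; the `pearson_r` helper is my own, using only the standard library:

```python
# A correlation coefficient is a pattern computed from data, and that
# pattern can serve as evidence for (not proof of) a hypothesis.
# The measurements below are hypothetical, for illustration only.
from statistics import mean, stdev

dose   = [0.0, 1.0, 2.0, 3.0, 4.0]      # fertilizer dose (arbitrary units)
height = [10.2, 11.1, 12.9, 13.8, 15.1]  # plant height (cm)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson_r(dose, height)
print(f"r = {r:.3f}")
```

A strong positive r here is evidence that dose and height are related; it says nothing, by itself, about causation or mechanism, which is exactly the evidence/proof distinction at issue.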
The problem with data driven science... is that data isn't evidence.
Correlative statistics are not evidence.
I think you are confusing "evidence" with "proof". Data, and more specifically, the patterns in data, most certainly are evidence. If that were not true, then there would be no reason to even try doing science.
Having data isn't an accomplishment.
Any scientist who has spent years obtaining a hard-won dataset would strongly disagree with you. Consider, for example, the ground-breaking data generated a few years ago by the Human Genome Project, or the current explosion of data about exoplanets. These data most certainly do represent substantial intellectual and technical accomplishments. Now, if what you mean is that simply downloading someone else's data from the Web is not an accomplishment, then I agree with you.
Scientists need to be willing to get their hands dirty and get the data themselves.
I think you will find that, in the hard sciences at least, that's usually how it's done. The researchers who write the papers are usually the same people who were involved in collecting the data. However, for very large-scale studies (e.g., global biodiversity research), there is no way that a single scientist, or even a single research team, could gather all of the necessary data. In these cases, the only way to make the research tractable is to integrate multiple datasets.
Your points about the importance of understanding where a study's data came from, how they were collected, and what biases they might contain are all well taken. However, ignoring any of these factors is simply sloppy science, no matter whether the researchers collected the data themselves or someone else did.
You mean all those stickers on (mostly) trucks that show Calvin pissing on something aren't licensed?
Nope. Watterson never allowed his characters to be licensed for any merchandise beyond his books and a few calendars. Those stickers you see on trucks are all unlicensed ripoffs: http://en.wikipedia.org/wiki/C....
If someone offered you a guaranteed paycheck of $125,000 for 5 weeks of your time, and a possibility of $345,000 for 13 weeks if you make it to the finals, what would you do?
Watterson could have made a lot more money than that, doing a lot less. All he'd have needed to do was agree to commercial licensing deals for his Calvin and Hobbes characters, but he always refused because he felt doing so would cheapen his creations. Even now, such deals would probably still be lucrative. And he could no doubt make plenty of money from speaking engagements. For some people (who, I admit, are very rare), accumulating large amounts of money really is less important than their pride in their work or their privacy.
...I think there's a kernel of reason in the idea that someone of renown -- someone who has made a lot of money and become a familiar name in the process -- is expected to give a little bit back to their "fans" in return for benefiting them so much financially.
That's a good point, for sure. I can certainly understand why Calvin and Hobbes fans might wish that he were more accessible.
Hey, I'm not a developer/coder/programmer...
And that explains your comments quite nicely.
I don't think there is anyone here who would seriously disagree with you that the best possible scenario for the end user is dedicated UI design that is customized, when appropriate, for each target OS. But in the real world, those of us who must develop and maintain cross-OS applications with limited resources and small teams often don't have the luxury of building custom UIs for each target platform. Off-the-shelf cross-platform UI toolkits, such as Qt, are the only realistic way to support multiple OSs in these cases. It's either that, or develop for only one platform, which will usually have to be Windows due to the size of the customer base. That is a far worse option, in my opinion.
So no, the "unified GUI look" that toolkits like Qt strive for across platforms is not ideal. Any serious programmer already knows that. But oftentimes, it's the only practical way to support more than one OS, and for that reason, such toolkits are indispensable. Furthermore, those of us who don't use Windows or OS X (GNU/Linux, in my case) appreciate the benefits of these toolkits even more, because it means we have far more software options than we might otherwise.