IIUC, his lawyers requested that certain materials not be produced, and in doing so quoted a section of the state law which exempted a particular category of material from being required to be produced. If you don't like the phrasing, talk to the people who wrote the law. His lawyers were just doing their job, and making it easy for the judge.
I don't think they count as science...until they make predictions that match the later observed results. Then they do.
Unfortunately, as you pointed out, actually recreating the simulation can be absurdly difficult. And if it's not reproducible, then it's not science.
That said, when I worked at a transportation study commission, we used models all the time. We never deceived ourselves that they were correct, but they were a lot better than just guessing. Policies were built around their 20-year projections. Often we'd have several very different 20-year projections based on different assumptions about what would be done in between. (Would this transit project be successful? Would that bridge be built? What effect would building the other highway have on journey-to-work times?) The results were never accurate. They were subject to political manipulation...but so was the choice of which projects got built. It was a lot better than just guessing, but it sure fell a lot short of science.
I think of this frequently when I read about the models, and the problems that people have with accepting their projections. Usually the objections aren't based on plausibility, but rather on which beliefs make people comfortable. And in those cases I tend to believe the models. But I sure don't think of them as "sound science".
OTOH: Do you trust the "Four Color Theorem"? It's a mathematical proof that any planar map can be colored with four colors so that no two adjacent regions share a color (regions that touch only at a single point don't count as adjacent). The proof is so complex that no human can follow it. Do you trust it? Would you trust it if a lot of money was riding on the result?
Even math is less than certain. A complex proof is only as trustworthy as the reliability of every one of its steps multiplied together, and both people and computers make mistakes. There are lots of optical illusions demonstrating that people will dependably make the same mistake. So you can't really trust math. But just try to find something more trustworthy. You need to learn to live with less than certainty, because certainty is always an illusion.
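The "multiplied" point is easy to make concrete with a back-of-the-envelope sketch; the per-step reliability and step count below are made-up numbers, purely for illustration:

```python
# If each of n proof steps is independently correct with probability p,
# the whole chain is correct only with probability p ** n.
p = 0.9999   # assumed per-step reliability: one slip per ten thousand steps
n = 10_000   # assumed number of steps in a very long proof

overall = p ** n
print(round(overall, 3))  # roughly 0.368: tiny per-step doubt compounds fast
```

Even a 99.99%-reliable checker, applied ten thousand times, leaves you with barely better than coin-flip confidence in the whole chain — which is why long proofs get independently re-verified.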
Who's going to tell the judge no? Who's going to enforce it?
Sometimes a judge will be so egregiously corrupt that the higher courts will discipline them, but it's quite infrequent, and I've never heard of it happening when he was acting to support the local politicos. (And even then the "discipline" is generally trivial in comparison to the offense.)
Are you certain about the "state secrets act"? It seems to me that National Security Letters cover the same ground...and then some.
The Republican Party that was responsible for emancipation (as an act of war against the rebellious South) is only vaguely related to the current Republican Party. The Democrats have a closer link, and again, the civil rights movement was a political attack against the Dixiecrats, who pretended to be Democrats but actually had an independent agenda.
P.S.: Given what the Federal Govt. has become, are you so sure states' rights was a bad idea? You can trace the current Federal Govt. back to the centralization imposed (by both sides!) during the Civil War.
P.P.S.: Under privatization, prisons have become de facto sources of slave labor. So don't claim that slavery has been eliminated. Its nature has been changed, but it isn't gone.
Just because you are convinced a particular technology is going to succeed doesn't mean that you want it to do so. Betting can, thus, be better than investing.
I think there is a qualitative difference between notifying large end users like Facebook in advance, and notifying people in the distribution system for a general release. It's the former that inherently means the people who aren't large end users with privileged access get left exposed for longer than necessary, and that's what I'm objecting to.
You're latching onto this specific case, perhaps because you have some connection to it, but I'm talking about the general principle here. In general, it is not unreasonable to assume that if a vulnerability has been found by two parties in rapid succession, there may be a common factor involved, which may mean that other parties will also find it in the same time frame, and that an extra day may therefore be very significant.
Obviously most serious security bugs don't sit there for years, then have two groups discover them at almost the same time, as seems to have happened in this case, and need half the known Internet to update their systems as a precaution because no-one really knows whether they've been damaged by the vulnerability at any time over the past couple of years.
ROTFL. Yep, large corporate bureaucracies, they ALWAYS do exactly the right thing, in a matter of hours.
If it's that funny to you, why are you defending giving them a day of advance warning? Some of us did have a patch rolled out within a couple of hours of the public announcement, and presumably we could have had it rolled out a day earlier in the alternative situation. Once again, in this case one day in two years obviously isn't that significant, since we're all going to have to assume keys were compromised and set up new ones anyway. But if this were something that only got committed three days ago, it would be a different story.
Since "people" cannot be negative, by necessity (dev team) + (other people) >= (dev team)
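Spelled out as a trivial check (the counts below are hypothetical, just to illustrate the inequality):

```python
# "People" counts can't be negative, so adding outside reviewers
# never shrinks the total reviewer pool.
dev_team = 5       # hypothetical number of dev-team reviewers
other_people = 3   # hypothetical number of outside reviewers

assert other_people >= 0              # people cannot be negative
total = dev_team + other_people
assert total >= dev_team              # (dev team) + (other people) >= (dev team)
print(total)
```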
You're still assuming that the dev teams, or to be more precise the parts of the dev teams who will actively review new code, are the same size. That isn't necessarily true at all, so the "provided everything else is equal" part of your last sentence is the problem here.
My point is there's no "might" about it - as long as the arbitration clause applies to both parties and the arbiter is a neutral one, it's a perfectly legal and enforceable clause...
It's still highly uncertain whether a court would find a contract to exist at all under these conditions.
Even if it does, you can always go to court and argue for your right to be there because the other guy's term about arbitration is unenforceable for whatever reason. The court might disagree and send you back to arbitration, but they won't stop you coming in the door in the first place.
What I really don't like about the whole statement behind it is the implied assumption that closed source offered any kind of better protection.
Which statement do you think implied that? I don't see anything about it in this thread.
The most wear-sensitive part of a laser printer is the photosensitive drum. If I recall correctly, the old LaserJets had the drum integrated with the toner cartridge, so you replaced the most quickly wearing part of the printer every four or five thousand pages. It's no wonder they lasted so long. The mechanical parts that move the paper through the printer are pretty robust, so I wouldn't be surprised if the printers go until the capacitors in the electronics dry up, or the internal power connectors go bad.
This isn't a case "insisted upon by a conservative group". This is Mann suing a journalist for libel, and the journalist requesting info from the university under FOIA to prove his case.
That would be interesting, if it were true. Here's what TFA says:
The ruling is the latest turn in the FOIA request filed in 2011 by Del. Robert Marshall (R-Prince William) and the American Tradition Institute to obtain research and e-mails of former U-Va. professor Michael Mann.
"Del." I assume is short for "delegate". According to their website, the American Tradition Institute's tag line is "Free Market Environmentalism through *Litigation*". I assume this means they aren't pals with Greenpeace, or even the Sierra Club, any more than the National Socialists in Germany were pals with the socialist Republicans in 1930s Spain.
Depends on what you consider "hiding the research". A fishing expedition through a scientist's personal correspondence is an invitation to judge his work on *political* grounds.
In science your personal beliefs, relationships, and biography are irrelevant. There are evangelical Christian climate scientists who believe climate won't change because that would contradict God's will as expressed in the Bible. These scientists may be regarded as religious crackpots by their peers, but that hasn't prevented them from publishing in the same peer-reviewed journals as everyone else. Since their papers are invariably climate-change skeptical, clearly they are publishing work which supports their religious beliefs. But their motivations don't matter. What matters is what's in their scientific publications.
In 1987, Gary Hart's presidential bid and political career were ruined when he was photographed cavorting on a yacht named "Monkey Business" with a woman who wasn't his wife. Now I didn't care how many bimbos he was boinking, but a lot of people *did*, which made it a political issue (albeit a stupid one in my opinion). Do we really want to use the coercive power of the state to dig through the private lives of controversial scientists?
It's a pretense that any of it would serve a scientific purpose. Maybe Mann is intent on overthrowing capitalism and creating a socialist utopia. That would be relevant if he were running for dogcatcher, but it's irrelevant to what's in his scientific papers. Scientists publish papers all the time with ulterior motives, not the least of which is that they're being paid to do research that makes corporate sponsors happy. As long as what's in the paper passes muster, it's still science.
What about acting? Or fiction? These are artificial experiences that evoke real emotional responses. Once the right buttons in your brain are pushed, most of your brain can't tell the difference between what is real and what is synthetic.
Granted, authenticity in human interactions is important, but it's overrated. Fake engagement often is a perfectly acceptable substitute. Situations where people put considerable effort into *seeming* pleasant usually *are* more pleasant than they would be if everyone felt free to paste their indifference to you right on their faces.
So this is a very interesting technology. What's disturbing about it isn't that people might be fooled into thinking the user is truly interested; it's that the user himself no longer puts any effort into creating that illusion. What if that effort is in itself something important? What if fake engagement is often the prelude to real engagement? Maybe you have to start with polite interest and work your way up to the real thing; I suspect the dumber parts of your brain can't tell the difference. If that's true, taking the user's brain out of the interaction means that interaction will automatically be trapped on a superficial level. This already happens in bureaucratic situations where employees are reduced to rules-following automatons. Take the brain out of the equation and indifference follows.
I suspect that the researchers are well aware of these issues; I believe that I discern a certain deadpan, ironic puckishness on their part. People who truly view engagement with other people as an unwelcome burden don't work on technologies that mediate between people.