The burden of proof is to prove a negative?
Absolutely. This isn't a matter of publishing a scientific paper, it's a matter of public policy. We consider this normal when it comes to other matters -- for instance, suppose a pharmaceutical company wants to market a new drug. The burden of proof is on them to demonstrate to the FDA that the drug is safe. The burden of proof is not on the FDA to prove that the drug is dangerous or harmful.
Furthermore, consider what we know about carbon dioxide in the atmosphere: carbon dioxide absorbs and re-emits infrared radiation, i.e. it is a greenhouse gas; its atmospheric concentration has been rising steadily, as directly measured; and burning fossil fuels releases billions of tons of it every year.
None of the above statements should be controversial. What is controversial, is how the increased carbon dioxide is going to interact with the rest of the environment, and for that we use computer models. These models may be flawed, but the worst models will always be better than sticking your head in the sand and hoping for the best. The best computer models are saying that we can expect temperatures to rise.
Unlike the pharmaceutical example above, we already know from the non-controversial facts of the matter that any result other than an increase of temperatures would be surprising. If, say, a pharmaceutical company wished to market a drug containing cyanide, the FDA would and should certainly be interested whether there were some mechanism in place that is expected to render the cyanide harmless.
If you need an additional reason why we should not demand positive proof, consider the form such proof would have to take: simply, to let things be as they are, and to wait and see whether an environmental catastrophe ensues. If we were to take that course of action, and a catastrophe did ensue (as we have every reason to believe it would), we would have lost much time in implementing a solution, and the harm would be far greater.
"Noticeable impact" is not the bar that was set. "Cause significant and irreparable harm to life on earth" is the bar.
What constitutes "significant harm"? The IPCC is predicting average surface temperatures to rise by between 2.0 and 11.5 degrees F over the 21st century (according to Wikipedia). If we assume the impact is on the low end, would 2 degrees be significant? If you live in a northern climate, probably not; the weather might actually be nicer. If you live in Africa or India, it may be very significant, as in widespread famines and droughts. Even if things get bad, we in rich countries will still be able to buy food, but it isn't our place to think only of ourselves; the poor won't be able to buy food when their crops fail, and can't afford to move to a nicer climate. Just because America isn't going to become one big Death Valley doesn't mean we shouldn't be concerned.
Also, quite aside from global warming, there is also the ocean acidification that goes along with increased carbon dioxide.
It's (mostly) the politicians (both sides!) I blame. They decided to fool the people into agreeing with their respective intractable positions, instead of engaging them and building consensus on an issue that may or may not be politically tractable. We still don't have good public information that can be digested by laypeople.
You will never get consensus from people who are religiously opposed to the idea that human activity might cause environmental harm, and who believe that global warming is just a socialist plot to institute a world government. (Not that all global warming sceptics are like that, but many are.) There may be many who just need convincing, but for them the information is out there.
The first three examples given are, to me, fairly straightforward errors. Perhaps you'd rather that ghc said, "Hey dummy, Char doesn't belong to the typeclass Num, so don't try to use it as if it was."
The last one is questionable. Fortunately, ghc 6.12.1 (which I just tried it on) no longer refers to the monomorphism restriction in its error message. The inferred typeclass is still confusing, but the message tells you that you're using the wrong type for that context, from which it should be straightforward to diagnose the problem, or at least add some type annotations to give the compiler a better shot at providing a better error message.
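To make the discussion concrete, here is a minimal sketch of the kind of error in question and the usual fix (my own invented example, not the one from the article):

```haskell
import Data.Char (chr, ord)

-- Attempting arithmetic directly on a Char is a type error;
-- GHC rejects it with something along the lines of
-- "No instance for (Num Char) arising from a use of '+'":
-- bad = 'a' + 1

-- The usual fix is an explicit conversion to and from Int:
nextChar :: Char -> Char
nextChar c = chr (ord c + 1)
```

The error message is verbose, but it does point at the real problem: a Char is being used where a Num instance is required.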
I will concede that some ghc errors can be confusing, and the example given is certainly not the worst, but overall I'm pretty happy with the errors I get. If you consider Haskell errors to be particularly bad, perhaps you could provide us with an example of a language with clear, concise errors that all languages should aspire to?
If you want to learn something new without throwing away all your java experience, you might try Scala. I've heard good things about it (though I have no personal experience with it myself). As functional languages go, I prefer Haskell [1] as my default problem-solving language. You might have trouble finding a Haskell job, but it will teach you things that will be relevant in other languages.
Erlang is an interesting language. I view it as kind of a one-trick pony, but for distributed systems I've not seen anything better.
Define wealthy for me. Then prevent that definition from getting broader and broader as cash-strapped governments seek to acquire more money.
The definition of "wealthy" that I think is most useful at present is: any person who makes sufficient income from capital gains to be relatively unaffected by variations in the income tax. (Specifically excluded are those who are unaffected by changes in income tax on account of having no job, or having such a low income that their tax rate is close to zero.)
If the capital gains tax were made comparable to the income tax, then we'd perhaps need a more specific definition with a dollar amount attached.
Why can't it just shut down one of the two normal cores, and run the other core at a highly reduced rate to get the same power savings?
I'm not an expert in hardware design, but I'd guess that the energy savings from reducing the clock on a high-speed chip aren't all that dramatic. If you have a 1.5 GHz chip, it has to be designed around circuits that can reach a stable state in less than a nanosecond. A chip clocked at a third the speed can use longer wires and more complex circuits, and probably lower voltages, because it has a lot more time between clock cycles. The optimal design for the slower chip may differ considerably from the optimal design for the fast chip. Similarly, the fast cores might be simpler if they don't have to be capable of running at a slower clock.
Additionally, I've seen plenty of benchmarks where a higher-power draw chip that can get done with a task quickly and drop back to low-power idle mode is actually more energy efficient than a lower-power chip that takes longer to get the task done.
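As a back-of-the-envelope illustration of that "race to idle" effect (with invented numbers, not measurements): suppose a fast core draws 2 W while active and finishes a task in 1 s, a slow core draws 0.5 W but needs 6 s for the same task, and idle draw is 0.05 W. Over a 6-second window:

```haskell
-- Total energy (in joules) over a 6-second window, using the
-- made-up figures above: the fast core works for 1 s then idles
-- for 5 s; the slow core works for the full 6 s.
fastEnergy, slowEnergy :: Double
fastEnergy = 2.0 * 1.0 + 0.05 * 5.0   -- 2.25 J
slowEnergy = 0.5 * 6.0                -- 3.0 J
```

With these numbers the faster chip comes out ahead, which is the pattern those benchmarks show; whether it holds in practice depends entirely on the actual power curves of the cores involved.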
I'd guess that the slow core is designed for tasks that really aren't cpu constrained at all, but which might have real-time requirements, such as logging gps coordinates or accelerometer readings.
Reflections and shadows are easy from an implementation point of view, but they aren't "free" from a performance point of view, and as MidoriKid noted, rays don't all cost the same - with a good acceleration structure, you usually do approximately log(N) ray intersection tests for each ray, where N is the number of polygons. There are also problems with very large amounts of independently moving geometry - rebuilding the acceleration structure is generally N*log(N).
In practice, ray tracers slow down as you increase the amount of screen space taken up by complex objects, but they tend to be fairly insensitive to the total amount of geometry in the scene, whereas graphics cards tend to be limited by the total number of polygons in the scene.
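To make that scaling concrete, here is a rough cost model (a sketch with assumed logarithmic behavior, not profiler data):

```haskell
-- Per-frame tracing cost: each ray does roughly log2(N)
-- intersection tests against an acceleration structure
-- built over N polygons.
traceCost :: Double -> Double -> Double
traceCost rays n = rays * logBase 2 n

-- Rebuilding the acceleration structure each frame for
-- moving geometry costs roughly N * log2(N).
rebuildCost :: Double -> Double
rebuildCost n = n * logBase 2 n
```

Doubling the polygon count adds only about one extra intersection test per ray, which is why ray tracers are fairly insensitive to total scene geometry, while the per-frame rebuild grows slightly faster than linearly.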
Scientists will study your brain to learn more about your distant cousin, Man.