
Comment Re:please no (Score 2) 423

One knows this because one studies nonlinear chaotic systems (in systems with far simpler coupled DEs), learns about things like the Kolmogorov scale, turbulence, and Lyapunov exponents, and monkeys about with solving nonlinear coupled ODEs with both adequate and inadequate integration stepsizes. From this one learns that the climate models are arguably some 30 orders of magnitude shy of a spatiotemporal step that one might reasonably expect to be able to integrate over some significant time to get an actual solution.

This gap is bridged two ways. One of the two ways is to make pure assertions about the physics in between the Kolmogorov scale and the scale we can afford to integrate. For example, forget local dynamics of thunderstorms -- thunderstorms are phenomena that are basically invisible on a 100x100x1 km grid. Assume that one can use some sort of probability distribution of thunder-storminess in the dynamics, and that this is adequate to describe all of the violent and rapid heat transport, vertical and lateral, in thunderstorms with sizes distributed on length scales of 2 to 10 km and with time scales of significant variation of a minute or longer (the time required to get out of your car and reach the house, of course). Do this repeatedly, with everything -- tornadoes (and other small scale velocity fields with nonzero curl) -- gone, replaced with an assertion regarding averages. Don't worry about the fact that none of these assertions can be formally derived, or that this procedure gives the wrong answer for every other chaotic system mankind has studied thus far (for example, try this for a simple damped driven rigid oscillator: replace the driving force with an average of almost any sort and see what happens), but don't forget to shout that the models are based on physics if anybody dares to point this out.
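To make that oscillator aside concrete, here is a minimal sketch -- my own toy, using the standard textbook chaotic parameter set (q = 2, A = 1.5, wd = 2/3 are assumptions, not anything from a climate code): integrate the damped driven pendulum once with its actual drive, and once with the drive replaced by its time average, which for a sinusoid is exactly zero.

import numpy as np
from scipy.integrate import solve_ivp

# Damped driven pendulum: theta'' + theta'/q + sin(theta) = A cos(wd t).
# q = 2, A = 1.5, wd = 2/3 is the classic chaotic parameter set (assumed).
q, A, wd = 2.0, 1.5, 2.0 / 3.0

def driven(t, y):
    theta, omega = y
    return [omega, -omega / q - np.sin(theta) + A * np.cos(wd * t)]

def averaged(t, y):
    # The time average of A cos(wd t) over a drive period is exactly zero,
    # so "replacing the driving force with an average" removes the drive.
    theta, omega = y
    return [omega, -omega / q - np.sin(theta)]

t_eval = np.linspace(0, 200, 4001)
full = solve_ivp(driven, (0, 200), [0.2, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
avg = solve_ivp(averaged, (0, 200), [0.2, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)

# The driven pendulum wanders chaotically over many radians; the "averaged"
# one simply decays to rest at theta = 0. Not remotely the same dynamics.
print("driven:   theta range", full.y[0].min(), "to", full.y[0].max())
print("averaged: final state", avg.y[:, -1])

Same equations, same initial condition; the only change is the averaging, and the two systems don't even live in the same qualitative regime afterwards.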

The other is even better. When the models are run, they are still nonlinear iterated maps, even if they are integrated with approximated dynamics and an enormous spatiotemporal step, so they still exhibit chaos and make lots of nifty patterns that "look like" weather (and even are a theoretically and empirically defensible approximation to weather, for integration periods of a week or so from reasonably well-known initial conditions, before the chaotic trajectories diverge to fill phase space and render them worthless for weather prediction any more). One gets, from even tiny perturbations of the initial conditions and/or physical parameters, butterfly-effect divergences that create an entire bundle of "possible microtrajectories" for the model system being solved, which is, note well, not even arguably the actual equation of motion for the coupled Earth-Sun-Atmosphere-Ocean system; it is a pure toy model that nobody sane would expect to actually work. And of course it empirically does not work, not even close. The microtrajectories produced, which generally only work across a reference period (trial data) by carefully choosing large, cancelling forcing terms in the approximated dynamics, end up having far too much variance (compared to the actual climate), the wrong autocorrelation spectrum (direct evidence of the wrong physics, but who is counting), and range from (for CMIP5 models) a handful that actually cool over very long time scales to some that go sky high.

The actual Earth, of course, only has one trajectory and it doesn't look anything like any of these model trajectories. So now comes the best part. The "ensemble" of microtrajectories is actually averaged and used as a prediction for the trajectory.

Words fail me. Again, to fall back on a trivial example: imagine taking a damped driven rigid rod oscillator operating in the chaotic regime, starting it from an "ensemble" of slightly perturbed initial conditions, integrating it on so coarse a timestep that one gets chaos, but perhaps chaos that is not even qualitatively similar to the chaos observed with an adequate timestep, and then taking the numerical average over the trajectories one obtains and asserting that this is a good approximation to the long time behavior of the system!
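The ensemble-averaging step can be played with directly in the same toy system. A sketch (again with my own assumed parameters, not anyone's production code): perturb the initial angle of the chaotic pendulum by a part in a million a hundred times, integrate, and compare the ensemble mean to any single trajectory.

import numpy as np
from scipy.integrate import solve_ivp

# Same chaotic damped driven pendulum as in the sketch above (assumed params).
q, A, wd = 2.0, 1.5, 2.0 / 3.0

def rhs(t, y):
    theta, omega = y
    return [omega, -omega / q - np.sin(theta) + A * np.cos(wd * t)]

t_eval = np.linspace(0, 200, 2001)
rng = np.random.default_rng(0)

# An "ensemble" of microtrajectories from part-per-million perturbations
# of a single initial condition.
runs = []
for _ in range(100):
    y0 = [0.2 + 1e-6 * rng.standard_normal(), 0.0]
    sol = solve_ivp(rhs, (0, 200), y0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
    runs.append(sol.y[1])            # track angular velocity
runs = np.array(runs)

mean = runs.mean(axis=0)
single = runs[0]                     # stand-in for "the one actual trajectory"

# Before the Lyapunov divergence the mean tracks the trajectory; after it,
# the mean settles toward a smooth value that no single trajectory follows.
for i in (100, 500, 2000):
    print(f"t={t_eval[i]:6.1f}  single={single[i]:+.3f}  "
          f"mean={mean[i]:+.3f}  spread={runs[:, i].std():.3f}")

Once the trajectories have diverged to fill the attractor, the ensemble mean is a statistic of the bundle, not an approximation to any trajectory in it -- which is the whole complaint.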

And this is before one does something even more striking. Linearize the driving force in some way, and predict that the derivative of this average of many chaotic trajectories is a valid predictor of some property of the actual, single trajectory of the actual chaotic system.

And don't forget, it is physics based. Or was, sort of (but not really), before you did the averaging. Now I don't have any idea what the basis or justification is for the result that is obtained. Trivial counterexamples demonstrate that the entire approach is so unbelievably flawed that it would literally take a numerical miracle for the result at any given integration scale to have the slightest relevance to any actually observed trajectory of the actual system being modelled.

But of course, they are still not done. After doing this averaging over some unspecified number of microtrajectories (well, they are specified, but not anywhere where the models are collectively presented, such as in Chapter 9 of AR5, lest it cause people to call into serious doubt the statistical treatment of the model results singly and collectively), the per-model average trajectories still have too much variance, the wrong autocorrelation and spectrum, produce utterly nonphysical distributions of atmospheric heat (tropical tropospheric hot spot, anyone?), and spend far, far too much time with temperatures higher than the observed temperature rather than below it everywhere outside of the reference period (training set data) for the last 165 years of thermometric data -- even after 32 adjustments have been made that spectacularly increased the warming of the present relative to the past 31 times and left it unchanged 1 time (odds 1 in 4 billion, at least if one assumes errors from the past are at worst unbiased; odds absolutely astronomical if one considers the UHI effect, which is ignored in the evaluation of e.g. HADCRUT4 and which somehow fails to cool the present relative to the past even in GISS, where they claim to correct for it).

So they average all of the models in CMIP5 together and call that the best prediction -- oops, I mean "projection", because predictions can be falsified and predictions have to be at least arguably physics based, and this superaverage of averages of individually badly failed microtrajectories of individual models -- models that are not even approximately mutually independent, that each have very different numbers of contributing microtrajectories and so are not even equally weighted in that regard, that use different spatiotemporal grid sizes and entirely different ways of treating the ocean, and that have to balance things like the radiative balance between CO_2, aerosols, and water vapor feedback in different ways to fit the reference period -- is most definitely not a prediction. In fact, as far as I can tell, it is a mere statistical abomination. But don't forget! Somewhere back in there there is actual physics!

The wondrous virtue of this is that one can plot the envelope of the average of the individual model microtrajectories (not the actual microtrajectories themselves, or their actual variance singly or collectively, as that would instantly reveal this for the nonsense that it is) and pretend that this variance is somehow a normal predictor according to the central limit theorem, so that as long as the bottom of this range doesn't get too far above the actual observed trajectory it doesn't falsify any of the contributing, non-independent, incorrectly weighted individual models with their structurally absurd microtrajectories!

Finally, one can ignore the fact that this average of averages of failed individual model microtrajectories visibly spends roughly 90% of its time warmer than the aforementioned e.g. HADCRUT4 everywhere outside of the reference period, both in the past and in the future of that period (and that the underlying single model average trajectories are visibly oscillating all over the place with far too great a variance even after being averaged), and then write the Summary for Policy Makers. In this summary, not one tiny bit of this enormous tower of unproven assumptions, questionable methods, outright worrisome intermediate results, and erasure of any vestige of connection to actual physics is ever mentioned. Instead its results are used to state at high confidence that post-1950 warming was more than half due to CO_2, in spite of the fact that almost all of that warming was confined to a single span of roughly 15 years (certainly no more than 20) out of the almost 65 years post 1950, and that almost as much warming was observed from 1920 to 1950 without much help from CO_2 -- warming that the superaverage of all of the models skates straight over, as one can see in figure 9.8a of AR5.

Indeed, I defy anyone to provide a quantitatively defensible definition of the term "confidence" as used in the SPM of AR5 for any of the assertions made therein about global average temperature or the consequences thereof. The term "confidence" is used in this document in the human sense: the writers of the section themselves strongly believe that their statements are true. However, this is a summary of supposedly scientific results, and any reader is naturally going to assume that the assertions of confidence are defensible, as they are anywhere else in science where this sort of terminology is used -- from approving new drugs to the confidence one has that a new aerodynamic design will work as predicted if one invests a billion dollars to build it -- rather than the moral equivalent of drug companies telling the FDA and NIH that they sincerely believe a new drug is safe and effective in spite of using absolutely indefensible steps in the statistical analysis that is, from start to finish, their sole basis for any sort of belief at all.

That's how one knows it. It's also why climate researchers are falling over one another to come up with explanations for this failure (see e.g. Box 9.2 in AR5, with a total of over 50 distinct hypothesized but obviously unproven explanations in the peer reviewed literature so far), why people are finally thinking that it is time to lose the worst of the CMIP5 models before they backfire and cost the entire discipline all of its credibility, and why estimates for total climate sensitivity are in freefall -- already under the 2 C by 2100 limit that all of the expensive measures being taken to ameliorate carbon dioxide were supposed to have achieved if we dropped CO_2 emissions so fast that it caused the collapse of western civilization as just one of many side effects along the way. Good news! We're there already, even if CO_2 rises to 600 ppm by 2100, according to most of the latest results, and we might be as low as 1 C -- hardly even noticeable, and arguably net beneficial!

1 C is what one expects from CO_2 forcing alone, with no net feedbacks. It is what one expects as the null hypothesis from the very simplest of linearized physical models -- one where the current temperature sits at a crossover in feedback, so that any warming produces net cooling and any cooling produces net warming. This sort of crossover is key to stabilizing a linearized physical model (like a harmonic oscillator) -- small perturbations have to push one back towards equilibrium, and the net displacement from equilibrium is strictly due to the linear response to the additional driving force. We use this all of the time in introductory physics to show how the only effect of solving a vertical harmonic oscillator in an external, uniform gravitational field is to shift the equilibrium down by \Delta y = mg/k. Precisely the same sort of computation, applied to the climate, suggests that \Delta T \approx 1 C at 600 ppm relative to 300 ppm.
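For completeness, one standard back-of-the-envelope version of that computation (the logarithmic forcing fit and the 255 K effective emission temperature are the usual textbook numbers, not anything derived here): linearize the outgoing flux F = \sigma T_e^4 about the effective emission temperature, exactly as one linearizes the spring about its equilibrium:

\lambda_0 = dF/dT = 4 \sigma T_e^3 \approx 4 \times (5.67 \times 10^{-8}) \times (255)^3 \approx 3.8 W/m^2/K

\Delta F \approx 5.35 \ln(600/300) \approx 3.7 W/m^2

\Delta T \approx \Delta F / \lambda_0 \approx 1 C

This is the no-feedback analogue of \Delta y = mg/k: the forcing plays the role of mg, and the Planck response \lambda_0 plays the role of the spring constant k.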

That's right, folks. Climate is what happens over 30+ years of weather, but Hansen and indeed the entire climate research establishment never bothered to falsify the null hypothesis of simple linear response before building enormously complex and unwieldy climate models. They built strong positive feedback into those models from the beginning, worked tirelessly to "explain" a single stretch of only twenty years in the second half of the 20th century -- badly, by balancing the strong feedbacks with a term that was and remains poorly known (aerosols) -- and asserted that this would be a reliable predictor of future climate.

I personally would argue that the historical climate data manifestly a) fail to falsify the null hypothesis; b) strongly support the assertion that the climate is highly naturally variable, as a chaotic nonlinear highly multivariate system is expected to be; and c) that at this point, we have excellent reason to believe that the climate problem is non-computable -- quite probably non-computable with any reasonable allocation of computational resources the human species is likely to be able to engineer or afford, even with Moore's Law, anytime in the next few decades, if Moore's Law itself doesn't fail in the meantime. 30 orders of magnitude is 100 doublings -- at least half a century. Even then we will face the difficulty of initializing the computation, as we are not going to be able to afford to measure the Earth's microstate on this scale, and we will need theorems in the theory of nonlinear ODEs that I do not believe have yet been proven before we have any good reason to think that some sort of interpolatory approximation scheme will succeed in the meantime.

rgb

Comment Re:The last sentence in the summary... (Score 1) 232

Was that to me? Sorry, I have physics classes to teach and am insanely busy teaching them, and there is no point in posting a short answer to a difficult or subtle question. I had time to answer this morning and did so. Not that I expect my reply to make any difference in your beliefs. If you wish to accept the word of the climate "oracles" as god-descended truth instead of something that, well, could easily be doubted on multiple grounds, I doubt that pointing those grounds out will change your beliefs. I'll merely point out that actual statisticians often make fun of climate scientists (see e.g. William Briggs' blog and his patient, detailed posts on the subject), and for pretty good reasons. Making reliable inferences from computational models of this class is something I've done a fair bit of work in, and it is very, very difficult. This isn't computing the trajectory of a baseball.

rgb

Comment Re:The last sentence in the summary... (Score 1) 232

I was replying to "Here is a graph...". It states that it LOOKS like SLR is already happening (duh!) and that the rise is accelerating.

As to whether or not the future projections are based on physics: How, exactly? Do you mean that there is physics in the models used to make those projections? No argument. Are the models capable of using the physics that is in them to make a prediction of future SLR that can be falsified? Not in any possible way. Hence one integrates the models (contingent on the assumptions that go into the "physics" inside, which is often in the form of semi-empirical formulae that kind-of-work for short-run weather forecasting in the models from whence the GCMs are descended, until chaos makes those predictions worthless), observes a staggering range of possible future climates, assumes further that in this case -- more or less uniquely in the general class of problems "like" this in mathematics and physics -- it's OK to solve the problem on a spatiotemporal granularity some 30 orders of magnitude larger than the Kolmogorov scale, assumes further that even though the resulting bundle of trajectories is so broad as to be nearly useless and each one is a "possible" future history of the climate, the mean of this bundle is a number that is somehow relevant to the future behavior of the actual climate as a single realization of a space of possibilities that is almost certainly far larger than the model space given the coarse graining and smoothing, goes one step beyond that and averages over many models that aren't even independent, and what -- prays to a benign deity that these are good numbers on which to bet trillions of dollars and millions of lives right now to -- perhaps -- avoid a catastrophe later?

What part of this makes sense?

rgb

Comment Re:The last sentence in the summary... (Score 1) 232

Excuse me? Seriously? The SLR since roughly 1870 is clearly published in a number of places and amounts to roughly 9 inches. Quite aside from the infinity of statistical fallacies one can generate by fitting linear trends to timeseries data: http://wmbriggs.com/blog/?p=51... -- or, if you prefer a longer and much more detailed statistical (Bayesian) explication of the problems: http://wmbriggs.com/blog/?p=51... -- and the fact that those problems are multiplied enormously when you seek to fit a nonlinear trend to the data, it is not terribly sensible to argue that this timeseries reveals "acceleration", presumably correlated with increased CO_2 near the end, when its greatest visible period of "acceleration" is in the early 20th century, when CO_2 levels were nearly irrelevant to any observed climate change in everybody's models.

Then we could analyze the other fallacies in this sort of graph used as an argument for 5 meter SLR by 2100. For example, the current rate of SLR is around maybe 3 cm/decade -- a bit over an inch a decade. In the 8.5 decades left in the century, we might be looking at anywhere from 8 inches to a foot of SLR based on the data as we have it now, foolishly extrapolating a linear trend indefinitely into the future for a highly nonlinear chaotic system that is perfectly capable of things like glacial transitions (either way) or century-scale droughts without any help at all from humans. However, this still doesn't do the problem justice, because of the differential probable error bars visible even in the figure you present, the fact that the measurement methodology changes near the end, and the fact that to properly account for SLR either way one really has to take gravity and surface deformation into account in multiple ways. In particular, the crust is still undergoing isostatic rebound from the melting of an ice layer several kilometers thick on the polar regions "only" 12,000 or so years ago. The continents continue to drift. The sea bottom continues to remodel as this occurs. Much of this produces changes that we are only barely able to measure, in some places some of the time, now (mostly with satellites, e.g. GRACE, though there is a bit of a chicken-and-egg problem there as well). There is also the fact that thermosteric expansion produces LOCALIZED SLR where warm water floats on cooler water, and can produce this sort of SLR in mid-ocean far from any tidal gauges. Tidal gauges in coastal areas are largely locked to local surface temperatures of the water. The satellite record includes this -- the tide gauge record does not. Since 70% of the Earth's surface is ocean, nearly all of that ocean is "far" from continental boundaries, and the comparatively tiny number of measurement stations that go back into the distant past are subject to isostatic changes that are impossible to measure retroactively or correct for in the present, the probable error in global SLR visible in these curves is IMO almost certainly significantly underestimated -- and that is before one gets to the factor of roughly 10 by which Briggs asserts one is likely to underestimate true error when fitting a linear trend to a timeseries.

So what the data might justify is this. The "rate" (linear trend) of SLR over the last 145 years is something like 2 plus or minus 2 mm/year -- it could be anywhere from basically 0 to as much as 4 mm/year, and this might well still underestimate the probable error. The "current rate" (measured with much better precision, but beware picking endpoints!) is perhaps order of 3 mm/year, plus or minus what, a mm/year? At least? Well within the long term average, and clearly visible as being (probably) equaled or exceeded in the past in periods with little possible correlation/causality linkage with CO_2, even in so short a record.
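The arithmetic behind those numbers, using only the figures quoted in this comment (the 9 inches, the ~145 year record, and the ~3 mm/year current rate are all assumed from the post itself, not pulled from any dataset):

# Back-of-the-envelope SLR arithmetic from the numbers above.
MM_PER_INCH = 25.4

total_rise_mm = 9 * MM_PER_INCH          # ~9 inches since ~1870
years = 2015 - 1870                      # ~145 years of record
long_term_rate = total_rise_mm / years   # mm/yr
print(f"long-term linear rate: {long_term_rate:.1f} mm/yr")  # ~1.6 mm/yr

# Naive extrapolation of the "current" rate (~3 mm/yr) over 8.5 decades:
future_mm = 3.0 * 85
print(f"naive 2100 extrapolation: {future_mm:.0f} mm = "
      f"{future_mm / MM_PER_INCH:.0f} inches")               # ~255 mm ~ 10 in

The long-term rate comes out around 1.6 mm/year, consistent with the "2 plus or minus 2 mm/year" range, and the naive extrapolation lands right in the 8-inches-to-a-foot window.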

There isn't any conceivable argument that can be made on the basis of either statistics or physics for the extrapolation of this already poorly known linear trend, augmented by an even more poorly known nonlinear trend, e.g. a quadratic term, into the future. Statistically it is pure nonsense. Physically one has to make a complex, teetering tower of assumptions about how air temperatures will change, in what spatiotemporal pattern they will change, and how those changes will melt the kilometer-thick layers of ice on top of Greenland and Antarctica, where the surface temperature of either one almost never so much as reaches the melting point of ice from below. Every block in that teetering tower is a Bayesian prior to the assertion of any sort of probable SLR rate in the future, and every single time somebody like James Hansen has gone public with his wild statements of Manhattan "probably" being underwater by now or SLR "probably" being 5 meters by 2100, they have been or are being actively falsified by the mere progress of time -- which in statistics means you have to go back and re-assess the posterior probabilities and essentially falsify or weaken the probable truth of your assumptions.

The simple fact of the matter is that we have no idea what SLR will be by 2100. The models that predict rapid, radical rise are the same models that are failing to predict the current stagnation in global warming, which is real enough that it is the continuous focus of climate papers at this point and rated its own "box" in chapter 9 of AR5. As Bayesian priors they are not so good, even before you add in all of the other assumptions needed to melt a few million cubic kilometers of ice that currently never reaches the melting point and spends most of the year well below it.

What we do know is that there is little reason to fear catastrophic damage from any rate of SLR observed with human instruments in the last 150 years. Or, really, a rate twice that size. This is actually the approximate limit of sober papers on the issue in climate science -- a few might still claim 30 inches (less than 1 meter) by 2100 but every additional year with a rate closer to 10 inches by 2100 when extrapolated reduces the probability that the higher end claims are going to be correct.

On a similar basis, Bayesian reanalysis of climate models is reducing their median predictions of total climate sensitivity. That "median prediction" is another statistical travesty, but since I'll probably get hammered as a denier as it is from pointing all of this out, we might as well leave that for another time. I'll only say that I hammer "stupid skeptics" just as hard when they fit a quadratic trend to (say) some post-2000 interval of global average surface temperatures and use it to argue that the planet is definitely cooling and the ice age cometh. My own assertion is simple: When we look at the simplest nonlinear systems -- things far, far simpler than the earth -- we observe a richness and complexity of phenomena that is utterly inexplicable in the simple, linearized models that dominate climate science discussions. We also learn things about how reliable even qualitative conclusions are when we attempt to integrate nonlinear fluid dynamic systems numerically at spatiotemporal scales much, much larger than the Kolmogorov scale of the dynamics.

What we learn there should make us consider the climate problem to be unsolved. Period. It would be absolutely amazing -- a miracle of sorts -- if climate models worked! Yes, they produce something that looks like "weather" (all the way back to Lorenz's original, much simpler computational models). But that weather is chaotic, and chaotic systems self-organize when one changes their driving. Entire patterns of turbulence appear and disappear (in sufficiently complex systems) even when one doesn't change any of the driving forces, and things like thermal efficiency and mean temperature abruptly and discontinuously change along with them. Perhaps one cannot prove that the Earth is a self-organizing system along the lines of Prigogine's suggestion, one that will nonlinearly oppose any change in its average state by reorganizing its dissipative mechanisms, but it is certainly a heuristically plausible possibility.
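Lorenz's own toy system is still the cheapest way to see the "patterns reorganize when you change the driving" point. A sketch, with the standard textbook parameter values (my choice, purely illustrative): vary the driving parameter r and watch the late-time behavior change regime entirely.

import numpy as np
from scipy.integrate import solve_ivp

# Lorenz '63: sigma = 10, b = 8/3, with the "driving" parameter r varied.
def lorenz(t, s, r):
    x, y, z = s
    return [10.0 * (y - x), x * (r - z) - y, x * y - (8.0 / 3.0) * z]

for r in (28.0, 14.0):  # 28: chaotic regime; 14: settles onto a fixed point
    sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], args=(r,),
                    t_eval=np.linspace(50, 100, 2000), rtol=1e-9, atol=1e-9)
    print(f"r = {r:4.1f}: late-time std of x = {sol.y[0].std():7.3f}",
          "(still wandering)" if sol.y[0].std() > 1 else "(settled down)")

Same equations, modestly different driving, qualitatively different climate of the system -- and nothing about the r = 14 run lets you linearly extrapolate what r = 28 looks like.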

In other words, physics itself should make us very, very wary of any sort of linearization or extrapolation of observed linear trends in what is almost certainly the most difficult problem in nonlinear, chaotic dynamics humans have ever attempted to solve -- with resources that, on the surface of things, are utterly inadequate to perform the computations at a scale that has any substantial chance of getting the right answers.

But there isn't any fame, fortune, or warm fuzzy I'm-saving-the-world feeling to be had from stating "I don't know, and we are unlikely to be able to do any sort of computation that can be relied upon to predict the future of the climate, with or without increasing CO_2." All we know is that claims of that sort of knowledge by the supposedly most knowledgeable have failed, time and again, and that the best computations of the uncomputable to date have failed to show much predictive skill on that front as well. Why is this even surprising? In no other area of science of equal complexity would anyone take the slightest notice. But then, in no other area of science (except, perhaps, medicine, where again fame, fortune, and warm fuzzies are often on the line) does anyone make such sweeping assertions on so flimsy a foundation.

rgb

Comment Re:The last sentence in the summary... (Score 2) 232

What I still haven't seen in is just 1 climate model that explains most of the observed current and historical data.

You could have ended this sentence right there and it would be accurate. So of course any additional clause that you append isn't going to change that. However, it does make the argument contained in that clause less compelling.

rgb

Comment Re:So? (Score 2) 488

Expect more AC posts like this, the power companies are paying green washers to come up with moronic arguments so people in the same tribe can re-post them thinking they actually make sense and won't look like a tool in the process:

Really. "The power companies" are paying people to blog against solar? So a company like, say, Duke Power posts a job opening somewhere and interviews candidates:

"So, we are interested in hiring somebody with excellent blogging skills."

"Oh, sure, if you observe my pale and pasty skin, my slightly overweight condition, and the callouses on my finger tips you can see that I have a long history of sitting and keyboarding instead of working. Indeed, my resume shows the same thing! Look at that -- I haven't got a single thing on there that Duke Power could possible be interested in. In fact, all I want to do is return home so I can visit my "tribe" online."

"Uh, so, why exactly are you applying for this job?"

"My mother is making me look for work! She claims that I'm stinking up the basement because I don't have time to shower or do her silly laundry while I'm busy online! Can you believe it?"

"You sound just perfect for our position. Now, let me ask you -- do you have a social conscience? I mean, is there anything you believe in very strongly -- world peace, God, the environment, racism, sexism? Oh, and I have to ask -- do you collect or distribute pictures of underage nude members of the animal kingdom more complex than arthropods on any of your computers?"

(Dazed silence.) "Uhhh, no? And, like I dunno, do anime cartoons of big-eyed sort-of-japanese nymphet ninja chicks count? They're AI, so they're probably less complex than an arthropod? Can I go now? I applied, so my mom will be happy."

"No, wait, you're hired! And before you panic, you get to work from home! Indeed, your job is going to be really simple: go online and trash-talk solar energy to your homies. But only rooftop solar. We are investing pretty heavily in solar ourselves and want to be seen by the public as being progressive (hey, we even bought out an entire power company called Progressive), but even though we are still paying people to let us load level their air conditioners in peak times, even though it is an enormous, expensive hassle to add generating capacity, even though our inclination to add more quick-online capacity would involve natural gas and hence fracking (speaking of which, how do you feel about throwing in the occasional word of praise for fracking, how it is making the world a better place and stabilizing the continental land mass so that it will eventually prevent the Big One, the next New Madrid earthquake as it were) we are terribly worried that rooftop solar will put us out of business in the next thirty years or so. We want to win the hearts and minds of America, and your online homeboys, well, they are America."

"Dude, if you call my friends `homies' one more time, I'm gonna leave and my mom can suck an oyster. Let me get this straight. All I have to do is dis rooftop solar while I'm playing my online games and visiting lame blogs and you pay me money? And I can do it from home?"

"That's it. You'll be a full-time work-at-home employee of Duke Power. Benefits and everything. You'll need to keep a log of the websites you visit to bash PV rooftop installations, and you'll have to undergo a brief training program where you learn of just how awful, dangerous, and expensive it is and how much better it is for consumers to let us install solar PV farms and continue to deliver energy safely to their wall sockets. Hey, you can even afford to move out of your mom's basement!"

"Well, OK. But nix on the moving thing -- my mom's a super cook and my Sailor Moon poster is kinda glued to the wall at this point, if you know what I mean. Thanks, Duke Power! You just hired yourself a troll! You just wait! When I get through with solar, none of my friends will even think of working out the amortization schedule and ROI on a $20,000 initial investment that goes on top of an expensive household! Because hey, like they are all living in their moms' basements too. But don't worry, I'll be sure they like pause their gaming to tell their parents how bad an idea it is to install anything that might disrupt their game playing. As a Duke Power employee, I promise to Use Even More Electricity with the super new gaming box I'm gonna be able to buy with my salary. And it won't run on solar!"

After all, it isn't like it would go viral and cause a national reaction if it were discovered that power companies were actually paying people to troll against solar under false pretenses. No reputational risk or anything. Sure, I believe you.

rgb

Comment Not surprising... (Score 5, Interesting) 147

... because of the way MongoDB actually stores records and parses them. It is more or less a simple tree or linked list, and hence doing almost anything involves descending branches to the leaves. This is horrendously inefficient in many contexts, while still being perfectly lovely in others. Just doing a match, though, can involve a non-polynomial time search. Maybe they've improved this from when I was trying to use Mongo to drive modelling, but I doubt it, as it would have involved substantially changing the way the data is actually stored and dereferenced. I had to cheat substantially in order to get anything like decent performance, and any of the SQLs outperformed it handily.
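One cheap way to see the scan behavior on a current MongoDB (which may well have improved since the experience described above) is to ask the server for its query plan. A minimal pymongo sketch, assuming a mongod on localhost and a throwaway "scratch" database -- both assumptions, pick your own:

from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local test server
coll = client.scratch.records                      # throwaway collection
coll.drop()
coll.insert_many([{"key": i, "payload": "x" * 64} for i in range(100_000)])

# Without an index, a simple equality match walks every document (COLLSCAN):
plan = coll.find({"key": 99_999}).explain()["queryPlanner"]["winningPlan"]
print(plan["stage"])  # typically COLLSCAN -- O(n) per match

# With a B-tree index, the same match becomes a logarithmic descent (IXSCAN):
coll.create_index([("key", ASCENDING)])
plan = coll.find({"key": 99_999}).explain()["queryPlanner"]["winningPlan"]
print(plan["stage"], "->", plan.get("inputStage", {}).get("stage"))  # FETCH -> IXSCAN

Whether that addresses the particular scaling wall described above is another question, but it does localize the "descend the structure for every match" cost the comment is complaining about.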

Note well that it was strictly a scaling issue. For small trees and DBs, it probably works well enough. For large DBs with millions of records and substantial structure, it is like molasses. Only worse.

rgb

Comment Re:OK (Score 1) 268

I had a friend who had a Benjamin, actually, but this was back in the '60s, and yes, I was jealous, as I had only a .177 break-action pellet rifle. It fired a specialty pellet that had, as you note, a muzzle velocity "comparable" to a .22 short. Yes (Google being Our Friend), .22 LR is 1200 fps (for a reason!), a .22 short is around 1000 fps, and a Benjamin is (usually, depending on model and mechanism) 900, which is quite respectable but depends on pellet weight.

When I decided as an Old Guy to get a really good hunting-class pellet rifle I looked hard at the Benjamin-Sheridans but ended up picking the Walther Falcon Hunter edition, which is also 900 fps and fires a variety of standard or "hunting class" .22 pellets. I actually haven't tried to fire it through a 4x4 -- but who knows? I got it for my sons (really, my youngest son, who is the most avid hunter) and it drops a rabbit as readily as a .22. I'm guessing that a hollow point pellet would quite possibly kill a deer shot at reasonably close range (10 yards or so, and a heart or perfect head shot) -- as a .22 might -- or, for that matter, a human. I doubt it would "drop" either one, though, and this is something I would never try with either rifle, of course, unless it is after the apocalypse and it is kill a deer with the pellet rifle or go hungry:-). In the old days I saw for myself that a Daisy BB gun would leave a very painful divot in human skin without quite penetrating (no, I did not pull that particular trigger). You would not try that with the Walther, as it would go clean through your leg if it didn't hit the bone, and would have a pretty good chance of chipping or breaking the bone.

The Walther, in other words, like the Benjamin Marauder etc., is definitely not a toy gun. I also have an older .177 caliber pellet gun that fires a pellet slowly enough that you can "see" it (barely) en route, and one doesn't fire it at a plywood sheet, as it might bounce back (or, more likely, embed itself 3 mm into the wood). No comparison.

Bear in mind that it isn't just muzzle velocity, it is mass. A .22 LR is typically a 40 grain bullet and a .22 short around 30 grains, compared to a "standard" .22 pellet at 14.3 grains and specialty hunting pellets at 20 to as much as 40 grains. The .22 LR has around 5x the kinetic energy of almost any pellet rifle's round out of the bore, and that's a simple fact. Second, there is ballistic drag. Pellets generally aren't fired fast enough to get sufficient stability from rifling and spin to be particularly accurate, which accounts for their "diabolo" waist. This also produces substantial drag -- the pellet is being stabilized by drag. This means that pellet rifles have a rapid dropoff of their muzzle velocity and are really only suitable for short range hunting of any sort of larger game. Real .22 rifles get enough spin that they can avoid the skirted diabolo design, avoid much of the drag, and still have equal or better precision and ballistics. The third issue is the sound barrier. That's the thing that limits .22 muzzle velocity even in the case of the rifle -- there is substantial turbulence as a bullet passes back through the sound barrier while slowing down, and one needs streamlined bullet shapes like those found in centerfire rifles (which do indeed fire even .22-ish caliber, highly streamlined, much more massive bullets at muzzle velocities well over the speed of sound) to have decent ballistics. Rimfire .22 LR bullets are not streamlined and are designed to shoot just under the speed of sound (or, in the case of 1200 fps, almost instantly drop down under it as the bullet "settles" out of the barrel), so they usually have decent but not impressive precision (bench grouping). Competition grade guns (rimfire or pellet) usually shoot bullets at muzzle velocities deliberately well under the speed of sound -- .22 shorts, not long rifles, for example. The needs of hunters -- high muzzle velocity at a substantial pellet mass, decent ballistics, high bullet energy -- are not always compatible with maximum precision at range. 900 fps .22 is a decent compromise, but the Benjamin Marauder (which absolutely is a very high end hunting-class rifle) comes in a .25 caliber version and can fire much heavier pellets. I have 20 grain pellets for the Walther -- I am guessing that they drop my muzzle velocity some, but not too much -- and CAN get much heavier pellets, basically up to 40 grains, equivalent to shooting a .22 LR bullet out of the pellet gun. At that mass, however, the muzzle velocity would be substantially lower. Good perhaps for high precision target shooting at shortish ranges, or game at those ranges; not so good for 30 to 50 yard shots. But then, pellet rifles are rarely particularly "good" for shots longer than 30 yards.
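The "around 5x" energy figure is easy to check. A back-of-the-envelope sketch using the masses and velocities quoted above (nominal catalog-style numbers, not chronograph data):

# Muzzle-energy check for the mass-vs-velocity point above.
GRAIN_KG = 64.79891e-6   # 1 grain in kilograms
FPS_MS = 0.3048          # feet per second to metres per second

def muzzle_energy_j(grains, fps):
    m, v = grains * GRAIN_KG, fps * FPS_MS
    return 0.5 * m * v * v

loads = {
    ".22 LR (40 gr @ 1200 fps)":      (40.0, 1200.0),
    ".22 short (30 gr @ 1000 fps)":   (30.0, 1000.0),
    ".22 pellet (14.3 gr @ 900 fps)": (14.3, 900.0),
}
for name, (gr, fps) in loads.items():
    e = muzzle_energy_j(gr, fps)
    print(f"{name}: {e:5.1f} J ({e / 1.3558:5.1f} ft-lbf)")

# .22 LR: ~173 J; standard pellet: ~35 J -- right at the claimed 5x ratio.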

At 900 fps one has a wide range of "hunting" class rifles with similar muzzle energies to choose from, all of which are going to have very similar ballistics and penetration capabilities at similar ranges, because most of the stuff that happens is pure physics. You can pay more for features -- especially precision-enhancing features or precompression, as those things require more engineering. The Walther isn't the most expensive hunting-class rifle made, but it is a damn good intermediate one and, if you look around with Google, a very popular one for good reason. One could argue that it is one of the best values out there -- basically the same muzzle velocity as the Benjamin and other very high end guns, extraordinarily good stability and balance, simple mechanics capable of high precision (small grouping) at 20-30 yards (it is actually a break action rifle -- a single-spring pump), and very high reliability. With a 20 grain hollow point pellet at 20 yards it is easily the match of any small game, including foxes or raccoons or groundhogs (none of which we would likely shoot where I live), and perfect for rabbits and squirrels or birds.

rgb

Comment Re:Found the IBM link. (Score 1) 268

...Or, \pi R^2 = 3 x 400 \approx 1200 square meters of collector area (concentrated down by the mirrors). If so, the collector surface receives around 1.2 MW peak, or, at 80% efficiency, 960 kW converted. The 12 kW is therefore not conceivably peak; it has to be a 24 hour average -- 12*24 = 288 kWh/day -- which assumes peak can be (nearly) maintained for close to 8 hours a day. This completely changes the numbers. 288 kWh/day is $43/day at $0.15/kWh, and, allowing for (say) 200 days a year of effective production at this rate, an ROI of anywhere from $8000/year to as much as $12000. That would amortize a $100,000 installation cost in a decade, allowing for the cost of the money, and yield profits thereafter. That is actually pretty competitive with passive solar, which also has an amortization time of around a decade or a bit more for consumers, although power companies probably beat that pretty substantially with their improved economies of scale. If the other "benefits" of the water cooled system (nice trick, turning a waste heat liability almost anywhere into an "asset") add value, amortization is correspondingly shortened. Forests of these things in North Africa bordering the Mediterranean, for example, could conceivably power rapid economic development of the region while simultaneously watering the Sahara and conceivably actually altering its climate with progressive anti-desertification, while paying for themselves and even yielding a healthy long term ROI.
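The payback arithmetic in one place, using only the numbers assumed in this comment (the price per kWh, the effective days per year, and the $100K installed cost are all guesses from the paragraph above):

# Amortization sketch from the assumed numbers above.
avg_power_kw = 12.0                      # claimed output, read as a 24 h average
kwh_per_day = avg_power_kw * 24          # 288 kWh/day
price_per_kwh = 0.15                     # $/kWh, assumed retail rate
revenue_per_day = kwh_per_day * price_per_kwh   # ~$43/day
install_cost = 100_000                   # assumed installed cost

for days_per_year in (200, 300):
    annual = revenue_per_day * days_per_year
    print(f"{days_per_year} effective days/yr: ${annual:,.0f}/yr -> "
          f"payback in {install_cost / annual:.0f} years")

# 200 days/yr gives ~$8,600/yr and ~12 years; 300 days/yr gives ~$13,000/yr
# and ~8 years -- i.e., "amortize in about a decade", as estimated above.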

If they really mean 12 kW peak, then this is of course ridiculous, but that can't be right, as ordinary passive PV could easily generate 120 kW peak, if not 240 kW peak with current technology, from the same 1200 m^2, and with tracking could accomplish an almost identical efficiency profile at 10 year amortization.

The top article was sufficiently messed up unit-wise that I'm guessing the author was simply clueless about this stuff. The missing number, of course, is the cost per installation. If it is less than $100K and the thing produces an average of 288 kWh/day, these could range from break-even with existing technology to very attractive, even without "water" or "cooling" or "heating" advantages (which could easily be liabilities in locations where water for cooling is itself expensive or ecologically restricted). If it is more, well, it's an instant non-starter until they get the price down. But I'm guessing one could build these things for $100K and make money -- the concrete itself is order of $10K to $20K (including the mirrors), say $10K for the PV collector and cooling, $10K for the electronics and tracking, $30K for labor and installation -- leaving $30K to $40K for profit at a 40% or so margin? And improvable with mass production? But I wouldn't be surprised if it were $250K. Or $80K.

The former is a complete waste of time. The latter makes them a game changer, at least for a few years before passive PV overtakes them and renders the entire energy "crisis" moot by dropping consumer rooftop amortization from around a decade (plus a decade of "profit") to less than seven years (and thirteen or more years of "profit"). We're already on the edge of where installing rooftop solar on all new construction houses and rolling the cost into the primary mortgage is a no-brainer: "Buy a house and never pay for electricity... (for the next 20 years)" for an extra 10 or 20 thousand in price on a $250K house. Inside a decade, I fully expect to see this happen without any prodding in all parts of the US with adequate annual insolation, just because it makes economic sense -- we need just one more factor of two reduction in the cost per watt of 20-year installed solar. For power companies, the amortization/ROI is largely already there in many parts of the US, and they are happily building solar farms wherever they can get cheap land near expensive electricity.

rgb

Comment Re:OK (Score 1) 268

In fact, my high end .22 pellet rifle can almost certainly penetrate a skull. It goes through 5/8" plywood with ease. Certainly at the thin spots -- through the eyes, the temple -- but I certainly wouldn't bet my life that it wouldn't make it through even the thicker parts. And it's more like a .22 short or long, not even an LR, in terms of muzzle velocity.

Comment Re:Black holes are real, we observe them all the t (Score 1) 356

Possibly. But if it is a 2*poodle*pi, it is probably disk shaped, possibly with delicately scalloped edges. A spherical poodle seems more likely to be a (4/3)*pi*poodle-cubed, and if nothing else, being cubed is very hard on poodles. Often they subsequently turn into e^{-poodle}, a decaying poodle or (if eaten) into ex-poodle-poo.

rgb

Comment Re:Headline slightly inaccurate (Score 1) 356

And then there is Susskind and the Black Hole War. In a sense, quantum mechanics has already shown that black holes in the classic sense do not exist. At least there is no entropic disconnect or loss of information.

But that is all theoretical stuff, and since we cannot really directly observe an event horizon, the best that we can say is that we can observe very distant objects that meet the mass criterion for having such a thing, if in fact the theories that predict them are correct. But an object with that same mass and no actual event horizon would, I think, look almost exactly the same from far away, as far as all that we can see -- radiation from infalling superheated particles -- is concerned. Do they pass an event horizon, or asymptotically approach an almost-event-horizon that never quite forms, without ever technically reaching it? We'd have to capture one and examine it up close as we shot things into it to -- maybe -- be able to tell the difference.

But it makes a huge difference to field theories, and to the effort to reconcile reversible, information-conserving quantum mechanics with irreversible, information-destroying singularities.

rgb

Comment Re:Black holes are real, we observe them all the t (Score 1) 356

As long as we don't have to add 4*poodle*pi, I'm happy. I doubt even a single poodle would taste very good in pi.

Other than that, I get

\Delta C = C' - C = 2\pi(R+\Delta R) - 2\pi R = 2\pi \Delta R

where \Delta R is one standard poodle, which is (curiously enough) 0.314159 meters when converted into metric. That is, a one-tenth-of-pi poodle.

rgb

Comment Re:Will it come with... (Score 4, Interesting) 37

I spent around a year with it on at least a few of the systems I use. But I have G2 hotwired to cycle windows, open xterms, switch desktops, and fully use its autohide bars which are already laid out with everything I need and little that I don't. I do most of my work in either a browser or xterms, but I have that work in many different subjects with several windows open per subject spread out over 6 desktops that are a keystroke away. G3's window switching mechanism when I used it was arcane and enormously slow in comparison.

The real problem is that while a fork was perhaps needed, they did it wrong. G2 was close to perfect for what it was designed to do -- if nothing else its flaws were all flaws we all had worked around, and it had/has (I'm still using it, personally) some really nice features. Forking off a tablet version of Gnome is just peachy, but it should have been a TABLET VERSION fork, not an abandonment of the mainstream, widely deployed G2 in favor of a tablet friendlier interface that was enormously clunky on a non-tablet desktop or even laptop.

Sometimes there is change because it is needed, sometimes there is change for the sake of change. I sadly think that G3 is ten parts of the latter for one part of the former. Change involves pain either way, but at least one can see some advantage to doing so.

What exactly are the advantages -- not the places where, yeah, with work and possibly more slowly, you can make it function, but actual advantages -- of G3? In particular, advantages that couldn't have been implemented just as easily as new features of G2, without necessarily breaking old features that were heavily in use?

I'm not seein' a lot of those. I can launch any application I want under G2 with a key combination, for every application I ever launch, and don't even use that feature any more for anything but xterms, because I use a lot of those to do work in. Window cycling and desktop switching, though, those I use all of the time. Miniapps and application bar launchers, I use those. I don't care about animation. I don't need finger-swipe screens. I don't need to have to work to find applications listed in a neat sorted order, or to have to change "views" to access certain features. I login (rarely), pop a single instance of firefox up, and from then on most of what I do is either browser based or xterm based, and I can pop an xterm up with Ctrl-Alt-P in far less than 1 second on the fly, then cycle up through a whole stack of windows with Ctrl-Shift-F to the one I want, then jump to desktop 6 (F6) to set up some music, then back to desktop 3 (F3) to work on something I'm writing, and then...

Maybe I can do all of this (and preserve the macros etc.) in Mate. I suppose I should give it a try, maybe in a VM or something. Heck, maybe I'll install a full Fedora VM to try it again -- I think I have the room. I used to use Fedora all the time before G3, but G3 was a serious show stopper.

rgb

Comment Will it come with... (Score 2) 37

Gnome 2 as an option (by whatever name), or only the insanity of windowing systems designed for finger-picking tablets forced upon keystroke-oriented users of actual computers doing real work in many windows on several desktops?

Otherwise, CentOS 6 may end up being the last release I ever use. G2 may or may not be perfect, but I've got it more comfortable than five-year-old denim jeans, and G3 sucked, continues to suck, and AFAICT will continue to suck, forever, amen.

rgb
