
Comment Re:Science Writers: Stop Causing Us Intellectual P (Score 1) 147

The real problem (or interesting thing about this if you don't like "problem") with this is scaling. 2.3^3 = 12.2. If this mystery planet is 2.3 times the size of Earth, one would expect it to have 12.2 (give or take a hair) times the mass of Earth, presuming that it has a similar core structure. It is almost half again more massive. This in turn suggests that the mantle is proportionally less of the total volume of the sphere, or rather, that it has a disproportionately larger core (nickel-iron core densities are 2-3 times the density of the mantle). At a guess, the core alone -- if it is nickel-iron as seems at least moderately reasonable -- is at least half again larger than the size of the Earth. Alternatively, its core could contain an admixture of much heavier/denser stuff -- tungsten, lead, gold -- and not be so disproportionate.
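
To make the scaling arithmetic explicit, here is a sketch in Python (the radius and the "half again" mass factor are taken from the reasoning above; nothing here is a measured value):

# Mass scales as radius cubed at fixed composition; a denser planet
# must therefore have a proportionally larger (or heavier) core.
R = 2.3                        # radius in Earth radii (from the article)
M_expected = R**3              # ~12.2 Earth masses at Earth-like density
M_observed = 1.5 * M_expected  # "almost half again more massive" (assumed factor)
print(f"expected: {M_expected:.1f} M_earth, observed: {M_observed:.1f} M_earth")
print(f"mean density relative to Earth: {M_observed / R**3:.2f}")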

rgb

Comment Re:This research should receive enormous funding. (Score 2) 202

Please excuse my absolute ignorance, but I was under the impression that a classical information channel was only required to transmit one of the entangled photons. If one of the entangled photons (or whatever it is that is entangled) was transported elsewhere (truck, fiber optics, what-not) the two entangled particles would still maintain the same state (spin etc.) and information could then be transmitted faster than light by changing the state of one and reading the state of the other.

Information cannot be transmitted faster than light as far as we know in standard physics today (barring extreme relativistic things like white or black holes and I doubt even those unless/until experiment verifies any claim that they can).

Quantum theory doesn't get around it. You cannot choose the direction in which to "collapse" or "change the state" of one of the two entangled spins, because the instant you measure it, it "collapses". You might now be able to predict the state of the other end of the channel, but the person there can't, because he doesn't know what you measured; if he measures up or down when he tries (again, supposedly "collapsing the wavefunction") he won't know what you measured at your end or (since the two spins are no longer entangled as soon as a measurement is made at either end) what you do to it subsequently.
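
A toy Monte Carlo makes the no-signaling point concrete. This is just standard singlet statistics (P(same outcome) = (1 - cos theta)/2 for measurement axes separated by angle theta), not anything from the article; the point is that Bob's local marginal stays 50/50 no matter what axis is chosen at Alice's end:

import math, random

def bob_marginal(theta, n=100000):
    """Fraction of trials where Bob measures 'up', given axis angle theta."""
    up = 0
    for _ in range(n):
        alice = random.choice([+1, -1])              # Alice's result: always 50/50
        same = random.random() < (1 - math.cos(theta)) / 2
        bob = alice if same else -alice              # enforce singlet correlations
        up += bob == +1
    return up / n

for theta in (0.0, math.pi / 4, math.pi / 2):
    print(f"theta = {theta:.2f} rad: P(Bob up) = {bob_marginal(theta):.3f}")  # ~0.500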

But the real problem (the "paradox" bit of EPR) is much worse than that. Suppose the two "entangled" electrons are separated by some distance D. Non-relativistic naive stupid quantum theory states that when one of the two electrons is measured, the wavefunction of the whole thing collapses. But suppose that D is nice and large -- in gedanken experiments we can make it a light year, why not? In the "rest frame of the Universe" (the frame in which the cosmic microwave background has on average no directional Doppler shift) experimenters on both ends simultaneously perform a measurement of the spin state of the two electrons. This (simultaneity) is a perfectly valid concept in any given frame, but it is not a frame invariant concept; neither is temporal ordering universally valid. But given a simultaneous measurement of the two spins, which measurement causes the wavefunction to collapse and determines the global final state, when the entropy of each measuring apparatus (which is responsible for the random phase shifts that supposedly break the entanglement; see the Nakajima-Zwanzig equation and the Generalized Master Equation) is supposedly completely separable and independent?

By making D nice and large, we have a further problem. I said that the measurements were simultaneous in "the rest frame" (and even gave you a prescription for determining what frame I mean), but that means that if we boost that coordinate frame along one direction or the other, we can make either measurement occur first! That is, suppose the spins are in a singlet spin state so that if one is measured up (along some axis) the other must be measured down. Suppose that in frame A, spin 1 interacts with its local measuring apparatus first and is filtered into spin down. This interaction with its local entropy pool -- exchanging information with it via strictly retarded e.g. electromagnetic interactions -- supposedly "superluminally", that is to say instantaneously in frame A, "causes" (whatever you want that word to mean) spin 2 in frame A to collapse into a non-entangled quantum state in which the probability of measuring its spin up in that frame some time later than the time of measurement in frame A is unity. In frame B, however, it is spin 2's measurement that is performed first, and as that electron interacts with its entropy pool you have a serious problem. If you follow any of the quantum approaches to measurement -- most of them random phase approximation or master equation projections that assume that the filter forces a final state on the basis of its local entropy and unknown/unspecified state -- the apparatus cannot independently conclude that the spin of this electron is down; the measurement will definitely be up, because in frame A the measurement of spin 1 has already happened. In no possible sense can the measurement of spin 2 in frame B in the up state "cause" spin 1 to be in a state that -- independent of the state of its measurement apparatus -- will definitely be measured as spin down. Otherwise you have (in frame A) to accept the truth of the statement that a future measurement of the state of spin 2 is what determines the outcome of the present measurement of the state of spin 1. Oooo, bad.
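
The frame-ordering claim is easy to check numerically with the Lorentz transformation t' = gamma (t - v x / c^2). A sketch with made-up numbers (D = 1 light-year, both measurements at t = 0 in the "rest frame", units with c = 1):

import math

def t_prime(t, x, v):
    """Time coordinate in a frame boosted by velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

D = 1.0  # separation in light-years
for v in (-0.5, 0.0, +0.5):  # boost along the line joining the spins
    t1 = t_prime(0.0, -D / 2, v)  # measurement of spin 1
    t2 = t_prime(0.0, +D / 2, v)  # measurement of spin 2
    order = "spin 1 first" if t1 < t2 else "spin 2 first" if t2 < t1 else "simultaneous"
    print(f"v = {v:+.1f}c: t1' = {t1:+.3f} y, t2' = {t2:+.3f} y ({order})")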

The problem, as you can see, is that relativity theory puts some very stringent limits on what we can possibly mean by the word "cause". They pretty much completely exclude any possible way that the statement "measuring spin 1 causes the 1-2 entangled wavefunction to collapse" can have frame-invariant meaning, and meaning that isn't inertial frame invariant in a relativistic universe isn't meaning at all -- it is meaningless. We can only conclude that the correlated outcomes of the measurements were not determined by the local entropy state of the measurement apparatus at the time of the measurements.

Fortunately, we have one more tool to help us understand the outcome. Physics is symmetric in time. Indeed, our insistence on using retarded vs advanced or stationary (Dirac) Green's functions to describe causal interactions is entirely due to our psychological/perceptual experience of an entropic arrow of time, where entropy is, strictly speaking, the log of the missing/lost/neglected information in any macroscopic description of a microscopically reversible problem. That's the reason the Generalized Master Equation approach is so enormously informative. It starts with the entire, microscopically reversible Universe, which is all in a definite quantum entangled state with nothing outside of it to cause it to "collapse". In this "God's Eye" description, there is just a Universal wavefunction or density operator for a few gazillion particles with completely determined phases, evolving completely reversibly in time, with zero entropy. One then takes some subsystem -- say, an innocent pair of unsuspecting electrons -- and forms the (in this case 4x4) submatrix describing their mutually coupled spin state. Note well that both spins are coupled to every other particle in the Universe at all times -- this submatrix is "identified", not really created or derived, within the larger universal density matrix, and things like rows and columns can be permuted to (without loss of generality) bring it to the upper left hand corner where it becomes the "system". The submatrix for everything else (not including coupling to the spins) is similarly identified.

The Nakajima-Zwanzig construction treats this second submatrix statistically because we cannot know or measure the general state of the Universe and have a hard enough time measuring/knowing the state of the 4x4 submatrix we've identified as an "entangled system". It projects the entirety of "everything else" into diagonal probabilities (by e.g. a random phase approximation, making the entropy of the rest of the Universe classical entropy) and then treats the interaction of these diagonal objects with the spins as being weak enough to be ignored, usually, except of course when it is not. It is not when e.g. the spins emit or absorb photons from the rest of the Universe (virtual or otherwise) while interacting with a measuring apparatus or the apparatus that prepared the spins. Because we cannot track the actual fully entangled phases of all of the interactions within this enormous submatrix and between it and the system, the best we can manage is this semiclassical interaction that takes entropy from "the bath" (everything else) and bleeds it statistically into "the system".
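
A miniature version of this "identify the submatrix" step, in Python with numpy: a two-spin singlet stands in for the globally pure "Universe", and a partial trace stands in for ignoring everything else. This is just textbook reduced-density-matrix bookkeeping, not the NZ machinery itself:

import numpy as np

# Pure singlet state (|01> - |10>)/sqrt(2): globally zero entropy.
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)
rho = np.outer(psi, psi)                       # 4x4 pure-state density matrix

# "Identify" spin 1 as the system: trace out spin 2.
rho_sys = np.einsum("ikjk->ij", rho.reshape(2, 2, 2, 2))
print(rho_sys)                                 # [[0.5 0.] [0. 0.5]]: maximally mixed
print("purity Tr(rho_sys^2) =", np.trace(rho_sys @ rho_sys))   # 0.5, not 1

The pure global state has a maximally mixed, entropy-bearing subsystem without any collapse having occurred anywhere, which is the point.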

In this picture (which should again be geometrically relativistic) there was never any question as to the outcome of the "measurement" of the entangled spin state by the remotely separated apparatuses, and furthermore, while the NZ equation is not reversible, we can fully appreciate the fact that if we time-reverse the actual density matrix it approximates, the two electrons will leap out of the measuring apparatus, propagate backwards in time, and form the original supposedly quantum entangled state -- because it never left it. It was/is/will be entangled with every particle that makes up the measuring apparatus that would eventually "collapse" its wavefunction over the entire span of time.

Note that in this description there is no such thing as wavefunction collapse, not really. That whole idea is neither microreversible nor frame invariant. It describes the classical process of measurement of a quantum object, where the measuring apparatus is not treated either relativistically correctly or as a fully coupled quantum system in a collectively definite state in its own right. It isn't surprising that it leads to paradoxes and hence silly statements that don't really describe what is going on.

This is a more detailed discussion of the very apropos comment above that similarly resolves Schrodinger's Cat -- the cat cannot be in a quantum superposition of alive and dead because no particle in the cat or in the quantum decaying nucleus that triggers the infernal device is ever isolated from every other particle in the Universe. The cat gives off thermal radiation while it is alive that exchanges information and entropy with the walls of the death chamber, which interact thermally with the outside. The instant the cat dies, there is a retarded propagation of the altered trajectories of all of its particles communicated to the outside Universe of coupled particles, which were in turn communicating/interacting with all of the particles that make up "the cat" and with the nucleus itself and with the detector and with the poisoning device before, during, and after all changes. The changes never occur in the "isolation" we approximate and imagine to simplify the problem.

Hope this helps.

rgb

Comment Re:Nice try cloud guys (Score 1) 339

Although I don't want to get into the specific definition of "cloud" vs "cluster" vs "virtualized service server" etc. -- with the understanding that perhaps it is a definition in flux along with the underlying supporting software and virtualization layers and hence will be hard to pin down and hence easy to argue fruitlessly about -- I agree with all of this. A major point of certain kinds of clustering software, from Condor on down, has been maintaining a high duty cycle on otherwise fallow resources that you've paid for already, that have to be plugged in all the time to be available for critical work anyway, that burn some (usually substantial) fraction of their load energy in idle mode waiting for work, and that depreciate and eventually are phased out by e.g. Moore's Law after 3-5 years in many cases even though they aren't broken and are perfectly capable of doing work. Software like Condor lets even desktops be part of a local "cloud" that can be running background jobs that don't really interfere with interactive response time much but that keep the duty cycle of the hardware very close to 100% instead of the 5-8% a mostly-idle desktop might manage (while still burning half or even 3/4 of the energy it burns when loaded).

So it really isn't all about carbon (except insofar as energy (carbon based or not) costs money). It's about money, and some of the money is linked to the use of carbon. High duty cycle utilization of resources is economically much more efficient. That's why businesses like to use it. It's often cheaper to scavenge free cycles from resources you already have than it is to build dedicated resources that might end up sitting idle much of the time.
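
A toy cost model illustrates the duty-cycle economics (every number below is a hypothetical round figure, not from any comment or vendor):

LOAD_W, IDLE_W = 150.0, 90.0   # draw when busy vs. idle ("half or more" of loaded)
CAPEX = 300.0                  # annualized hardware cost, $
KWH = 0.10                     # electricity price, $/kWh
HOURS = 8760.0                 # hours per year

def dollars_per_busy_hour(duty):
    energy = (duty * LOAD_W + (1 - duty) * IDLE_W) / 1000.0 * HOURS * KWH
    return (CAPEX + energy) / (duty * HOURS)

for duty in (0.06, 0.95):      # idle-mostly desktop vs. Condor-scavenged
    print(f"duty cycle {duty:.0%}: ${dollars_per_busy_hour(duty):.2f}/busy CPU-hour")

With these placeholder numbers the scavenged machine delivers CPU-hours more than an order of magnitude cheaper, which is the whole argument in one line.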

The catch, however, is systems management. In many cases, the biggest single cost of setting up ANY sort of distributed computing environment is human. A single sysadmin capable of setting up serious clustering and managing virtualized resources could easily cost six figures per year, and that could easily exceed the cost of the resources themselves (including the energy cost) for a small to medium sized company. All too often, the systems management that is available is of questionable competence as well, which further complicates things. Virtualization in the cloud can at least help address some of these issues too, as one shares high end systems management people and high end software resources across a large body of users and hence gets much better economies of scale, IF you can afford enough competence locally to get your tasks out there into the cloud in the first place and still satisfy corporate rules for due diligence, data integrity and security, and so on.

However, be aware that for all of the advantages of distributed computing, there are forces, market and otherwise, that push against it. I buy a license for some piece of mission critical (say accounting) software, and that license usually restricts it to run on a single machine. If I put it on a virtual machine and run it on many pieces of hardware (but on only one machine at a time) I'm probably violating the letter of the license, and the company that sold the software has at least some incentive to hold me to the letter so they can sell me a license for every piece of hardware I might end up running a virtualized instance upon. Correctly licensing stuff one plans to run "in the cloud", then, is a bit of a nightmare -- if you care about that sort of thing. If one is a business, this can be a real (due diligence sort of) issue.

Which brings us full circle back to the top article. There are ever so many things that would be vastly more efficient "in the cloud" or just "run from the internet and distributed servers" as a more general version of the same thing. Netflix, sure, but how about paper newspapers? Every day, they require literally tons of paper per locality, cubic meters of ink, enough electricity to power a small manufactory, and transportation fuel for the workers who cut the trees, for hauling the trees to the paper mill, for carrying the finished paper to the newspaper plants, and finally for delivering the newspaper to the houses that receive it -- plus, as the final insult, the fuel needed to pick up the mostly unread newsprint and cart it off to "recycle" (which may save energy compared to cutting trees, but costs energy compared to not having newspapers at all).

Compare that to the marginal cost of storing an image of the same informational content on a server with sufficient capacity and distributing that replicated image to a household. The newspaper costs order of a dollar a day to deliver. The image of the newspaper costs such a thin fraction of a single cent to deliver that the only reason to charge for an online paper or news service at all is to pay the actual reporters and editors that assemble the image.
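
For a rough sense of the gap, with made-up but plausible round numbers (nothing here is measured):

PAPER = 1.00          # "order of a dollar a day" for the physical paper
EDITION_MB = 10.0     # assumed size of a full digital edition
PER_GB = 0.05         # assumed bulk bandwidth price, $/GB
digital = EDITION_MB / 1024.0 * PER_GB
print(f"digital delivery ~ ${digital:.5f}/day, paper/digital ~ {PAPER / digital:,.0f}x")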

Compare the cost of delivering paper mail to email. Compare the cost of driving out to "shop" vs shopping online. The world hasn't even begun to realize the full economic benefits of the ongoing informational/communication revolution. And sure, some of the benefit can be measured in terms of "saving fuel/energy resources" (including ones based on carbon, but even if the electricity I use, or that is used in steps that are streamlined or eliminated, comes from a nuclear power plant, it costs money just the same).

Personally, I don't worry as much about "carbon" utilization reduction as I do about poverty and improved standards of living worldwide (which I think is by far the more important priority) but network based efficiencies accomplish both nicely.

rgb

Comment I'm not sure I understand... (Score 1) 321

...since an e-book reader is software, not hardware. I read my (many) Kindle books on my old-gen Kindle, on my wife's iPad, on my own Galaxy Tab 2, on my wife's rooted Nook, on my Android cell phone (but not usually; the print is too small), on my computer(s). In other words, it is pretty easy to get a free Kindle book reader for many (maybe even most) platforms and hook it into your library. I'm not sure what "advantages" a Kindle per se has over any of these platforms, either for reading books or for playing Android games or for doing work of various sorts. My Galaxy is pretty awesome for the purposes I put it to -- reading books (mostly Kindle books, sometimes Google books, not infrequently free epub/mobi books), playing Sudoku, playing any of the other dozen or so well-done games I've invested in so far (some for free, some cheap, some "expensive" at $7 or $9 each), rarely browsing the web, doing email, etc. Rarely, because I prefer to attach a bluetooth keyboard if I'm going to do keyboard-based work, but I have other computers that are better suited for most of that.

So I don't get the "Kindle Killer" comment. You mean something better than the Kindle as an Android platform? Lots of choices -- the Samsung Galaxy is arguably better in nearly any dimension, for example, and many people have pointed out that the rooted Nook is a nice cheap choice (and would be a "good" choice if Barnes and Noble got their head out of their rear and didn't force one to root it to be able to install arbitrary Android apps from the Android store). And then there is the iPad -- which is a lovely little piece of hardware whatever you think of drinking the Apple-ade. There is the Surface -- personally I won't get one, both because I still have a bit of an Evil Empire problem with Microsoft and because it is expensive as all hell compared to anything but a full-feature iPad. I've looked at a bunch of the other Android tabs in the stores, and none of them really suck, although some are arguably better than others. Many are cheaper than the Kindle and don't have the Kindle's anti-Google thing going (although the Kindle is reportedly better than the Nook in that regard, but perhaps not by much).

If you mean a Kindle SOFTWARE killer -- then I truly don't understand your comment. A better version of the existing Kindle book reader? A third party reader (unlikely, given the proprietary stuff)?

A hammer?

rgb

Comment Re:Actually, a really nice article... (Score 1) 80

Unless I've missed something crucial here. But perhaps we'll have a breakthrough in accelerator technology that will let us reach these levels at some point. If we hit the resonance, the scattering rate will be of the order 1e-31, 13 orders of magnitude higher than what I used in my back-of-the-envelope calculation. But we aren't likely to hit those energies soon, I think.

Oops (blush). I haven't done relativistic kinematics for a very long while either, but I forgot about momentum conservation altogether. And here I am teaching undergrads about inelastic collisions...

Well darn. It looks like it could borderline work, in the sense of producing events every month or even more often if one could get TeV electrons at beam currents of order 1 ampere, but you're right, getting to the PeV resonance will be, err, difficult. OK, so probably not worth rebuilding SLAC for.
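
For reference, the PeV figure is presumably the standard Glashow-resonance condition (an electron antineutrino on an electron at rest making an on-shell W-); a quick check of that number, using only textbook masses:

M_E = 0.511e6   # electron mass, eV
M_W = 80.4e9    # W- mass, eV
E_res = M_W**2 / (2 * M_E)   # from s = m_e^2 + 2 m_e E_nu ~ m_W^2
print(f"resonant energy: {E_res:.2e} eV ~ {E_res / 1e15:.1f} PeV")   # ~6.3 PeV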

As for muon catalyzed fusion, Larry's last idea on the subject was politically incorrect but intriguing. He suggested using it as a second-stage energy boost, gleaning muons from fission reactors. But even then (20+ years ago) fission reactors were politically incorrect and there wasn't a lot of enthusiasm for the idea. I never worked out the math (I assume he did, somewhere) to see if there were enough muons per fission and enough fusions per muon to get a significant gain in net nuclear fuel yield, though.

Comment Re:Actually, a really nice article... (Score 1) 80

Interesting article. Things really do get complicated at those energy scales...:-)

They're using Cerenkov detectors, though, for very very high energy events. I wonder how sensitive they are to muons with much lower energies. The scales on the figures in the article, for example, don't actually go down to 100 GeV -- the left hand edge (log scale) appears to be 1 TeV. But the cross sections are indeed pretty small and it is difficult to get rates to rise above the background cosmic ray muon flux (which I actually measured, once upon a time back in the 70's before I became a theorist:-).

I suppose the only unresolved question would be whether or not there is a narrow but strong electron-antineutrino resonance around the rest mass of the W-. The collision volume (even summed over the length of the beam column in e.g. a SLAC-like pipe) is quite small, but SLAC is apparently capable of generating 1/2 an ampere of beam current. That's basically 3x10^18 electrons/second, which knocks five orders of magnitude off your estimate of 1 event per 300000 years to one per 3 years. That seems close enough that IF there were any sort of actual resonance, it might knock another order of magnitude off and get one to at least several events per year, maybe more. Nobel prizes have been won with little more...:-)
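
The electron-throughput figure is just beam charge over the electron charge, checked below:

E_CHARGE = 1.602e-19   # elementary charge, C
current = 0.5          # A, SLAC-like beam
print(f"{current / E_CHARGE:.1e} electrons/second")   # ~3.1e18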

Is the seat-of-the-pants estimate sufficient to propose doubling SLAC's peak energy and current and designing a custom bending magnet at the end of a long, otherwise empty beam pipe to resolve resonant muons from background? Maybe not, but it might be part of a proposal that included other experiments (including a revival of its experiments to search for the Higgs, for example) that might benefit from a substantially beefed up beam.

Of course the REALLY cool way to do this would be to do it either on the moon or at one of the Lagrange points -- someplace where 100 km beams don't require either Earth-expensive real estate or tunnels or pipes, and where solar energy could provide a gigawatt of "free" power once you built the facility. Really cool and awesomely expensive, but imagine using the polar moon to build a ring and the equatorial moon to build a "linear" (great circle curvature) accelerator. Even a theorist can appreciate that...

Imagine doing an experiment to scatter electrons off of thermal big bang photons, for example, Doppler shifted up to GeV scales. Which is similarly difficult, actually.

I once upon a time fantasized about creating some sort of "wiggler" in the electroweak interaction that could resonantly convert electrons into muons, along the lines of the way FELs create a "virtual photon" in the electron rest frame. If one could ever make this sort of thing efficient enough, one could revisit the issue of muon-catalyzed fusion and maybe do an end run around thermal confinement problems. My Ph.D. advisor (Larry Biedenharn) spent a decade or so looking hard at muon catalyzed fusion, so I learned a lot about it then even though my research was in completely different stuff. The primary sticking point was the huge cost per muon of creating muons via e.g. nuclear cross sections and pion decay. If one could ever short circuit that, the issue would be worth revisiting even with the other problems, just for the pleasure of the physics...

rgb

Comment Re:Actually, a really nice article... (Score 1) 80

Thanks, you are probably right -- as I said, I wasn't doing the math, but was just thinking that an accelerated beam IS a rapidly moving detector. I was also assuming that it was the lack of collision frame energy in the huge neutrino detectors that was the limiting factor in detecting thermal neutrinos -- to create a W boson requires order of 100 GeV, and of course this just isn't available (outside of Heisenberg uncertainty and extremely suppressed virtual processes) in collisions between mutually thermal atoms and neutrinos. But creating a 100 GeV electron beam has actually been done (LEP), specifically to enable the creation of the heavy vector bosons in particle/antiparticle collisions (LEP peaked out around 209 GeV in the center of mass). I would have expected the thermal neutrino cross-section to take a dramatic uptick once the frame energy was sufficient to actually enable the direct process -- even in the huge detectors in use, a major problem is that the neutrinos coming in don't have ~100 GeV in the collision frame, right?

I think LEP was shut down and its tunnel re-purposed into the LHC, and I'm guessing the LHC can't be used to accelerate a lepton beam without basically rebuilding it, so there may be no machine currently in existence that could do this anyway. (I should have thought more carefully about the collision frame issue and the W rest mass when I suggested SLAC or the FEL could do this -- they are still well below the threshold for producing W's from thermal neutrinos; SLAC comes closest at around 50 GeV.) But the cross section issue probably IS lack of frame energy as opposed to probability of encounter -- there are going to be plenty of close encounters between electrons and neutrinos in a long run even with a comparatively low neutrino density, but one has to have at least enough energy to create a W for at least enough time to make it a virtual channel for the final muon and antineutrino. I'm guessing that we don't have measurements for the cross section at these frame energies (unless there is data from LEP somewhere), but the possibility of a resonance or cross-section spike once the W threshold is passed is hardly unreasonable.
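
A quick invariant-mass check shows how far short even a LEP-scale beam falls against truly thermal neutrinos (assumed inputs: head-on collision, relic-neutrino energy scale ~ kT of the 1.95 K background, ultrarelativistic approximation s ~ 4 E1 E2):

import math

E_e = 100e9     # electron beam energy, eV (LEP-scale)
E_nu = 1.7e-4   # thermal relic-neutrino energy scale, eV (assumed)
sqrt_s = math.sqrt(4 * E_e * E_nu)
print(f"sqrt(s) ~ {sqrt_s:.1e} eV, vs m_W ~ 8.0e10 eV")   # keV scale: far below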

Comment Re:It didn't take long to leave our mark in the se (Score 2) 136

I gotta say, are we talking about the same Parthenon? The one built at the top of a hill overlooking Athens as pretty nearly the sole structure on the hilltop?

It doesn't precisely show the elevations, but:

https://maps.google.com/maps?o...

is one view, or perhaps this will do better:

http://www.greatbuildings.com/...

As you can see, it is pretty much on top of a mesa. So I'm not sure where your "slight dip in the terrain" could possibly be.

I only point this out not because your argument is implausible in general, but because your specific example is one of my favorite places on Earth, and although I've only been fortunate enough to visit it in person twice in my lifetime (so far), I remember the walk up from Athens proper quite well, including stopping in some of the many small taverns along the trek for octopus and retsina.

rgb

Comment Actually, a really nice article... (Score 2) 80

That was really lovely, and thank you for posting it.

You assert that one problem with detection is the difficulty of accelerating entire neutrino detectors to GeV energy scales. I'm not sure that I agree. Muons, as we know, decay into electrons and two kinds of neutrino/antineutrino. Electrons moving at GeV scales have more than enough energy to be transformed into muons in the inverse reaction -- if they happen to hit an electron antineutrino -- or more properly, they have a chance to be transformed into a W- boson, which can then decay into several things -- lepton/neutrino pairs or quark pairs -- one of which produces muons.

Muons are easy to detect. Electrons with "suddenly" shifted energy are also easy to detect (another possible outcome). Finally, quark-antiquark "jets" are easy to detect.

At the densities of thermal neutrinos asserted, it seems reasonably probable (without, admittedly, doing the computation) that GeV scale electrons will encounter free neutrinos and undergo the inverse reaction and produce muons along a freely moving beam track, and indeed that places like SLAC and the Duke FEL would be producing a small but detectable flux of muons all along the straight legs of their beams that would then either exit sideways (where they could be detected lots of ways) or continue along the collision frame of reference and be moderately separable at the next bending magnet. Yes, there would likely be some auxiliary production near the actual beam from electron collisions with beam pipe metal outside of the beam envelope, but one would expect to be able to put a vacuum pipe along the frame of reference of the collision, a kilometer long or thereabouts, PAST a bending magnet (at the right angle) at the end of a long straight leg and run it into a detector, which would then detect all/mostly muons produced by neutrino scattering. Or so it seems.
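
For what it's worth, here is the computation the paragraph skips, with loudly placeholder inputs (the low-energy cross section in particular is a pure guess; the relic density of ~336/cm^3 is the standard all-species figure):

ELECTRONS_PER_S = 3e18   # ~0.5 A beam
N_NU = 336.0             # relic neutrinos per cm^3, all species
SIGMA = 1e-44            # assumed cross section, cm^2 (placeholder guess)
PATH = 1e5               # 1 km straight leg, in cm
rate = ELECTRONS_PER_S * N_NU * SIGMA * PATH
print(f"{rate:.1e} events/s ~ {rate * 3.15e7:.1e} events/year")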

Is this wrong?

rgb

Comment Re:Let me know when you win that war on drugs? (Score 4, Informative) 319

...contradictory, inconclusive, and (as even Dr. Sanjay Gupta of CNN finally came to realize and stated in public when he changed his stance on pot) the result of decades of research funded for the sole purpose of finding something wrong with pot. If 96% or more of all research grants are titled "Investigating Marijuana as a risk factor for bipolar disorder and schizophrenia", and the only way to have a grant renewed is to find some positive (that is, negative) effect, it is hardly surprising that 96% of all research results turn up something negative about pot. What is really interesting is that in spite of subjecting it to a microscope far more demanding than we have ever applied to any other substance under similar circumstances, so very little has been double-blind confirmed as a "risk" to pot smokers. It "interferes with" (but certainly does not "prevent") the formation of short term memory -- for the duration of the time you are high, with no long-term effects. It is indeed used as self-medication for lots of different kinds of dysphoria, and can, by preventing or ameliorating dysphoria, keep people from making beneficial life changes. Sometimes one does need to take action instead of endure when life sucks. Other times, it's gonna suck regardless of what you do, and then sure, pot can help make it suck less.

The other really interesting thing about pot is the number of myths straight out of the War on Drugs that are still being perpetuated by people who heard some pithy thing about it twenty or thirty years ago and never thought to doubt the veracity of their government or question its interest in the whole matter.

http://www.drfranklucido.com/p...

http://medicalmarijuana.procon...

The government itself is pretty schizophrenic on the issue. There are several places one can get to (compilations of) original papers on pot, and (allowing for the confirmation bias that is rampant in medical science these days, especially when reporting anecdotal "evidence" rather than double blind, placebo controlled studies) it really is pretty benign compared to ever so many other things that are quite legal. The same cop who arrests you, the judge who sits on your case, and the lawyer who gets you off can easily be functional alcoholics. I'm guessing alcohol and bipolar disorder or schizophrenia don't mix real well either -- but that is never mentioned or discussed, for some reason...

Comment Re:Let me know when you win that war on drugs? (Score 4, Interesting) 319

When I was much (decades) younger (and still smoked) I wrote code all of the time when high. In fact, it was one of life's pleasures -- the concentration focus was fantastic. And yes, the code was very complex, was thousands of lines long (when finished) and ran perfectly when I was done as far as I was ever able to tell.

With that said, not everybody could do what I did and work effectively high. But I knew a fair number who could and did, and of course I knew a few who were useless when high. Of course, I knew a fair number of people who were useless coders stone cold straight. This isn't terribly surprising -- the world is full of functional alcoholics too. Pot is different from alcohol, though, in so many ways. Alcohol eventually puts you into a stupor, then kills you. Pot at worst puts you to sleep and has no known fatal dose. It is considerably safer than aspirin or caffeine -- the former you can easily overdose on, or it can kill you outright with e.g. Reye's syndrome. Caffeine is lethal at doses somewhere between 2 and 20 grams (depending on your metabolism and weight) -- not easy to ingest in coffee, easy to ingest if you put a couple of spoonfuls of legal, over-the-counter caffeine powder onto your morning post toasties. Cigarettes, don't get me started -- a single cigarette can kill a small child if accidentally ingested, and nicotine makes a dandy insecticide even when highly diluted.

In addition to being amazingly safe compared to almost anything humans consume outside of broccoli, pot is basically a non-prescription (openly illegal in many states) antidepressant. Lots of people who smoke (or drink, for that matter) are self-medicating or compensating for the fact that their lives suck for reasons utterly beyond their control. Is it a good medicine compared to SSRIs or other prescription medicines? I don't know. I do know that drug companies don't want you to have the choice. I do know from bitter experience that the law enforcement industry from police through the lawyers and the courts make a living from pot. I know that the biggest single risk for pot smokers isn't anything associated with pot itself -- it is being arrested, charged, jailed, forced to pay thousands of dollars for bail, forced to pay thousands more for lawyers, forced to pay fines and court costs, forced to endure probation, forced to pay for "rehabilitation". It is being fired, not being hired, not getting into college not because of your grades or intelligence (both of which can be just fine) but because of your "police record". And the penalties scale up enormously for the poor and stupid, who often smoke weed because life as a janitor or store greeter or one of the dudes who has to put on a costume and wave at passing cars to get them to file their taxes or patronize a failing store sucks, and weed makes the menial and mindless jobs you can get a bit more tolerable without ruining your liver.

If pot has a flaw as a recreational substance, it is that it can, by making a shitty situation tolerable, act as an ambition suck. Hamlet on pot:

To be, or not to be: that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing end them? Or just get high
And suffer no more; and by suffer to say we end
The head-ache of the thousand natural schlocks
That life is heir to, 'tis a consummation
Devoutly to be wish'd. So don't bogart that joint,
My friend, pass it on over to me...

Sometimes, though, it really is better to take arms against the sea of troubles and by opposing end them. Pot can make it a bit too easy to suffer the slings and arrows and end up trapped in a life that consists of little else. Or not. Or it can do so for a while, and then people grow up. Ultimately, it ain't nobody's business but your own, and it certainly isn't a positive predictor of failure -- or success. Like anything, for some people (especially some of the mentally ill) it is probably a serious mistake. For others it is harmless. For still others, it is probably beneficial. Given its very low risks, its moderate benefits, and its popularity, we should legalize it. We should never have made it illegal. Then police can go back to dealing with actual crimes like rape, robbery and murder, many lawyers can get a real job, the governments (state and federal) can set a few hundred thousand people free from jail, the courts can decongest to where cases can be heard in days instead of months, neighborhood drug lords can join the lawyers in looking for actual work, drug companies can deal with the horrors of self-medication without a prescription or profit to them, and society -- will never notice, otherwise. Somewhere between 10% and 20% of the population at least occasionally smokes pot (10.8% in a 2009 poll admitted to occasional use, and this likely under-reports usage given that it is illegal and given that several states have legalized it in the intervening years).

rgb

Comment Re:Discover is the wrong word (Score 2) 223

Well, to be honest, they've asked for funding to do the obvious experiment to test it. It's not particularly clever, only expensive. And, as has been pointed out repeatedly above, they haven't "discovered" this, it is part of the standard lexicon of QED and has been for maybe 60-70 years.

A clever way of testing it would be to use e.g. a free electron laser like the one we already have at Duke and shoot the laser beam into a "wiggler" -- a region of alternating crossed fields -- well downstream of the circulating ring. No need for two lasers, no need for massive new expense. In the frame of the photons, the region of alternating crossed EM fields looks like a photon heading the other way. You can make the wiggler field strength quite large and put a bending magnet just past it with detectors and look for positron-electron coincidences. This would actually have lots of advantages. Cheap. It uses existing hardware instead of building (much) new stuff. The pairs produced would not be in the rest frame of the lab (but in the "virtual" rest frame of the collision) and would only have to travel a short distance before encountering a field that could separate them before they annihilate. And when one was done, one could take the whole thing apart, go back to using the FEL for its many other purposes, say "Gee, guess quantum theory works after all", and go about one's business. Unless of course there are surprises, which seems to me to be class A unlikely but which is barely, barely possible and hence worth perhaps a MODEST expense to verify.
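
As a sanity check on the kinematics, here is the standard two-photon pair-production threshold with both "photon" energies left as free parameters (the specific values below are assumptions for illustration, not numbers for the Duke FEL or its wiggler):

M_E = 0.511e6   # electron mass, eV

def min_partner_energy(E1):
    """Head-on Breit-Wheeler threshold: 4 * E1 * E2 >= (2 * m_e)^2."""
    return (2 * M_E) ** 2 / (4 * E1)

for E1 in (1e8, 1e9, 1e10):   # assumed gamma-beam energies, eV
    print(f"E1 = {E1:.0e} eV needs a partner above {min_partner_energy(E1):.1e} eV")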

rgb

Comment Re:Can you make condoms with it? (Score 5, Funny) 90

Re: IBM Memo 92148 (Anonymous Coward/Slashdot) Can you make condoms with it?

Hmm, intriguing idea. Almost certainly, but out of which polymer? A rigid "Titan" condom could certainly cover more than one situation (and the idea had considerable appeal when we ran it up the flagpole among our senior execs to see who saluted it and who turned away blushing) but the boys here in R&D said there might be trouble fitting it into a wallet. However, the marketing boys said that we wouldn't even have to change the name -- Titan Condoms (made by IBM!) would sell like hot cakes even if one did have to keep them standing on a shelf or nightstand next to the bed. Besides, if they don't sell to the general population, a bit of retooling and they'll sell gangbusters as self-propelled grenade casings (especially in the larger sizes) -- although legal says that calling them "Titan missiles" might infringe some trademark or other.

R&D was, however, quite excited at the prospect of a brush-on "Hydro" condom -- one would never need to take it off. We had a number of volunteers for a pilot project, and it turns out that in fact, one might never be able to take it off. Apparently "Hydro" is also being considered as a nearly indestructible super glue because of all of its dangling, um, "bonds" but this was being investigated by another team. There were, unfortunately, a few drawbacks pointed out by those party-poopers over in legal and their paid shills from the medical profession, so the idea was tabled for the time being, which basically means that we're still going ahead with the project but looking for just the right test population -- males on dialysis or willing to undergo a critical surgical alteration of the liquid waste elimination pathway, for example. However, we're a lot more interested in large federal or state contracts; this is (for example) an intriguing idea for our prison systems, if we can get it past Engineering.

Keep up the good work, AC, and we are gratified that you are making this valuable suggestion anonymously, as it saves us from the tedious process of running you down and making you sign release forms or having you assassinated so that we can cleanly patent the idea as our own. Now you'll have to excuse me -- I have to go empty my cloaca.

Irving Bentabit
IBM (R&D)

Comment Re:Deniers are too stupid to read -- prove me wron (Score 1) 661

And 1) is both true empirically (climate models are failing to accurately predict climate) and openly acknowledged to be true by, among others, the IPCC -- openly in AR3, relegated to selected paragraphs deep in the document in AR5, but there nonetheless.

2) is still an open question -- or rather, there are definitely feedbacks that mute the severity, but it is also claimed that there are positive feedbacks and it is not yet clear which wins. CO_2 alone would produce between 1 and 1.5 C of warming by 600 ppm (some 0.5 of which we have already realized). Hansen believed (and probably still believes) that water vapor feedback would at least double it, more likely triple it, to between 3 and 5 C. Empirical evidence has gradually forced nearly all of the climate science community to cut back their "best estimates" (based on a statistically meaningless mean of the predictions of the broken climate models, see 1) above) of total climate sensitivity to roughly 2.7 C in AR5, and it is currently in free fall in the literature, increasingly constrained by the lack of tropospheric warming, "the hiatus" (as it is named and discussed in AR5) and Bayes' theorem. Currently the argument is whether or not it will end up as high as 2.3 C, with papers appearing arguing that it will end up being in more or less neutral net feedback territory -- 1 to 1.5 C -- and others covering the range of 1.7 to 2.3 C. Basically this is a scientific crap shoot and has been from the 1980s on (partly because we are still learning about clouds, partly because the "physics" that the models supposedly are based on begins by averaging phenomena in a nonlinear Navier-Stokes equation from its Kolmogorov scale of around 1 mm to the cell size of around 100 km -- with adjustable parameters galore -- as if it does not matter), so you pays your money, you places your bet. Net negative feedback hasn't even been ruled out by the data, and the longer the hiatus continues the more likely it is that the feedback is indeed net negative.
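
The no-feedback number quoted above follows from the standard logarithmic forcing fit (the Myhre et al. 5.35 W/m^2 coefficient) together with a Planck-only response of roughly 0.3 K per W/m^2 -- both standard, though the 0.3 is the assumption doing the work:

import math

dF = 5.35 * math.log(600.0 / 280.0)   # forcing for 280 -> 600 ppm, W/m^2
LAMBDA = 0.3                          # no-feedback (Planck) response, K/(W/m^2)
print(f"forcing {dF:.2f} W/m^2 -> {dF * LAMBDA:.2f} C with no feedbacks")   # ~1.2 C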

4-6) are what they are. Sea level rise is almost invariably given as the primary cause of catastrophic damage, yet it has also proven to be the one place where there is absolutely no sign of catastrophe. SLR rates have changed little for 140 years. It is also remarkably difficult to predict the rates at which land ice will melt, given the problems with 1) and 2) -- Hansen (as the primary author of the entire claim for future catastrophe) goes on TED Talks and with a straight face says that he expects 5 meters of SLR. Any other sane climate scientist I've talked to is now talking about anywhere from 30 cm to as much as a meter. The data itself suggests that we'd be unlucky to make as much as 30 cm by 2100. Public media are full of egregious claims of ongoing disaster (melting Himalayan glaciers, increasing tornado or hurricane damage), often and sadly backed by public figures in the scientific community who should know better, as there is no evidence of any of the above. This quite correctly reduces the credibility of the other claims of these individuals -- if one went back and looked at the predictions that Hansen in particular has made in fully public view ex officio as head of GISS and how badly they've failed, it would be difficult to see how he has any credibility left. Beyond that, many -- although not all -- of the claims for damage due to "climate change" (something that happens all of the time naturally and hence is impossible to attribute or refute) are marginal results that are not statistically significant. And estimates for the damage resulting or likely to result from climate change often fail to take into account benefits accruing from climate change, or the simple fact that nature has already accommodated the change given the smear of temperatures and climate ranges available between the equator and the poles.

All of this greatly complicates the discussion of costs and benefits. Not everything about global warming is bad. Indeed, the global warming that has been ongoing since the end of the Little Ice Age has been almost entirely beneficial. If SLR and climate sensitivity are admitted to be at best poorly informed guesses based on models that are in terrible agreement with the data (e.g. HADCRUT4) from 1850 to the present everywhere but the reference period (the training set for the model, which does not count), as is clearly visible in figure 9.8a of AR5, it is by no means clear that "the science is settled" -- whatever that means when the "settled science" is based entirely on trying to solve what is arguably the most difficult computational problem ever attempted by humans with completely inadequate methodology and tools -- or what the most prudent course of action is for humans to take in the meantime.

Given the enormity of the investment required to do anything at all significant about CO_2 concentration -- where all of the measures taken by all of the world over the last 20 years, in spite of their enormous price tag, have done almost nothing -- the most prudent course is probably to wait and see before dumping another half-trillion or trillion dollars into the well without even the prospect of the investment impacting the overall rate of CO_2 increase globally -- while continuing to invest heavily in the keystone technologies that might actually eventually have a cost-effective impact. Solar power, for example, cannot form the basis of an actual global civilization as things now stand. It is barely cost-effective in ideal locations as a means of eking out fossil fuel derived power, which is still required to bridge the substantial temporal gaps when the sun goes behind clouds, when it is night, and when it is winter and the sun is tipped to too great an angle for efficient generation. We have no cost-effective technology yet for storing solar power under any circumstances so that it could provide 24 hour power at global civilization rates in the middle of Death Valley or the Sahara. We have no way of transporting electricity generated in Arizona to Maine, or the Sahara to Finland -- current power transmission technology is limited to around 300 miles, and to get even to 3000 would require nearly linear scaling of e.g. high voltage transmission line dimensions, with a highly nonlinear scaling of their cost (that is, they would have to transmit power at order of ten million volts, instead of order (less than) one million, and air just isn't that good an insulator, especially when it is damp or electrically charged itself). Wind is even more problematic, as even places with comparatively reliable wind can have windless weeks as easily as days. Nuclear is the only viable non-carbon source of power, and as of this moment it has to be the primary electrical generation means, eked out by alternatives, as no alternative generation mechanism is capable of functioning as a primary. And many of the same people who oppose coal based electricity oppose nuclear just as vehemently.

In the comparatively rich West it is all too easy to forget that roughly 2/3 of the world's population is unbelievably energy poor. 1/3 of the world's population is just plain old poor -- living basically not even in the 19th or early 20th century but in the 17th century. All global poverty, at its root, is energy poverty. With enough, cheap enough, electricity one can create clean water, plenty of food, jobs, air conditioning and heating, safe and comfortable houses, the means to cook food and light homes after dark that don't involve burning dried animal dung in a tiny hut while exposed to disease-bearing parasites that fly in through the smoke one is breathing. China and India, with a huge fraction of their populations in precisely this category, have sensibly enough chosen to mostly ignore the claims of possible future catastrophe, because to do nothing to provide energy for these people right now is an ongoing catastrophe that the world closes its eyes to while worrying about 2100 and CO_2.

Maybe CO_2 will cause a global ecological and economic catastrophe. Maybe not -- the evidence is so far not at all clear or universal that it will do either one. In assessing the risks, however, one has to fairly balance the costs of measures that everyone knows will do nothing to ameliorate the CO_2 level likely in 2100 (and are enormously expensive now) against the benefits that could be derived from the same level of investment trying to, say, ameliorate global poverty, global ignorance, and global disease right now. Half a trillion dollars, wisely spent, could probably come damn close to solving these problems over 20 years. In 20 years, we might also actually have accumulated enough actual evidence (as opposed to the "projected on the basis of dysfunctional General Circulation Models" not-really-evidence-at-all kind of evidence that is actively pushed at the moment) to have a much better idea of how the climate really functions. We might have fixed the models by then so that they come a lot closer to agreeing with the data. Money currently and wisely being invested in breakthrough research and technologies -- high density energy storage, lower-cost photovoltaics, low hanging fruit like LFTR -- as well as longer shots such as high-temperature high-current superconductors capable of carrying energy the 10,000 miles necessary to provide Finland with electricity generated in the Sahara in winter, Maine with energy generated in Texas in the winter, or fusion (the universal solution to all of the world's energy woes if and when it works) might have time to mature.

That's the really silly thing. Solar technology is a no-brainer -- once costs drop to where one can amortize the investment over 10 years or less. At that point one doesn't have to "encourage" its adoption, one has to stand back and let the free market work. In some parts of the US, solar has marginally reached that (for corporate-level investments), although for individual homeowners the amortization is still too long, more like 20 years. The latter is too long (and too close to the expected lifetime of the cells) to be a truly comfortable investment, but costs are dropping, the amortization time along with them, and at some point new construction will often come with a solar roof not because it is mandated or "green", but because owning a house with most of its lifetime energy costs rolled into the mortgage at a discount relative to over-the-counter energy prices is an appealing prospect. At that point solar will rise to displace/eke out maybe 30% of our energy needs, perhaps even as much as 50% if people synchronize manufacturing energy demand to the daytime in sunshiny states. That will, of course, still not suffice to keep CO_2 levels from increasing (if the Bern model is correct) -- only the replacement of coal burning plants with nuclear power can do that, and if that were done solar itself would once again be a hammer looking for a nail, at least for a century or three.
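
The amortization arithmetic here is just simple payback (all numbers below are hypothetical placeholders, chosen only to land between the corporate and homeowner cases described above):

SYSTEM_COST = 15000.0   # installed residential system, $
ANNUAL_KWH = 10000.0    # assumed generation in a sunny locale
RATE = 0.12             # $/kWh of displaced utility power
print(f"simple payback: {SYSTEM_COST / (ANNUAL_KWH * RATE):.1f} years")   # ~12.5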

Solar on top of mature base technologies is even more of a no-brainer -- if somebody perfects e.g. zinc oxide batteries or any of the other potential ultra-high-energy-density batteries people are working on (perhaps with breakthrough nanoscale technology) tomorrow, that is a game changer. The global warming crisis would truly be over, because solar would become a viable replacement even for nuclear, and if one achieves enough energy density (e.g. anything within a factor of 2 or 3 relative to gasoline or fuel oil), one could ship electricity to Maine by charging a train-sized battery during the day in West Texas, running the charged batteries up to Maine overnight to deliver the next day's electrical power, then running them back the next day to be recharged once again. But until that happens solar will not be the basis of any real solution to the CAGW/CACC problem.

So far, I haven't heard one single solution proposed that makes sense and is capable of actually solving the problem, with the obvious exception of the adoption of global nuclear power for as long as our nuclear fuel resources hold out. Climate aside, the human species needs to solve the problem of generating power for the indefinite term in order to build a steady state civilization without the extremes of energy wealth and energy poverty clearly visible to anyone who doesn't have their head stuck in a mansion today. If that problem is solved, climate issues become moot. Until it is solved, measures being taken are simply transferring your money into the carefully selected pockets of those who make the loudest claims for greenness and have the right friends, who -- paradoxically enough -- turn out to predominantly be precisely those energy companies that everybody excoriates as being the cause of all evil. Who makes the most money out of the global warming crisis? Oil companies. Coal companies. Power companies. None of those companies suffer losses due to the "crisis" -- most of them are profitable at record levels because of the crisis.

Something to think about.

rgb
