
Comment Re:Quantum Teleportation (Score 1) 412

David: Try that black hole example. Your nominal terminal velocity at the horizon equals lightspeed, and deeper in, it's more than lightspeed. It can be entirely legal to travel faster than the background speed of light, and even the local speed of light, as long as you aren't exceeding the local velocity of light in the direction of motion. You just probably won't be able to report your success back to any observers outside the hole. :)

That's the basis of the Krasnikov tube idea. You activate the tube, ride the artificial gravitational gradient to your destination arbitrarily fast, then, if the tube polarity is reversed, ride it back again, arbitrarily fast. The way your signals get mangled means that round-trip SR-style definitions of dates and times go all to hell, but those definitions also break down in some pretty mundane everyday situations, and as long as you don't actually arrive before you left (and why would you?), there's no underlying causality paradox. The optics get scrambled, but that's about it.

Certainly, the SR definitions go a bit mental in this scenario, but they also go a bit mental in the presence of conventional gravitational fields, and we don't say that therefore gravitational fields can't exist ... we say that SR doesn't claim validity in the presence of significant gravitational fields, because the field gradients violate the basic geometrical assumptions that SR was built on.

So trying to disprove the existence of warp drives using special relativity is a bit crazy. Trying to disprove "metric engineering" solutions by using a theory that presupposes flat spacetime is like trying to disprove the viability of aerofoil-based heavier-than-air aircraft designs by presupposing the absence of air. One can certainly obtain a rigorous disproof, but the disproof is pretty much worthless, because it's based on the simplifying assumed absence of the very effects that are required to make the hypothetical mechanism work (in this case, gravitational distortion).

This doesn't necessarily mean that we really can build a practical warp drive – there may be other insuperable obstacles – but the usual reasons given for why we can't do it are ... let's say ... not especially intelligent.

Comment Re:Quantum Teleportation (Score 1) 412

Thank you for a collection of dressed-up semantic arguments.

I wasn't aware that they were dressed-up! :) Semantic analysis is often an important tool in the more abstract branches of theoretical physics. Think about how often Einstein demanded that we reexamine exactly what we really mean by, say, "distance" or "time" in a given situation.

Next you'll tell me that white is black if you squint hard enough.

That's you indulging in a fantasy future scenario. Scientific debate is usually more constructive when people spend more time addressing what each other have actually said, than what they can imagine each other saying.

... , all future theories must predict all current and historical measurements, ...

No, they have to predict most current and historical measurements, and an explanation should be available for any remaining mismatches (e.g. human fallibility, test theory limitations, peer pressure, etc.). The explanation for those mismatches can legitimately be social/psychological rather than based on fundamental physics (e.g. expectation bias).

People screw up, and physicists are people. The experimenter is part of the experiment.

If we always required perfect agreement with reported results, then special relativity would have been dismissed, because the first peer-reviewed published experimental paper on testing SR concluded that the theory didn't make a great match to the experimental evidence. With hindsight, we now consider that first experimental paper to have been flawed.

Remember also that in the early days, Darwinian evolution was supposedly soundly disproved by thermodynamics applied to the available experimental data. Instead of killing the theory, we kept it, and later on our knowledge of physics changed in such a way that Darwin's idea turned out to be compatible with the new calculations.

We can't (and shouldn't attempt to!) generate all peer-reviewed historical measurements from a physical theory, because many of those results are now known to be bad, and/or mutually contradictory. Consider all the early experiments that claimed to have either verified or disproved the existence of gravitational shift before about 1959. With hindsight, all of those early g-shift experiments, including the ones that got the right answers, and the occasional prize-winner, are now considered to be junk science, not to be included in any proper scientific review of the evidence. We have to assume that the currently accepted dataset may include the occasional rogue element, which we may have overlooked because it agreed with our expectations.

While I understand the argument that agreeing with all currently-believed experimental data is a great test for a theory, I would suggest that an even more powerful theoretical success would be for a theory to disagree with the occasional known result, so that when we go back and retest those results in the light of the new theory, we find that the original results are unsound. I'd suggest that a theory of this sort that is only 99% compatible with known results is actually more scientifically falsifiable, and has potentially greater predictive power than one that has been tailored to exactly correspond to all currently accepted results, as a retrospective curve-fitting exercise.

A "99%" theory (if validated) can reveal to us previously unrecognised problems and mistakes in the existing peer-reviewed record caused by human fallibility, whereas a "100%" theory is partly defined by any such fallibilities and helps to perpetuate them.

Submission + - Hawking Says Scientific Progress is Major Source of New Threats to Humanity

HughPickens.com writes: BBC reports that according to Stephen Hawking most of the threats humans now face come from advances in science and technology, including nuclear war, global warming and genetically-engineered viruses. "Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years," said Hawking in answer to a question during the BBC Reith Lectures. "By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race. However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period."

During his lecture Hawking also answered a question on whether his synthesized electronic voice had shaped his personality, perhaps allowing the introvert to become an extrovert. Replying that he had never been called an introvert before, Hawking added: “Just because I spend a lot of time thinking doesn’t mean I don’t like parties and getting into trouble.”
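Hawking's "it adds up over time" remark is straightforward compound probability: even a tiny annual risk of catastrophe accumulates over millennia. A minimal sketch (the per-year probabilities below are made-up illustrative numbers, not Hawking's figures):

```python
# Cumulative probability of at least one catastrophe over many
# independent years, given a fixed per-year probability.
# The per-year values used here are purely illustrative.

def cumulative_risk(p_per_year: float, years: int) -> float:
    """Probability of at least one event in `years` independent years."""
    return 1.0 - (1.0 - p_per_year) ** years

for p in (0.0001, 0.0005, 0.001):
    print(f"p = {p:.2%}/yr over 10,000 yr -> {cumulative_risk(p, 10_000):.1%}")
```

Even a 1-in-10,000 annual chance gives roughly a 63% chance of disaster over ten thousand years, and a 1-in-1,000 chance makes it all but certain.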

Submission + - Einstein's General Theory of Relativity is 100 years old (newscientist.com)

ErkDemon writes: Einstein produced the "release version" of his General Theory of Relativity in 1916, with his full paper "The Foundation of the General Theory of Relativity" ("Die Grundlage der allgemeinen Relativitätstheorie", Annalen der Physik 354 (7), 769-822), the field equations having been presented and published towards the end of 1915.
Combining Ernst Mach's arguments for relativistic gravitation with work on curved geometries developed by Gauss and Riemann, Einstein's general theory became one of the defining scientific theories of the Twentieth Century, and institutes around the world have been marking the centenary, including IAS Princeton, the Max Planck Institute, and Imperial College, London. Work on the subject may not yet be complete – the theory is famously at odds with parts of quantum mechanics, and its reliance on the "dark matter" hypothesis to produce correct predictions for galaxy-scale physics is still a matter of controversy.

Comment Re:Quantum Teleportation (Score 1) 412

" No matter what happens in any arbitrarily long span of scientific development, red will not become blue, ... "

A signal viewed from an environment with a more intense gravitational field is seen to be shifted towards the shorter-wavelength, higher-energy end of the spectrum. It's called gravitational blueshift. Red emitted light, viewed from the bottom of a sufficiently deep gravity-well, can in theory be seen to be blue. Or you could have the source and viewer approaching each other at high speed, and use the conventional Doppler blueshift to turn red into blue.
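For the Doppler version, the required approach speed follows from the relativistic longitudinal Doppler formula. A quick sketch, taking roughly 700 nm for "red" and 450 nm for "blue" (those wavelengths are illustrative round numbers, not part of the original comment):

```python
# Relativistic longitudinal Doppler shift for an approaching source:
#   f_obs / f_emit = sqrt((1 + b) / (1 - b)),  where b = v/c.
# Solving for b given a target frequency ratio k:
#   b = (k**2 - 1) / (k**2 + 1)

def beta_for_blueshift(lambda_emitted_nm: float, lambda_observed_nm: float) -> float:
    """Fraction of lightspeed needed to shift the emitted wavelength
    down to the observed wavelength via Doppler blueshift."""
    k = lambda_emitted_nm / lambda_observed_nm  # frequency ratio f_obs / f_emit
    return (k**2 - 1) / (k**2 + 1)

b = beta_for_blueshift(700.0, 450.0)
print(f"turning 700 nm red into 450 nm blue needs v = {b:.3f} c")
```

So a closing speed of a bit over 0.4 c is enough to see red as blue, with no exotic physics involved.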

" ... down will not become up, ..."

Gravitational fields can be intransitive. If two identical observers orbit the same fast-spinning star on opposite sides, then the star's rotation creates an associated gravitational effect that pulls matter preferentially in the direction of the star's rotation. When the two observers exchange signals, observer A can appear to be higher than B according to signals sent one way around the star, but lower than B according to signals sent the other way.

"... and neither (classical) information nor human craft will exceed the speed of light."

That depends on which speed of light, whose speed of light, and whether we remember that in physics, we normally talk about velocities rather than speeds, except in situations where the averaged round-trip signal speed is used to define a coordinate grid (which tends to happen in relativity theory, because we find coordinate grids comforting and reassuring). If you want to travel faster than the "cartographic" background speed of light, it's easy – just freefall or dive into a black hole, and when you pass the r=2M radius, you should be moving faster than distant background lightspeed. Of course, you won't be travelling faster than your own local light, because that's infalling too, so no physical paradoxes. Admittedly, the interior of a black hole is probably not somewhere that most people would want to travel to at high speed ... but that's a different problem. Theorems that forbid super-fast travel tend to assume that lightspeeds have to be isotropic (because that was a simplifying assumption made by special relativity), whereas in reality, they're not. You can travel (in theory) as fast as you like, as long as there's a suitable gravitational gradient pointing in the right direction. Special relativity's geometry isn't necessarily valid in the presence of gravitational gradients, and/or lightspeed anisotropies.

" To believe otherwise is to believe not in science but in fairy stories. "

IMO, science (when it's done properly) isn't supposed to be about beliefs; it's supposed to be about working hypotheses. When people start "believing" too much of what they're told, that's when science often goes bad.

Submission + - Happy Birthday Frank Hornby!

ErkDemon writes: Today is the 150th anniversary of the birth of inventor and toymaker Frank Hornby.
Hornby invented the Meccano metal construction toy (currently sold as Erector in the US) that inspired generations of children to become engineers, patenting the basis of his system in 1901. Originally sold as an educational system for teaching mechanics, “Mechanics Made Easy” became “Meccano” in 1907, and Hornby’s company, Meccano Ltd. went on to become one of Britain’s biggest toymakers, with Hornby creating a further string of product lines including Hornby Trains and Dinky Toys.
Hornby’s is a rare “British inventor” success story — his creation turned him from being a clerk in a meat importing company with no real qualifications or schooling into a millionaire industrialist and Member of Parliament.

Comment Causality issues outside Special Relativity (Score 1) 1088

Outside of special relativity, having particles travelling faster than the background speed of light doesn't necessarily introduce causality violations, if the local /velocity/ of light, at that location and moment, in that same direction, is even greater.

Consider the case of a drifting particle falling into a black hole from null infinity. The inward velocity of the particle would be expected to hit v=c at the event horizon, and to continue increasing (unobserved) as the particle continued to fall, to an arbitrarily high multiple of background lightspeed. But the particle doesn't illegally time-reverse, because it never overtakes its own signals (which are falling inwards even faster). So gravitational event horizons provide an example of predicted (censored) super-fast motion, without involving exotica like negative energy-densities. Like Newcomb's old argument against heavier-than-air people-carrying craft, general disproofs of superfast motion are mathematically tidy, but not necessarily physically reliable.
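The infall scenario above can be put into numbers using the "river model" (Gullstrand-Painlevé) picture of a Schwarzschild black hole, in which a particle dropped from rest at infinity moves inward at v/c = sqrt(rs/r), where rs = 2M is the Schwarzschild radius (geometric units, G = c = 1). A minimal sketch of that claim:

```python
from math import sqrt

# Gullstrand-Painleve ("river model") coordinate speed of a particle
# falling from rest at infinity into a Schwarzschild black hole:
#   v/c = sqrt(rs / r),   rs = 2M  (geometric units, G = c = 1).
# The speed reaches c exactly at the horizon and exceeds the background
# value of c inside it, as described in the comment above.

def infall_speed_over_c(r_over_rs: float) -> float:
    """Inward coordinate speed (in units of c) at radius r = r_over_rs * rs."""
    return sqrt(1.0 / r_over_rs)

for r in (4.0, 1.0, 0.25):
    print(f"r = {r:>4} rs -> v = {infall_speed_over_c(r):.2f} c")
```

At four Schwarzschild radii the particle is doing 0.5 c; at the horizon, exactly c; at a quarter of the horizon radius, 2 c relative to the background grid, while never overtaking the locally infalling light.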

Outside of black hole problems, super-fast motion can be legal if you use a relativistic acoustic metric instead of the Minkowski metric (in an r.a.m., the motion of a particle is associated with a local offset in nearby light-velocities, allowing the particle to move faster than background c without ever exceeding local c).

Relativistic acoustic metrics are fun, and seem to reconcile quantum mechanics with several key aspects of general relativity - they're tentatively used by some people exploring "quantum gravity" options, when modelling Hawking radiation.
... The reason why we don't use relativistic acoustic metrics seems to be partly historical/social: Special relativity got there first and established the Minkowski metric as a standard, and some relationships come out differently with an r.a.m. than they do with special relativity, so we tend to say that unless someone has convincing evidence that says otherwise, the SR version of events is considered to be "canon". And it's difficult for evidence to be considered convincing if it runs counter to one of the best-known scientific theories, so there's a kind of positive-feedback loop in operation.

Mainstream relativity guys tend not to study r.a.m.'s, not because anyone's come up with a logical reason why they shouldn't work, but because they're told that SR-compliance is mandatory for any credible relativistic field theory, and it's generally thought that violations of SR (like particles moving faster than background c) simply don't happen. So other than the quantum gravity guys, almost nobody's been looking at this class of relativity theory, and the QG guys tend to stop at the point where the thing starts to diverge from special relativity.

Short Answer: Yes, if this thing is right, it probably involves rewriting the physics rulebook, and probably junking special relativity, but ... no, the requirement for special relativity was never really as strong as many people seemed to believe. Yes, losing special relativity would be major from a theoretical and social point of view, but no, it's not too difficult to construct a relativistic alternative, if you're prepared to lose the simplifying assumption of flat spacetime.

(So yes, it might simply be a duff experiment. But it's not yet safe or sensible to assume that that's the case).

Have a Cool Day,
Eric 0955706831

Comment Re:No! It is really, really bad. (Score 1) 2288

Yeah, and before one publishes a paper quoting rainfall in gallons per square yard, they have to decide whether they'll be using Imperial gallons or US gallons, because the two are significantly different. Apparently the Imperial gallon is 4.54609 litres, and the US gallon is 3.785411784 litres, making the US gallon very close to 5/6 of the Imperial measure with the same name. If someone doesn't realise that there's no single internationally-agreed definition of a "gallon" -- it's not an international unit -- then if they're unlucky, their calculations can be off by 20%.
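The arithmetic behind that 20% figure, using the two litre definitions quoted above:

```python
# The two incompatible "gallons", in litres (both exact definitions:
# the Imperial gallon is defined as 4.54609 L, the US gallon as
# 231 cubic inches = 3.785411784 L).
IMPERIAL_GALLON_L = 4.54609
US_GALLON_L = 3.785411784

ratio = US_GALLON_L / IMPERIAL_GALLON_L
print(f"US gallon = {ratio:.4f} Imperial gallons (5/6 = {5/6:.4f})")

# Misreading Imperial-gallon data as US gallons overstates a volume:
error = IMPERIAL_GALLON_L / US_GALLON_L - 1
print(f"worst-case error from confusing the two: {error:.1%}")
```

The ratio comes out at about 0.833, and reading one gallon as the other mis-states a quantity by roughly 20%.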

Comment Re:Not so bad to have different systems. (Score 1) 2288

Actually, the Imperial hundredweight is 112 lb, but the US hundredweight is 100 lb. That's why there's a different number of pounds to the US ton and the Imperial ton, and why commodity traders talk about "long tons" and "short tons". The metric tonne is conveniently in the middle. And talking of commodities, the US gallon is different to the Imperial gallon, and the US oil barrel //I think// corresponds to the eel-barrel rather than the wine-barrel?

The trouble with these "natural" measures that everybody supposedly understands is that they were different all over the world. Imperial and American inches were different sizes before they both got standardised on 2.54 centimetres, and this made US and Imperial feet and miles slightly different, too. It was a nightmare for engineering work if you bought in a load of foreign machine tools and they were marked up in the wrong sort of inches. Even basic cookery measurements are locally different: a cup of sugar in the US is different to a cup of sugar in the UK. And don't get me started on pounds and ounces ... an ounce was a different weight depending on whether you were measuring liquid, grain, solid, wine, spirits, gold ... and as for feet, there might have been, what, ten different local definitions of a foot, with some using twelve inches and some using thirteen?

Before the metric system, international weights and measures were a disaster. After it was introduced, people could at least define their local measures in terms of a single universally understood reference, rather than have a bookshelf of arbitrary and approximate third-party conversion tables and almanacs comparing different quantities with the same names using different materials in different countries. And often these conversions weren't officially sanctioned by anyone, because there simply wasn't an official conversion factor for the same nominal unit in Country X and Country Y.
We could say that the Imperial and US inches seemed to be different by a factor of ... something ... but the US inch wasn't going to be //officially// defined as X Imperial inches, and vice versa, so the conversions were always measured approximations rather than strict engineering definitions.
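The hundredweight/ton arithmetic above can be checked directly. A minimal sketch using the international pound (the post-standardisation definition, 0.45359237 kg exactly):

```python
# The "long" and "short" tons follow from the two hundredweights:
# 20 x 112 lb = 2240 lb (Imperial/long), 20 x 100 lb = 2000 lb (US/short).
LB_TO_KG = 0.45359237  # international avoirdupois pound, exact

long_ton_kg = 2240 * LB_TO_KG   # Imperial ("long") ton
short_ton_kg = 2000 * LB_TO_KG  # US ("short") ton
tonne_kg = 1000.0               # metric tonne

print(f"long ton  = {long_ton_kg:.1f} kg")
print(f"short ton = {short_ton_kg:.1f} kg")
print(f"tonne     = {tonne_kg:.1f} kg")
```

The long ton works out to about 1016 kg and the short ton to about 907 kg, so the 1000 kg tonne really does sit conveniently between the two.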

Comment Re:The government IS causing the loss of value (Score 1) 424

To be fair, the Government guys managed to remove the first nine pounds of explosives by hand before they gave up and threw in the towel. Apparently the place is so packed with explosives-related equipment (half-built fragmentation grenades and the like) that they felt that taking anything else out would be too dangerous. Robots aren't an answer if you're dealing with a junkyard of explosive gear stacked high, where a robot fumble is liable to knock things over. Sure, if the robot gets blown up, nobody's dead ... but it could blow up the whole remaining stash. Which means that all the expensive protection work they're doing now to try to protect the surrounding neighbourhood would have to be done anyway.

Comment Re:Complete incineration of toxins - how? (Score 1) 424

Hey, don't diss thermite! For genuinely nasty explosive chemicals, try hydrazine.

When I was a kid, my chemistry book warned that hydrazine had a tendency to explode unexpectedly in response to vibration. Or heat. Or light. Or cold. Or sound. Or electrical charge. Or chemical reaction with contaminants on the surface of the holding vessel. Or roughness on the surface of the holding vessel.

Or ... basically, if you looked at it kinda funny.

And on top of all that, it's supposed to be horribly toxic.
