The Internet

Interplanetary Internet protocol in devel

shadowlight1 writes "This MaximumPCMag article discusses NASA's current research into interplanetary protocols for Internet data. The research team includes one of the inventors of TCP/IP. Get ready to ping-flood Pluto."
  • Except that your lag will make it seem like you're on another planet.
  • if there were some new physical layer, like gravity waves or something, able to signal faster than c - and before quoting Einstein, there ARE phenomena that travel >c (they just carry NO mass).

    Chuck
  • It certainly gives a whole new meaning to "Fingering Uranus."
  • What ping flood Pluto ? I've been trying for years to get a signal thru to Venus and now I find out NASA hasn't come up with the protocol yet. Arrggh !!!!
  • I think that by the time we had a real black hole in our solar system, we'd have more pressing problems than the packet loss.
  • The [Slashdot] article never stated that it was TCP/IP, so drop your "greater-than-thou" attitude.

    The research team includes one of the inventors of TCP/IP
  • If I recall, this is an ancient idea. Mighta been in one of the Programming Pearls books?


    ---
    Have a Sloppy day!
  • I've thought about this for a while, ever since satellite was the "next big thing" for high bandwidth ISP connections. The only problem with Satcom is that it requires either a hell of a lot of satellites in LEO (cf. Iridium) or a few satellites in Geosync.

    And if you are dealing with Geosync, you have a minimum 0.5 second round trip, and that isn't even going to the moon, but staying on Earth! Even the moon gives a minimum 3 second round trip.

    Scott
    "Space is big. Really big!" - William Shatner, SpaceLine.com
  • Oh, prawly. I didn't think I was the first person on the planet to come up with that idea.

    Although the Programming Pearls books can't be like, real ancient, if the title means what I think it does. ;)

    Breace.
  • Yes you can do this with any particle with spin (electrons for example). You can also do it with optical polarization of photons. I'm sure other nonlocally entangled two state systems exist.

    However, you cannot use this to transmit information. For instance, two entangled particles are emitted separating from each other at a rate of 2c. You measure the state of one of the particles (suppose it is spin up), then the distant particle will then become the appropriate state (say spin down).

    Measurement doesn't force the spin state to a value of your choosing ... it only forces the global system to be consistent with your local measurement. You can use this to transmit a one time pad but the protocols used also require a normal slower-than-light channel between the sender and receiver.

    Kevin
  • 1 moon.earth.sol.mw (34.21.56) 500.263 ms 601.991 ms 541.324 ms
    2 mars.sol.mw (34.25.5) 180400.005 ms 185394.558 ms *
    3 jupiter.sol.mw (54.2.3) 3600530.348 ms 3601001.451 ms 3602219.045 ms
    4 pluto.sol.mw (68.3.4) 604803040.079 ms 604804356.086 ms *
  • Didn't I read a while back about a quantum computer they're building at Los Alamos that can transfer information at (seemingly) faster-than-light speed? I seem to remember that they had it up to 12 km sometime early this year. If you can transfer information instantly over 12 km, I don't see why 12 billion miles would be that insurmountable (I'm serious).

    Maybe we don't have to deal with planetary lag, after all. Or maybe I've just read Ender's Game one too many times.

    ----

  • by MenTaLguY ( 5483 )
    I get 2 second ping times on IRC all the time. Does this mean NASA will be running an ircd on their next interplanetary probe?

    *grins*
    Berlin-- http://www.berlin-consortium.org [berlin-consortium.org]
  • by jd ( 1658 )
    This would be absolutely cool! I could ogg the entire Neptunian Netrek team before they could react! And take out all their major worlds.

    I bet Stef still couldn't win a Quake game against them, though.

    Seriously, though, the time delay makes TCP-style communication silly, and if you're using UDP the lag becomes irrelevant (except in Netrek! :).

    It seems bizarre that they'd need to invent a new protocol for this when protocols which would function perfectly well already exist.

    (Or are they trying to score geek-points?)

  • by drwiii ( 434 ) on Wednesday August 25, 1999 @07:15AM (#1726262) Homepage
    5 router.blackhole.sol.mw (42.42.42.42) 624810293.843 ms * *
    6 * * *
    7 * * *
    8 * * *

    ---

  • They seem to have forgotten that UDP is a non-ACK'd unverified non-sequential protocol - you send the packets out and _hope_ they get there. Assuming they can make the networks reliable enough, UDP would be perfect - just send out commands split up into packets, with each packet tagged so they can be reassembled in the correct order, and be sure to send the packets in duplicate, from two different transceivers - then the request can be processed and the data sent back. Obviously, this would not work for things such as web browsing, FTP, and telnet - derivatives of TFTP and finger would be promising though.
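    The tag-and-reassemble scheme described above can be sketched in a few lines of Python. Everything here is illustrative - the chunk size, the send-everything-twice policy, and all the names are invented for the example, not any actual NASA protocol:

```python
import random

PACKET_SIZE = 4  # bytes per chunk; tiny so the example stays readable

def packetize(data):
    """Tag each chunk with a sequence number so the receiver can reorder."""
    return [(seq, data[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(data), PACKET_SIZE))]

def transmit(packets, loss_rate=0.3):
    """Send every packet twice (two transceivers); drop some at random."""
    sent = packets * 2
    random.shuffle(sent)                 # arrival order is not guaranteed
    return [p for p in sent if random.random() > loss_rate]

def reassemble(received, total):
    """Deduplicate by sequence number and stitch the chunks back in order."""
    chunks = dict(received)              # duplicate arrivals simply overwrite
    if len(chunks) < total:
        return None                      # a gap: schedule a resend request
    return b"".join(chunks[i] for i in range(total))

packets = packetize(b"command: rotate antenna 3 degrees")
result = reassemble(transmit(packets), len(packets))
# result is the full command, or None if both copies of some chunk were lost
```

    Sending each packet twice roughly squares the per-packet loss probability, which is why the duplicate-transceiver idea matters on a link where asking for a resend costs hours.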
  • It's true this work has been going on for a while. SCPS is a close relative of TCP/IP - in fact, out-of-the-box TCP/IP is a valid subset of SCPS. SCPS is not intended for terrestrial applications, and its proponents freely admit that its interface to the terrestrial IP network will be through application-level gateways.

    SCPS supports such things as drastically compressed packets, big windows, SACK, several address families (including a really tiny one for a constellation of spacecraft), and a bunch of other not-new ideas. Its file transfer protocol supports hole-filling and such-like, and it has an integrated security protocol with its own complete spec. It's really a pretty good piece of work, and has performed well when tested over satellite links.

    But it's not a be-all and end-all (though the earlier poster's information that it has problems came as a surprise to me). Deep-space communications do come with their own set of problems. Doppler shift, strange modulation schemes and the like are all part of the big bad black void.

    I'm kind of amused by the considerations of deep-space host naming, though Cerf is correct that the problems of deep-space packet routing are best solved before it becomes a problem rather than after. He's an optimist and a space lover with such a big space bump that he guest-starred on _Earth: Final Conflict_. But the poster who tied this into the ICANN problems and then said he wanted to get rich off the .earth domain made me laugh until I remembered how those greedy opportunists made Jon Postel's last days a living hell. A pox on all of 'em.
  • by drwiii ( 434 )
    Gives new meaning to "the Sun is down" (:

    ---

  • Naw...remember that TCP/IP isn't a hardware specification. In fact IP's credo is that it's a communication protocol that works across various hardware (copper wire, fiber, token ring, interplanetary communication array :-).
  • "...as will the inventor of TCP/IP, Vinton Cerf, who is known as the "father of the Internet." How dare he steal credit from Algore?

    Doh! Beat me to it....

    I was just wondering the same thing as well...

    any ideas, pass 'em on!
  • I want to clear up a few points
    1. They're talking about SCPS. There are a few messages that will tell you more about that.
    2. The round trip to Pluto is 17 hours. You can't send ACKs all the way there and wait for the packet to get back.
    But you can do it with a car phone. The distance to cover is less than .3s (add a .3s buffer and you only need to go to the cell - if the cell does buffering)
    3. If I get this right, the protocol is going to be some sort of UDP with major error correction. Huge proxies are needed at either end of the line if you wanna surf the Web while hiking (at least past the moon). Of course some data will have to be resent - there will be some sort of send-on-request mechanism.
    4. I got a 1.5 sec lag and still can play Starcraft. So TCP/IP is good enough for wiring the Moon.
  • One question: How is the above post "Flamebait"?

    --------
    "I already have all the latest software."
  • Obviously, getting radio to the "dark" side of the moon isn't an insurmountable problem, and NASA has plenty of solutions available. (Orbiting a comm sat around Earth - Luna Lagrange point L2, talking to geo-stationary comm-sats would do it)

    It's just going to be a bit expensive, and we don't currently need to talk to anybody back there.

    By the way, the "dark side" is unseen from Earth, not unlit.
  • Well if we want to be able to convert to their protocol, we'd probably want to have domains that mean something everywhere, e.g.: slashdot.org.earth.sol.milkyway, or even slashdot.org.III.sol.milkyway...
  • Well, since IPv6 is finally out, we can refer to that as the "New Internet", which makes the previous "New Internet" the "Old Internet", and the "Old Internet" still the "Old Internet", thus squishing the notion of the "Old New Internet".

    Huff, Huff!
    --------
    "I already have all the latest software."
  • I'd guess that ping flooding wouldn't work... Fiber is cheap, but inter-planetary bandwidth will come at a premium, is my guess. There will probably be all sorts of things to make sure only packets from authorized places will make it between planets, and also bandwidth constraints.

    As it happens, FTP is already for the University of Mars. Just not telnet.

  • by Psiren ( 6145 )
    Now we'll have some really cool excuses for servers being down...

    Sorry, our planet was eclipsed.
    Damn rocks got in the way of the packets I tells ya!
    Ooops, wrong moon ;)

  • by PHroD ( 1018 )
    The first things aliens will receive from us will be porn banners and spam promising weight loss


    "There is no spoon" - Neo, The Matrix
  • Routing a call to a speeding BMW is not the same as getting data to astronauts. The problem with getting a call to a vehicle is that the position is changing and coverage is limited due to terrain. The problem in getting data to planetary locations is propagation delay.

    Transmit and acknowledgement can work in space, you just need larger timeout values, much larger. The article states that connection-based links are impossible. "Connections" are a state of mind, so to speak. You could achieve connections, they would just be very high latency. And things like forward error correction can minimize data loss on high noise radio links.

    "Off-the-shelf" technology and know-how should be able to solve the problems described in this article rather quickly.

  • ...and a live audio feed from Britney Spears!
    Would they interpret that as a very primitive way of communicating, or an act of aggression?

    we live in dangerous times...
  • I'm thinking the protocol or method would be more similar to UUCP for the exchange of data than TCP/IP. Will kibo gain more followers? I think so.
  • by DdJ ( 10790 ) on Wednesday August 25, 1999 @05:45AM (#1726284) Homepage Journal
    Bandwidth to Pluto isn't necessarily going to be all that bad. Latency is what's going to suck. If it takes you 30 seconds to get a packet from point A to point B... that tells you nothing at all about the bandwidth. This is nonintuitive to some people. A common saying back when usenet was done via UUCP instead of TCP/IP was "it's hard to beat the bandwidth of a station wagon full of mag tapes". A station wagon full of media has a *tremendous* bandwidth, but really poor latency and a huge "packet size". UUCP is actually more suited to interplanetary communications than TCP/IP is. Luckily, we've got some great tools for getting UUCP networks and TCP/IP networks to play together nicely -- mail and news will work without a hitch over UUCP, even today. And MX records mean never having to say "I hate bang paths".
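    The station wagon comparison holds up if you put rough numbers on it. The tape count, per-tape capacity, and drive time below are all invented, plausible-for-1999 figures, just to show the arithmetic:

```python
# All figures are assumptions, picked to be plausible for 1999-era tapes.
tapes = 500              # cartridges that fit in the wagon
gb_per_tape = 10         # capacity of one tape, in GB
drive_hours = 4          # time to drive the wagon across town

bits_moved = tapes * gb_per_tape * 8e9
throughput_mbps = bits_moved / (drive_hours * 3600) / 1e6
latency_s = drive_hours * 3600

print(f"~{throughput_mbps:.0f} Mbit/s of bandwidth, {latency_s:.0f} s of latency")
```

    That comes out to well over 2 Gbit/s of sustained throughput - far beyond any 1999 backbone link - paired with a four-hour "ping". Bandwidth and latency really are independent.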
  • This type of protocol would have to handle huge lag.. Think about it.. it's 8 minutes for light to go from here to the Sun, a bit under an hour to Jupiter, right? Pluto is what, a week? The moon is 1 second each way..

    Any protocol that can easily cope with even a 2 second ping time can be used for all sorts of things.. They make mention of cars and such in the article, but basically you can toss them at any moving target and be okay with it. Nice.

    ---
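    The delays guessed at above are easy to check: one-way light time is just distance over c. The distances below are rough averages (they swing widely with orbital position):

```python
C_KM_S = 299_792.458  # speed of light in km/s

# Approximate distances from Earth, in km - rough averages only.
distances_km = {
    "Moon":    384_400,
    "Sun":     149_600_000,
    "Jupiter": 778_000_000,   # mean distance from the Sun; close enough here
    "Pluto":   5_900_000_000,
}

for body, km in distances_km.items():
    one_way_s = km / C_KM_S
    print(f"{body:8s} one-way {one_way_s / 60:8.1f} min")
```

    So the Sun really is about 8 minutes away and the Moon just over a second, but Pluto comes out to roughly 5.5 hours one way - a long lunch, not a week.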
  • Sorry, but this isn't anything innovative.. we've been doing interplanetary networking for quite some time now..

    check www.dark-jedi.net [dark-jedi.net] for more info.

  • So like.. can I set up a BNC server on mars and IRC as:

    coug@knows.that.women.come.from.venus.and.men.come.from.mars
    :)
  • what kind of rubbish is this, by the time we're populating other planets don't ya think this protocol will be obsolete?

    There's a manned Mars mission in ten or fifteen years, and it ain't no week-long trip. I don't know about you, but I'm not going to Mars if I can't get Slashdot...
  • Righto... I knew that. But based on what we've been sending to other planets recently (using, say, the mars probes as the state of the art) we aren't getting all that much bandwidth (in addition to high latency) due to all sorts of factors, not the least of which is that you have to spend lots and lots of space on Error Detection & Accommodation. And if we, say, have a megabit between here and Neptune, we would want to save that for people who needed it, not slashdotters using 10,000 ping samples to estimate the current distance from here to the planet.
  • I don't think lag will be an issue forever. If IBM is successful in the area of quantum teleportation as a means of information transmission, all communications regardless of distance will be instantaneous.

    See : http://www.research.ibm.com/quantuminfo/teleportation/ [ibm.com]

    We will have "ansibles", ala Ender's Game, eventually. And it will be quite cool!
    --
    "All that is visible must grow and extend itself into the realm of the invisible."

  • by gsfprez ( 27403 ) on Wednesday August 25, 1999 @07:37AM (#1726292)
    Sigh... we need more military people in /.

    Space Communications Protocol, what the author must have been talking about, first of all, is being headed by MITRE (the strap-on brain of DoD back east to help them with anything geek).

    The main goal is to develop a protocol that looks and feels to the user like TCP/IP, but handles the fact that the major reason for packet loss is.. well, literally lost or damaged packets, out in space.

    TCP/IP assumes that lost packets are because of network congestion, and so a missing packet is requested to be retransmitted.. and this usually does the trick.. since most terra-nets run on fiber or copper...

    If you kept asking for retransmissions in space - you exacerbate the problem, so that if the errors grow to only 10^-6 and you use plain ol' TCP/IP, the overhead and loss drowns the network out.. and you get nothing.

    10^-6 errors can be a good day around here in the space biz... so one of the major points of SCPS is to deal with high BERs differently than TCP/IP, the other, of course, is security (how can you get spy sat data to the ground and beam it with an RF signal that anyone can pick up?)

    SCPS has standard ftp, and will incorporate http eventually.. but it's not done yet AFAIK.

    You can read all about it here... [nasa.gov]

    http://bongo.jpl.nasa.gov/scps
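    The BER arithmetic behind that claim is easy to reproduce. The 1500-byte packet and the naive resend-until-intact policy below are my assumptions for illustration, not SCPS's actual behavior:

```python
PACKET_BITS = 12_000   # one 1500-byte packet; an assumed size

def packet_success(ber):
    """Chance an uncoded packet arrives with no bit errors at a given BER."""
    return (1 - ber) ** PACKET_BITS

def expected_sends(ber):
    """Average transmissions per packet if every corrupt packet is resent."""
    return 1 / packet_success(ber)

for ber in (1e-7, 1e-6, 1e-5, 1e-4):
    print(f"BER {ber:.0e}: {packet_success(ber):6.1%} intact, "
          f"{expected_sends(ber):5.2f} sends on average")
```

    At a BER of 10^-6 roughly one packet in eighty needs a resend, and on a deep-space link each resend costs a round trip measured in hours - which is why forward error correction beats TCP-style retransmission out there.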
  • This is mostly wrong. What they are talking about is SCPS, something that has been in development for many many years. It's a generalized protocol for links with massive delays, or even the minor delays of satellite communications compared to terrestrial networks. It also has to deal with high loss, etc. It tries to be a lot of things, and that's its problem with the implementation and functional operation.
  • I got another amazing idea!
    ("OH NO, Matt, Not another one of your amazing ideas again!?!?!" "Shut yer trap.")
    Obviously, e-mail and newsgroups would work fine, but the other sorts of requests, like file requests and data-processing requests, would be impossible to negotiate - you couldn't tell if the commands you sent were correct or not...
    Fortunately, there's a solution.
    Mirroring.
    It's not necessary to mirror everything, but you can make web surfing possible by mirroring the remote directory tree and the cgi-bins involved with site searches. For other cgi-bins (data processing and file requests), an interface would check the syntax locally, ship the request out, and tell the user approximately when to expect their results back; a local program would save the incoming results and provide a web interface for retrieving them.

    File requests would be similar: an ls -lR stored locally would be checked to see whether the file existed and whether permissions allowed access by that user (a crontab entry could mirror the listing at appropriate intervals). If the file was there, data about its size, date, etc. would be given to the user, who would approve or disapprove the request. If approved, the file would be shipped to one of many "cache servers" with very large disk storage. The user could pick up the file later, and if another user requested the same file, they could be redirected to the cache server for immediate access. Files left unused on a cache server for a certain amount of time would be deleted to make room for other files being ordered.
    The only problem with this system - time and money. Time can be allowed though, since it would be pointless to have Interplanetary Internet implemented long before there was much up there to talk to. Money will hopefully be donated by generous research corporations (probably ones with orbiting satellites.)
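    The order-now, pick-up-later flow described above can be sketched as a toy cache. The class and method names are invented for illustration:

```python
import time

class InterplanetaryCache:
    """Toy model of the 'order a file, fetch it later' scheme."""

    def __init__(self, link_rtt_s):
        self.link_rtt_s = link_rtt_s
        self.store = {}      # path -> bytes already delivered
        self.pending = {}    # path -> earliest expected pickup time

    def request(self, path):
        if path in self.store:                  # someone ordered it earlier
            return "hit", self.store[path]
        eta = time.time() + self.link_rtt_s     # one full round trip away
        self.pending.setdefault(path, eta)
        return "ordered", self.pending[path]

    def deliver(self, path, data):
        """Called when the far end's reply finally arrives."""
        self.pending.pop(path, None)
        self.store[path] = data

# A Mars-ish link: about four hours each way.
cache = InterplanetaryCache(link_rtt_s=2 * 4 * 3600)
status, eta = cache.request("/pub/mars/geology.tar")   # -> "ordered"
```

    A second request for the same path after delivery is a "hit", which is exactly the redirect-another-user-to-the-cache-server case the comment describes.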
  • by Anonymous Coward
    So what you're suggesting is a connection window of roughly a week, assuming no packet loss, no additional latency, and packets moving at the speed of light. That doesn't sound very feasible, at least under TCP/IP. As others have suggested, UUCP is probably a better bet.
  • technically, channel capacity is proportional to the bandwidth times the log of the snr, which means that channel capacity is still not that bad, even if snr is ridiculously low.

    this is, of course, if you can find error-correcting codes that nearly achieve channel capacity with low output error probability, and can be decoded fast enough. there are such codes though, like so-called turbo codes.
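    Shannon's formula makes the point concrete. The 1 MHz bandwidth and -20 dB SNR below are arbitrary example values:

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (-20 / 10)            # -20 dB: signal 100x weaker than the noise
print(capacity_bps(1e6, snr))     # roughly 14,000 bit/s from a 1 MHz channel
```

    At low SNR the capacity is roughly B * SNR / ln 2, so a terrible SNR shrinks the channel but never zeroes it - and codes like the turbo codes mentioned above get close to that bound in practice.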
  • This is a bit of old news, Vint Cerf has been working on this for more than a year now. He is acting now in advance of bad decisions expected by the ICANNt and NSI. There has been a fight going on for years over expanding the TLDs from the current 227 to thousands, millions, or an unlimited number.

    Then I could be anti@cypher, and my mail would get to me, and you could eyeball my webpage http://www.anti.cypher and so on.

    For years I ran a shadow TLD of .earth, and there were several hundred machines on the internet which used .earth with a physical location for a hostname (leuven.earth, london.earth, ougadougou.earth :-) Sendmail on those hosts believed the fake root NS records added to the BIND root.hints file, and the whole thing worked quite nicely from 1989 until 1997. Then Vint asked our group to stop using .earth so he could plan on using it as a new TLD as part of an interplanetary addressing scheme.

    There are several projects going on at the same time for this "interplanetary internet" (exonet?, xenonet?). Vint Cerf and company are working on an extensible naming scheme for planets, moons, orbits, asteroids and ships in transit.

    There is another group working on reliable transmission protocols and routing protocols to deal with huge round trip times and extremely expensive transmission costs. Just ACKing a transmission is not going to cut it, the ACKs need to be piggybacked on transmissions going the other way, and the state machine to keep track of it all will be huge.

    There is a group at Caltech working on the low level transmission characteristics (layer 1 stuff) with a large amount of redundancy. Cyclical and longitudinal redundancy woven into the bitstream, multi-frequency phase encoding, all the coolest tech for RF fanatics.

    When all this stuff comes together there will be at least one ISS and possibly some private orbital stations. Expect some privately funded space exploration missions as soon as it becomes possible for a corporation to buy some cheap boost to LEO and from there they will start to explore in the hopes of finding something to make their stockholders very rich. I've been predicting for years that cheap space missions will be the next "revolution" to replace all the hype around the internet.

    I still want to control .earth as a TLD, and the gateways sending messages between the earth domain and the space domain. Could get very rich that way :-)

    the AC
  • I can see it already, everyone clamoring to homestead a ".uranus"
  • A long long time back I worked for *cough*M$ for a while. JUST 6 MONTHS THOUGH!

    Anyways I had this silly idea then and I emailed it to our friend BG.

    I figured if you put a mirror on the moon you could store data in a 'light loop' by shining a laser at it and turning it on and off (real quickly). Turned out you could actually store quite a bit of data (for that time) in such a loop with a pretty low latency.

    So of course clouds would be an issue, and I suggested that two satellites would be easier to deal with. Just beam up your data and keep it in a laser loop between the satellites.

    Well, I never got a response from Billy Boy, but sure enough, about a year later he announced his satellite launching plans. :o)

    Breace.
  • Both radio and lasers are electromagnetic energy. Radio is just much lower frequency (visible light is about 10 terahertz IIRC)

    So far us humans haven't been able to control the modulation at anything near 10thz so we just do CW on it, which still gets a reasonable symbol rate.

    The thing about that end of the electromagnetic spectrum is that it is extremely line-of-sight. You have direct experience with this, if you've ever seen a shadow.

    Also even at much lower frequencies the line-of-sightness of radio becomes a problem. You will get blackouts from eclipses. We still don't have a way to get radio signals to the dark side of the moon.

    You also run into the problem of even a perfect receiver not being able to detect a signal amongst the randomness of space, even with extremely directional antennas (you think they put dishes on those things cuz they look cool? :) You're attempting to radiate a signal over an area several million km^2, and ten square meters of antenna gain area simply will not cut it.

    Apparently this won't be a huge problem, since we are still receiving Voyager transmissions, but it might be an argument for lasers, assuming the tight directional tolerances can be accommodated.

  • From a common quote file:
    "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway"
    --Andrew Tanenbaum

    I've lived this quote several times :-)

    This quote is relevant to linux users because it originated during some discussions between AST and Linus Torvalds. See:
    http://www.dina.kvl.dk/~abraham/Linus_vs_Tanenbaum.html
    although I no longer find the quote there :-( check the babyl archives.

    the AC
  • What this is really useful for is a more cost-effective way of communicating with deep space probes. One of the main problems NASA has had funding their probes is that they need a massive communication system just to send rudimentary commands to them. The internet grew up on low bandwidth for years, which lets many small systems act in an array to quickly transmit data - this is surprising to none of you. But a deep space network would mean it would cost a hell of a lot less to send a mission into space, because it could use a more robust communication system that was already in place. The Mars Pathfinder sent back its data packaged as e-mail is packaged, but it was only a 9600bps connection that ended when Mars set relative to us. A similar system using a distributed network would mean we could communicate with a probe 24/7, even communicate with multiple probes all using the same system. This is way more cost and resource effective than what's used now.
  • Didn't you hear? In the year 2025, they renamed Uranus to end those jokes once and for all. It's been renamed "Urrectum".
  • Yes, you should. Please.

    wow u too can see ;-)
  • Sorry if you got the impression I truly wanted to make money off of something that should always be free. I have the greatest respect for Jon Postel and all the amazing works he accomplished.

    For years there was a .earth domain, and although it wasn't official, it was fun to play with and use for training and playing. Vint Cerf is now working on a couple of projects to expand addressing and routing to the vagaries of space. All of this started a couple of years ago when NASA sent a web server up with the shuttle into orbit, and a new TLD .orb was created for the occasion. It was fun probing around the Root Name Servers to see the delegation to a NASA gateway, and for a short while it allowed zone transfers of the handful of records that existed.

    Now .orb has gone away, and there is a working group trying to protect some of the future space naming schemes. Given the various attempts by various organisations to control the TLDs and naming in general, Jon Postel and now the people he inspired are working hard to keep future naming schemes open and available for everyone, not just a greedy corporate controlled WIPO or ICANNt.

    Sorry for the misunderstanding

    the AC
  • This is a concept that some of us have been thinking about for some time - as networks get faster, there's a growing "latent" (pun intended) storage capacity in the bits that are in transit.

    Do a little math: On a gigabit network with a quarter second latency (a reasonable assumption for a "nationwide" network), there is over 30 MB of storage in the link itself (one-way). At terabit and petabit speeds (and/or tremendous latencies), the buffer becomes quite sizable.
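    That figure checks out as a plain bandwidth-delay product:

```python
def in_flight_bytes(bits_per_second, one_way_latency_s):
    """Bytes 'stored' in the link itself: bandwidth times one-way delay."""
    return bits_per_second * one_way_latency_s / 8

gigabit_us = in_flight_bytes(1e9, 0.25)   # the nationwide example above
print(gigabit_us / 1e6)                   # a bit over 31 MB in transit, one-way

# The same gigabit link stretched to Pluto (about 5.5 hours one-way light
# time) would hold terabytes of data in flight at any given moment.
print(in_flight_bytes(1e9, 5.5 * 3600) / 1e12)
```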

    So far as I know, no one really makes direct use of the network as a storage buffer, but it could be done fairly easily, so long as you don't care a whole lot about getting your data back!

    The closest thing to this in real use is the BFS (broadcast file system) used by cable headends to make files available to their digital settop boxes - they just dedicate a channel to continuously broadcasting the contents of the filesystem. Of course, there's a limit to how large such an FS can be as a practical matter, but settop boxes are small and stupid, so it works well for now. This is how things like the program guide and such are delivered in digital satellite TV systems, too.

    Maybe I'm missing something, but it seems much of the debate about RRLFNs ultimately collapses to the age-old (in internet years) controversy over whether or not TCP keepalives are a good thing. (LFNs, pronounced "elephants", are Long Fat Networks - today's high latency networks - this wording is from the applicable RFCs, 1323 and its ilk. RRLFNs are Really, Really LFNs, something I just made up.)
  • Yes, but a black hole of that size would evaporate within microseconds.
  • But where is the noise coming from? In space there is no friction, and if the payload is delivered with light/laser, is there any limit to the channel capacity? The utilization of SATCOM uplinks could hold back the channel capacity until future developments in the multiplexing area are resolved, but the satellite-to-satellite connection could be unlimited. On the topic of noise, the only time we would have the noise problem is when the signal is leaving or entering the atmosphere, and of course when you start using multihops - but the digital signal would be cleaned up at each intermediate hop and a clean signal would be sent to the next destination. How many channels can you put across a beam/pulse of light/laser? Right now it is only determined by the hardware configuration transmitting the signal, as light/laser technology is just starting to get tapped and understood. Just a few comments on this interesting subject, so let me know what you think.
  • Sure we won't be populating planets, but we may very well be setting up many a space station in orbit of earth, and bases on the Moon, and sending manned missions to Mars, etc.
  • I wonder if we could get the /. effect to take out Pulto.
  • what kind of rubbish is this, by the time we're populating other planets don't ya think this protocol will be obsolete?

    This software will be long gone by the year 2000....
    /.

  • I mean P-L-U-T-O Pluto!!!! damn keybored
  • by Signal 11 ( 7608 ) on Wednesday August 25, 1999 @06:17AM (#1726318)
    When it absolutely, positively has to be lost at the speed of light: interplanetary TCP/IP!


    --
  • Will probably need to add a upper domain to everything... Let's say, http://slashdot.org.earth.solarsystem.milkyway/ (or maybe http://slashdot.org.earth.ss.mw/)

    Then after finding other intelligence, we would need to set up some bridge to convert their protocol to Interplanetary Protocol to TCP/IP... Of course address translation and the rest; possibilities are endless and fascinating.

  • Um, nothing about this research has to do with superluminal transport of information in a useful manner. Although EPR nonlocality (the phenomenon this research takes advantage of, IIRC) does give you 'instantaneous' action-at-a-distance, it does so in such a way that information *cannot* be transmitted; the 'receiving end' can't even tell when the action has 'occurred' until the 'senders' *tell* them.

    -spc
  • Two things:

    Some experiments have been conducted which claim evanescent modes can transmit information faster than the speed of light (yes, I know the difference between group velocity and phase velocity). I am remembering a claim from a group which stated they were transmitting Mozart through a crystal at 4.7X the speed of light (if you don't mind the received signal being attenuated 80 dB). However, objections have been raised to the research (most relevant was that the audio signal + carrier was easily predictable in a mathematical sense, thus it is difficult to conclusively say when in fact the signal arrived). I could dig up an abstract for the research.

    The research at Los Alamos to which you are referring actually was concerned with Quantum Cryptography. In Quantum Cryptography, a random pad is distributed securely. The distribution mechanism involves strange EPR-paradox type faster-than-light effects. The Los Alamos 95-96 Physics Division Progress Report gives 205m for the distance under which free space quantum key distribution has been performed. I imagine greater distances have been achieved. I could dig up some abstracts here too if necessary.

    Kevin
  • We still don't have a way to get radio signals to the dark side of the moon.

    Hang a couple of repeater satellites in such a way that at least one of them can always see both stations. You could do this with lasers just as easily (well...) as radio, too.
  • Hell, if it gets them to stop spamming us with that flipping Message, I'll ping-flood Vega myself! :)

    Anyways, their servers are busy with intergalactic.distributed.net's project to crack the message in Pi.

  • Oh, ok -- I can't get xDSL service in my sorry little apartment, but people on other PLANETS get connected all day. That's real fair. Maybe I should just move to outer space, huh?

    /* Prediction -- first response to this post will say "Yes, you should. Please." :) */
  • Channel capacity (the actual number of bits per second that can be transmitted reliably) is likely to be very bad. The high amount of noise and attenuation is going to make the signal to noise ratio very low, which means that very aggressive error control schemes will be needed -- which introduce redundancies and thus reduce the effective bit transfer rate.

    Bandwidth doesn't tell you how fast data can be transferred. Channel capacity does -- which, according to the Shannon coding theorem, is proportional to both the bandwidth and signal to noise ratio.


  • Lasers can be blocked easily, bounced away from the target more easily, scattered by space dust, and the planets are in constant rotation. If your target is on the other side of the sun, how do you reach it? At least with radio you have some chance.. (A pretty small one due to the sun's massive radiation, I suspect, but at least a chance) That's a bit of an extreme example, though.

    And radio is probably cheaper and easier to work with. (dunno, but a laser with the power needed to cut through the background radiation in space would probably be much more expensive (and bigger) than a radio transmitter..)

    WARNING: Above info could be totally wrong. I'm not even close to an expert in such things. Just my somewhat educated opinion.

    l8r
    Sean
  • let's see, by one report from NASA, Pluto is 4.4 billion kilometers away from Earth...that translates into 4 hours.
    Hmm...methinks interplanetary quake matches would not be very practical!
    ...although, that's a better ping than I get right now...

    Zilfondel
  • "Yes, you should. Please."
  • With radio, you just broadcast your data. With lasers, you need to find out where your target will be at a given point in space and time, and the shot has to be extremely accurate. Why make it harder on yourself?

    Today's English Lesson: Oxymorons
