The Internet

Internet 2 Crawls Forward 130

JimBoBereLLa sent us yet another story about that wondrous beast known as Internet2. It talks about specs and bandwidth and the applications currently being used to test the new network (which currently connects 180 points, of which my sofa tragically is not one). A fairly fluffy piece, but at least it's nice to know that it's getting somewhere every few months.
This discussion has been archived. No new comments can be posted.

Internet 2 Crawls Forward

Comments Filter:
  • I administer a public Linux distribution mirror at a University with Internet2 access via a DS3. It serves out LinuxPPC, Mandrake, RH Sparc-Alpha-i386, YellowDog, and many more. Our Internet1 bandwidth usage was becoming a major factor a number of months ago, so I throttled back the FTP server and created an FTP virtualhost for all you lucky Internet2 users. That virtualhost isn't rate limited, and the users who use it love it because of its sheer speed. The only problem is that no one advertises that it's there. The distros don't put it on their download pages. It's in my welcome.msg, but we all know how apt users are to read that. grrrrr.......

  • Well, was internet "1" originally meant for home use?
  • ugh. We don't need any 3d rendering or bump mapping of that cave. Of course, this could bring pr0n to a new level!

    --
  • Yeah? Shut your cake hole, bum chum!
  • by Nexx ( 75873 ) on Tuesday August 29, 2000 @05:34AM (#819725)

    The biggest problem is that the internet is already too much of a standard.. it's easy to change standards if they're not too popular or used that much... Want proof of this? In some cases, we're still using the kermit protocol to download.. if anyone doesn't know, this is one of the first protocols written for the internet. So that leaves Internet2 to the scientific/idle rich category. By the way, has anyone done any real world benchmarking? As in, take a copy of the slashdot code, and see how it runs on internet2?

    Uh.... I think I'm missing something here. First, Internet2 is (was?) a research-only network. It's not (supposed to be) a place for pr0n and other commercial use.

    You speak of standards. The Internet, as we see it, is just a collection of heterogeneous networks in a homogeneous naming space (well, mostly--a discussion of NAT is beyond the scope of this comment). You want to move to I2? Fine. Go enroll as a student or faculty member at one of the 180 research institutions. Just don't go there for much Quake use :-)

    You also bring up Kermit. Kermit was not an Internet protocol per se; it was a terminal-to-terminal transfer protocol that assumes an unreliable link. Yes, certain instances of kermit-over-IP existed, but that doesn't make Kermit an Internet application-level protocol. Most Internet transfer tools assume TCP-level (or equivalent) functionality, leaving error detection and correction (among other things) to the lower-level protocols and dealing mostly with application-specific information themselves.

    As for benchmarking /. code, I really don't see how its code would run any differently on I2, the Internet, or my home LAN; provided that the host is mated to a decent backbone, its performance would be more dependent upon load/system configuration than the underlying network itself. I2, with its much smaller user base and a similar backbone, consequently has much lower load, so it should perform better.

    Besides, if you wanted to compare/contrast I2 to the Internet, you'd probably do a network traffic analysis (peak bandwidth, peak latency, multiplexing capabilities, etc.), which would be more a function of the routers than anything else.


    --
  • by Anonymous Coward
    On numerous occasions, the author of the article shows that he obviously isn't aware of the things he's talking about.

    Most obviously, the physical Internet2 backbone is called "a Web." The Web has only semantic structure; physical structure has nothing to do with it.

    Another obvious lapse is claiming that there's no multicast on IPv4; there is multicast, it's just that support for it is a bit uneven.
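    For illustration, IPv4 multicast really does exist, and joining a group is just a couple of socket options; here is a minimal Python sketch (the group address and port are made up for the example):

      import socket, struct

      GROUP, PORT = "224.1.1.1", 5007             # example group address/port

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      sock.bind(("", PORT))
      # Ask the kernel to join the group on the default interface.
      mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
      data, sender = sock.recvfrom(1024)           # blocks until a datagram arrives

    Whether those datagrams ever arrive is, as the poster says, up to how evenly the routers in between support multicast.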
  • You're right. If the technologies developed in I2 require the user to carry around a large brick and pay $15/minute, people will not use it.

    Duh.

  • The full sentence is:
    On NGI's 100X test-bed, you're looking at about 1 minute download time, and on a 1000X Web, the full EB can be yours in just 15 seconds.
    This makes me wonder... If the network gets 10X faster, shouldn't the download time decrease by more than a factor of four? Sure there are protocol overheads and stuff, but this seems like a rather large lossage of bandwidth utilization. ;(
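    For what it's worth, the numbers really do only work out to a 4X gain; a quick check, using the 4.5 GB figure quoted elsewhere in this discussion and treating GB as 10^9 bytes:

      size_bits = 4.5e9 * 8                  # ~36 gigabits to move

      rate_100x = size_bits / 60             # ~0.6 Gbit/s implied by "1 minute"
      rate_1000x = size_bits / 15            # ~2.4 Gbit/s implied by "15 seconds"

      print(rate_1000x / rate_100x)          # 4.0 -- a 4X speedup, not the 10X
                                             # the "100X" vs "1000X" labels suggest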
  • (The karma limit is now 50, so unless and until it is removed you're wasting your time.)
    --
  • by Metrol ( 147060 ) on Tuesday August 29, 2000 @02:35PM (#819730) Homepage
    After reading through the previous posts it doesn't seem to me that anyone has addressed the scariest part about I2, "quality of service". What this actually means is that within the header of an IPv6 packet is a priority byte. The value of this byte determines whether that packet sits waiting at the router or rushes on through. From the article...

    I-2 is researching what it calls "quality of service," some way to guarantee seamless delivery of priority transmissions. A collaborative medical procedure, for instance, should not be interrupted by e-mail traffic. One thought is to create a premium service, where critical data would be tagged so that routers would pass it through first, much the way railroads clear the tracks for express trains.

    The way the article makes it sound, there will be a purely technical reasoning behind which packets will be given a priority. Bzzzz, wrong answer folks. What is being sold to corporate IT managers out there (based on some IPv6 seminars I've been to) is that you'll be able to buy higher priority for your packets.

    Stop and really think about this. You're an ISP that can assign a different priority to packets going to and from your various customers. Are you really going to ignore the billing potential of selling higher packet priority to different folks? For that matter, as the demand goes up for higher packet priority, so does the cost.

    There are some truly frightening scenarios that could come into play here with the standard as it is presently being presented. What we're really looking at is that live medical procedure waiting at the router for a CEO's e-mail to get through.
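    For reference, the marking itself is trivial for an application to request; here's a minimal Python sketch of setting a DSCP value on a socket (whether any router honours it is entirely up to the network, and IPV6_TCLASS is only exposed on some platforms):

      import socket

      EF = 46 << 2            # DSCP "Expedited Forwarding" in the upper six bits
                              # of the IPv4 TOS / IPv6 Traffic Class byte

      s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s4.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)          # IPv4 TOS byte

      if hasattr(socket, "IPV6_TCLASS"):                           # platform-dependent
          s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
          s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, EF)

    The scary part described above isn't setting the byte; it's who gets to decide which values the routers actually respect.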
  • Does this mean that anyone who can ping this host (ie me) is on I2?

    No. The schools will route I2 as well as regular internet traffic.

    I had typical transfer rates of 5-6 Mbps late at night with I2 hosts at other schools. Of course, last time I was in school (2 years ago), I wasn't competing with all the Napster traffic...

  • Not addressed was when, if ever, will I2 be open and available to organizations and individuals outside of the 180 academic institutions?

    If it is opened, there will most likely be the typical Oklahoma Land Rush of speculators and "netrepreneurs" who wish to be the Amazon, eBay, and Sanford Wallace of the "New Internet". For better or for worse, this will change the landscape of I2 as its current users know it today.

    If it is kept closed and limited to universities it will become purely an academic entity for research, development and communication and a test bed for future technology.

    The problem with the latter possibility is that the private sponsors (Worldcom, Qwest, and others) may eventually want to see some return on their investment, translated into profitable products. If I2 does not produce marketable products or technology which can add to the companies' bottom line, there is, again, a possibility that stockholders, board directors, or company officers may decide to withdraw funding.

    Finally, a portion of I2 is financed by taxpayer money. As with nearly every federal program, there is always the possibility of Congress cutting or eliminating funding in the future, jeopardizing I2's existence. See Super Collider for further reference.
  • Shut it you tart!

    I'm an Englishman and virtually never suck men's cocks. And neither does my boyfriend.
  • According to Steve Campbell, the campus network is at 100% capacity. At least it was this spring when I was on it. I'm sure it's not as bad for summer term.

    _____________________________________________________

  • Oops Slashdot ate my less-than sign.
  • It's faster than IP-over-Bird [ietf.org].
  • There's merit to what you say. I have, in fact, presented the argument at a rudimentary level: Roads cause cars, like you say.

    Which is, in fact, a part of the problem, but not when separated from the whole view, namely urban development as an entirety. Take a step back to Portland, to San Diego, to Toronto. All of these cities have large urban populations and busy downtown cores, with major potential for traffic... but the cities in question have alleviated the problems by favoring improvements to transit rather than road infrastructure. Portland took things one step further, drawing on traffic-calming projects in Germany (they know cars). What followed was a series of laws that eliminated free parking for company employees, and large subsidies for transit and carpooling (either monetary or physical allowances; in all three cities, one can find car lanes that are only for use by cars with three or more passengers, or buses).

    What followed was a "the establishment of an urban growth boundary adopted in 1980, middle-class neighborhoods continue to grow and thrive close to the downtown instead of engaging in a suburban exodus, while more distant, exurban communities remain undeveloped, leaving the people there in therir pastoral splendour...this contrasts sharply with cities such as Detroit where 30% of the downtown core remains empty and the only people who live there are either the very rich who inhabit 'fortress' areas which are access controlled and patrolled by private police, or the very poor who live in run down areas with a decayed infrastructure...the stabilizing (emphasis mine) middle-class having fled to the suburbs long ago." [Namir Khan, Healthy Cities Report]

    The pattern is cyclical... roads --> people --> traffic --> roads --> people... you can add elements to the cycle ad infinitum, as guaranteed by the butterfly effect. Pointedly, the statement worth making is not "roads cause cars", but instead "roads do not cause fewer cars, only more traffic".

    It's interesting to think about the middle-class as the stabilizing factor in urban development (and by extension, traffic use). If we were to categorize a hierarchy of internet users, what would be the defining parameter? In the urban case, it's clearly money... on the web, I would argue that the class system of internet usage revolves not around bandwidth speed (the obvious conclusion), but rather around the wealth of knowledge and information in transfer. The premium is web space, just as in cities the premium is land. The purchasing power is in the value of your information... large multinational companies constitute wealthy, gated communities with private intranet policing and limited access, whereas the 'poor' netizens spend their time chained to useless IRC events and porn surfing. In this case, the stabilizing factor happens to be people with legitimate interests in technology and even a hand in the process. The stabilizing factor is Slashdot.

  • Unfortunately, I've been unable to find the article itself that I mentioned (FYI, it was a late 1998 issue of Australian Personal Computer magazine), but it is entirely possible that in the last 18-odd months the Internet2 project has diverged into an actual network (The Grid) and the protocol set (Internet2), distinct from each other.

    The best I can say is that I'm certain the issue was treating both the network and the protocols as a unified Internet2 project, purely for academic and similar use.
  • Also think of it this way. I would give your left arm for a 10Mb uplink!

    imagine this pricing perspective:

    Qwest charges $900.00/Mb to the internet - a 10Mb link would be $9,000.00/mo.

    You pay for tuition and get a $9K pipe to the internet - you are STOKED! (Maybe you don't get the full pipe - but it is bound to be better than anything else.)

    so - 10Mb is fast as hell - don't complain.

  • Come on now... IP-over-Moose can't be that fast. ;)
  • IPv6 has been developed largely in public. Although you can use it with static IP addresses, based on the NIC MAC address, you don't have to use this - in fact you can assign whatever IPv6 address you feel like to an interface and use that instead (as long as it's routable).

    IPv6 addresses can't be hard coded into NICs, because part of the address derives from the network provider and the site.

    The result is that IPv6 can be just as anonymous as IPv4, with some reasonable setup, though by default it's possible your IP address will be quite static. No doubt privacy-enhancing tools will make it easy to randomly choose the lower (interface identifier) part of your address to get some privacy back, just as analogous tools block cookies etc.

    If you must be so paranoid, why not at least learn about how IPv6 works before you start posting?
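    For the curious, this is roughly how the MAC-derived (EUI-64) interface identifier gets built before the host appends it to the router-advertised prefix; a sketch with a made-up MAC address:

      mac = "00:a0:c9:12:34:56"                        # made-up example MAC
      b = bytearray(int(x, 16) for x in mac.split(":"))
      b[0] ^= 0x02                                      # flip the universal/local bit
      eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:]) # wedge ff:fe into the middle

      groups = [(eui64[i] << 8) | eui64[i + 1] for i in range(0, 8, 2)]
      print(":".join("%x" % g for g in groups))         # 2a0:c9ff:fe12:3456

    The privacy tools mentioned above just replace that identifier with random bits, which is why nothing has to be hard coded anywhere.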
  • You don't know, and can't find out very reliably (at this point, anyway) what any of the three-letter gov't agencies may or may not have built into Internet2. Remember, 'Internet1' was originally ARPANet, and was built largely with federal funds and support. There's no reason to think that the second generation will have any more "backdoors" built into it than the first -- though I suppose there's no real reason that there couldn't be a good number in the current incarnation...
  • "We drill for oil in Alaska, send it through pipelines, refine it, and ship it to an oil-fired electrical utility. The oil is burned, producing steam to push turbines that generate elecricity. The electricity is sent to the grid, travelling hundreds of miles with transmission losses along the way, and thence to your clothes dryer. Here the electrical energy is converted to the mechanical energy of the revolving drum and the thermal energy of the heating coil of your dryer, allowing your clothes to dry. On the other hand, you could have just hung your clothes out to dry on a clothesline!" (Sim Van der Ryn and Stuart Cowan, Ecological Design)

    You don't give a nuclear reactor to a third-world country. This much is obvious, for a number of reasons. What's less obvious is that we give more than we need to ourselves...why not? It's only a matter of convenience. The simple answer: when we all take more than we need, everyone is shafted.

    A PalmVx has enough storage capacity to keep track of all the things you will ever do for the rest of your life in text. Your 20 gig IBM Deskstar 75GXP has enough storage space to keep track of just about anything in the correct (read: simplest) format. When everyone has this data stored on a local drive, the situation isn't that big of a deal...the consequences are internalized. When we all share a fixed space, like the net, then there's a problem.

    There is no such thing as a fat or thin pipe. Take my Coke as an example. Obviously, there are limitations on the acceptable width of the straw in the can. It has to be fat enough to allow passage of the soda (pop in Canada) with surface tension taken into effect. It has to be thin enough to fit in my mouth. Other than that, the straw's effectiveness depends on how hard one sucks.

    -j

  • As for the current internet being rejected by business: I hope you're being sarcastic....

    --

  • Who are "we" to decide who gets online?

    You assume all first posters are young... they're not.

    And my gut reaction to this is that it's so arrogant about the net, it might just be a troll.

    Malk-a-mite

  • Daniel,
    Debian is on my list of things to add but I'm out of drive space at the moment. Since I'm supplying the hardware out of my own pocket, upgrades progress slowly. :-(

    How you access an I2 server from an I2 network depends on how your University set it up. Some, like mine, simply used routing tables on our border router to direct I2 packets towards our I2 line, and likewise for I1 packets. Some universities only hook up certain areas or buildings to that specific network. It really depends on the university. Email me and I'll give you the URL of my I2-only virtualhost. here [mailto]

  • Umm, playing Unreal Tourney in a CAVE would require no more network bandwidth. It would require loads of processor power though.
  • Canada already has a faster network. I don't think it's getting much use yet (it's a government project). Now where's that link?

    Doh.
  • No, that's not it though... these are all the symptoms of a problem that can't be escaped. It's a trivial conclusion that traffic in a closed system doesn't grow past its peak, and will settle at equilibrium. The thing is, there's no such thing as a closed urban system... by default, a healthy community must grow.

    The closest comparison of a system we can observe in relation to the internet is an ecosystem. In either case, it grows, it shrinks, it responds to change, it is populated (an ecosystem by organisms, the net by information), whereby only the fittest of beings/data survives progress, and it is abused by humans. The only difference is that we created it.

    Here then are the things to keep in mind when making an analysis of the net: (excerpts taken from The Ecosphere, by Barry Commoner)

    The First Law of Ecology: Everything is Connected to Something Else
    No such thing as inside-outside influence in the internet. There is only one set of 'organisms', one collective of criteria that affect the net, and it's anything that is wired. Bloat or not, it will affect our network. "The dynamic behaviour of a cybernetic system - for example, the frequency of its natural oscillations, the speed with which it responds to external changes, and its over-all rate of operation - depends on the relative rates of its constituent steps." Like Baka_Boy was saying, when you get off the freeway, if the local roads suck, you're still screwed.

    The Second Law of Ecology: Everything Must Go Somewhere
    This is where the principle fails. On the net, you find dead ends. On the net, you find concepts and organisms that are ephemeral in every sense of the word. On the net, the life or death of an idea is just a matter of energy.

    The Third Law of Ecology: Nature knows Best
    The premise being that by whatever means possible, the course of nature has perfected the stability of organic chemicals to the point where anything man-made should be treated with caution at the very least (this is why every single one of you has minute amounts of ScotchGard in your bloodstream right now. So far, no ill effects have been found, but 3M has stopped making the stuff recently, despite the fact that the industry is worth several billion dollars. Food for thought: if they acknowledge the possibility of adverse effects now, they are not criminally responsible for them by US law. Makes you wonder how much damage is possible, that would make the loss of billions preferable to compensation. For a related comment, see this [slashdot.org] post of mine). Unfortunately, there isn't a lot we can use here, because the internet is a creation of man to begin with. The most we can do is look for ways to turn linear, destructive patterns of information into cyclical events. OOP anybody?

    The Fourth Law of Ecology: There is No Such Thing as a Free Lunch
    Appropriated from economics, and blatantly obvious.

    -j

  • Come ON MAN! 4.5 GIGS of Pr0n in 15 seconds? Mmmmmmmmmmmmmmmmmm

    On a serious note though, why would we want to restrict the I2 to such a "Brave New Worldish" vision? You are entitled to your opinion, but your vision is quite disturbing. Why even bother? You forget that companies' ability to make money will eventually lower the price of connecting to it and of web space. What if a child wanted to take a virtual real-time tour through a Parisian museum, from his home in California? Woo... somebody needs their 2nd cup of coffee this morning.

    I am what I am..a sig.

  • by a.out ( 31606 ) on Tuesday August 29, 2000 @05:18AM (#819751)
    Anyone else out there remember the "September Syndrome", when all the freshmen at college/university first got their accounts? They would test the waters and often would be quick to flame or troll. This would be quickly corrected by the existing community members and the freshmen would be put in their place. In about a month, things would calm down until the next September. Well, the net has been looking like September for the last couple of years now. Every day of every month, September. Oh well. :)

  • NO!

    Don't let them fool you!

    They are taking our rights away one by one!

    hehehe, sorry. I'm tired and it is a slow Tuesday workday. I don't think my humor is coming out as blatantly as I desired it to.

    .mincus
  • by AdamHaun ( 43173 ) on Tuesday August 29, 2000 @05:39AM (#819753) Journal
    He has a point, though. It would be nice if there was some way to ensure that the people on the net respect the net. If you have to have a license for hunting, fishing, and driving, why not for I2? Admittedly, the above examples are threats to life, limb, and environment, but there's no reason we shouldn't try to protect our information sources too. If net access was seen as a privilege rather than a right...

    Just a few random thoughts.
  • I'd just like to speak up for the people you're essentially trying to censor right off I2. I got my first shell account 7 years ago so I'd say that qualifies me as an internet old-timer :o)

    Oh yeah, I'm 20... so that would have made me 13 at the time. I don't pretend to speak for a wide range of people, and I usually don't get involved in this kind of thing, but I think it's grossly unfair to blame everything that's wrong with the internet on those damn "teenage morons". It's disturbing to see how often that demographic gets attacked these days; it seems like whenever someone's pissed and there isn't a blatantly obvious problem source, it must be the fault of those damn kids. Remember, morons and idiots exist in every demographic, not just teenagers; a script kiddie, d00d, lAmeR or whatever could just as easily be your upstanding next-door neighbor.

    As for newbies, most genuinely want to learn and those that don't quickly go away. Give them a break from time to time, we were all newbies too once upon a time.

  • No, but there were looser (read: no) restrictions on who can be on the network. I2 has a rather stringent policy on who can be on and who can't. Please read the FAQ [internet2.edu] :-).


    --
  • I work as a sysadmin for the Swiss Post, and there are dozens of thousands of workstations (NT, Linux, BSD, MacOSXYZ, etc.) connected together on a damn fast network.
    When I have to install some software I just mount a remote disk (which could be 300 km away) and launch the exec from there.
    One day I also made a test of burning a CD during work hours. The data to be burnt were something like 50 km away.
    Believe it or not, it worked. All our machines (in this office) have 100Mb Ethernet and, whenever I download some stuff, I am sure the bottleneck is the hard disc.
    So, when I read about XGb/second I just wonder how much I'll have to spend on hardware (optical connections to disks, faster-than-light buses, etc.) to benefit from this powerup.
    BTW, NO: I don't want Internet 2 to be the fastest ever just because Internet 1 happens to be fast enough; I just want it to be free as in Free Speech and Free Software.
    (I don't mind about Free Beer but I would about Free Guinness)
    --
  • by Anonymous Coward
    And of course corporations who wish to email trade secrets.

    Dude, this is Slashdot. Trade secrets, copyright and patents are frowned upon here. Corporations wishing to trade secret information get no sympathy from anyone here. Anyone wishing to make money or to prevent information from being free is unwelcome around these parts.

    And people who wish to keep their credit card transactions free from people spying.

    What is the point of this, when most people are so careless with their credit card receipts? Most people dispose of their credit card receipts (which contain all the information necessary to make fraudulent transactions) without destroying them first, and so any dumpster-diving kiddie could effectively use their credit card without permission. As I said before, privacy zealots have an over-inflated sense of their self-worth. There are far easier (and less risky) ways of making money than trying to steal from an encryption fanatic's credit card.

  • by jilles ( 20976 ) on Tuesday August 29, 2000 @05:43AM (#819758) Homepage
    "Indiana University music students can now hear the performances associated with their course work on computer. IU, which has the largest music school in the nation, has digitised its entire music library. "

    Hehe, so it is already being used for sharing music.
  • In Canada, the company I work for just created a network for public traffic that matches the Internet2: 2.4Gbps from the west coast to our cross-border connection in Chicago. We already carry large amounts of video traffic too. Gig-E MAN rings feed the public too.
  • by Nexx ( 75873 ) on Tuesday August 29, 2000 @06:02AM (#819760)

    The question is, will they ever "roll it out" to beyond what it is now? I mean, sure, they use IPv6, sure, their backbones are probably an order of magnitude fatter on a per-host basis, but would "they" ever roll it out, or would the current IPv4-based Internet just migrate to IPv6 when the specs are "done", tunnelling some legacy IPv4-based traffic in a "4-bone", or doing some sort of weird IPv6-IPv4 NAT? Or will the current IPv4-based Internet plod on, NAT-ting everywhere (dear lord, I hope not)?

    Then again, when I run traceroute(1) everywhere, I almost always see a 10.x.y.z somewhere :-)
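    As an illustration of the tunnelling option, one transition trick that already exists is 6to4: it derives a whole IPv6 /48 from a single IPv4 address so that v6 islands can tunnel across the v4 Internet. A small sketch using a documentation address:

      import ipaddress

      v4 = ipaddress.IPv4Address("192.0.2.1")              # documentation address
      # 6to4 places the 32-bit v4 address right after the 2002::/16 prefix.
      prefix_addr = ipaddress.IPv6Address((0x2002 << 112) | (int(v4) << 80))
      prefix = ipaddress.IPv6Network(str(prefix_addr) + "/48")
      print(prefix)                                         # 2002:c000:201::/48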


    --
  • I guess that's true. The network guys have said that the I2 has actually been up to around 45% of capacity, whereas the commodity (normal) internet is already at 100% -- around midnight every night. They were talking about banning Napster simply because of the bandwidth usage. But like I said, I don't really see much advantage to me, joe schmoe.

    _____________________________________________________

  • kewl - you funding it?
  • DHCP supports anonymity by sharing (admittedly limited) IP addresses among large numbers of people.

    If the DHCP server keeps good enough logs (depending on how it is configured), then the IP address and time can be linked with the MAC address, which can then be traced back to your computer. And also, some places that use DHCP (including the college dorm where I live) have enough IPs that they can assign one *static* IP to every computer on the network.

    IPv4 works... now. We don't need IPv6... unless its advocates have a different goal, which IPv4 isn't meeting.

    IPv4 works just fine, but is running out of addresses. Hence the need for IPv6. IPv6 isn't about tracking people -- it's about keeping up with a rapidly expanding Internet.
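    A toy illustration of the logging point above (the lease records are invented for the example):

      from datetime import datetime

      # (ip, mac, lease_start, lease_end) -- record format invented for this sketch
      leases = [
          ("10.1.2.3", "00:a0:c9:12:34:56",
           datetime(2000, 8, 29, 1, 0), datetime(2000, 8, 29, 9, 0)),
      ]

      def who_had(ip, when):
          """Map an (IP, timestamp) pair back to the MAC that held the lease."""
          for lease_ip, mac, start, end in leases:
              if lease_ip == ip and start <= when <= end:
                  return mac
          return None

      print(who_had("10.1.2.3", datetime(2000, 8, 29, 3, 15)))  # 00:a0:c9:12:34:56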

    =================================
  • Via the new Web, an astronomer in Amsterdam can remotely manipulate a telescope, study a distant nebula, and then participate in an international videoconference to discuss his or her findings.

    GREAT - how PC - but what /I/ want to know is how fast will QIII CTF run.

    it's all about the apps - Inet2 means nothing if I can't get the advantage in MMORPGs or QIII and similar.

  • tuxedo-steve wrote: the Internet-2 network wasn't ever destined to become a public network - access would be restricted to academic bodies and such

    Erm, isn't that The Grid not Internet2? Or am I talking arse? If so then what is the difference between The Grid and Internet2?

    As I understood it, Internet2 was a set of protocols for a new high-speed backwards-compatible internet for use by anyone who could afford to hook up, whereas The Grid was an entirely separate new high-speed network which was strictly for academic/military use.

    --

  • 1. Something like Freenet (in the caching capability) is what is needed. People who want pr0n and sports news highlights will be able to get them close to themselves (say, for instance, if @Home had a Freenet node for every couple of blocks), and other, more interactive traffic will go across the net.
    What is really needed is seamless integration -- i.e. you go to a website, and check out a URL for a sportscenter swimsuit video -- and they send you a ... "forked redirect" (check A, if not A, check B, etc.) that sends you to a local Freenet node or ordered cache server to pick up the information (see the sketch after this comment).
    Even this little example is not perfect, though -- too limiting (and too "seamed" (!seamless)) -- smarter browsers (first encode the URL --> freenet key, then look. If it doesn't exist, go directly "on the net"). (segue into part 2...)

    2. I don't understand why people like to point to fixed routes as a solution. The less dynamic things get, the worse things are in case of failure, and, in a sense, the harder they are to figure out. DHCP, IPv6, routing protocols, Banyan's old NOS -- these things exist in fluid situations, and aren't hampered by failure, etc.
    A more "organic" solution to this is already happening -- proxying, caching, and Akamai-esque things. This stuff is pretty much transparent to the user (good), doesn't do any exclusive "pick and choose from that selection" and if stuff breaks, the network will adjust itself automagically. (i.e. the proxy's down, so go out to the server and get stuff directly instead!).

    Ultimately, the more we engineer our systems to be self regulating (with careful, well thought out protocols) the better off we are, and the easier the upgrade path.

    Yeah!

    willis/
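    Here is the kind of "forked redirect" fallback sketched in point 1 above, in Python; every URL is hypothetical:

      import urllib.request

      mirrors = [
          "http://freenet.local.example/swimsuit.mpg",   # nearby cache node
          "http://cache.isp.example/swimsuit.mpg",       # ISP-level cache
          "http://origin.example.com/swimsuit.mpg",      # the origin server
      ]

      def fetch_first_available(urls):
          """Try each copy in order of closeness; fall back on failure."""
          for url in urls:
              try:
                  with urllib.request.urlopen(url, timeout=5) as resp:
                      return resp.read()
              except OSError:
                  continue                 # that copy is unreachable; try the next
          raise RuntimeError("no copy of the object was reachable")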
  • There is actual math that supports this. I do not have my Discover magazine article conveniently in my sphincter to pull out and read, but I recall, along with the highway example, a physics demonstration of the principle where a stick (or something) was being supported by several rubber bands. And the researcher was able to make the stick rise (i.e., show it was supported _better_) when he cut some of the unnecessary supports. I apologize for my extremely vague description, but as counterintuitive as what you've seen reads, it's got some genuine examples that can be tested in the real world.
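    The effect being described is usually called Braess's paradox; a toy version of the textbook road network (the numbers are the standard classroom example, not from the Discover article) shows it directly:

      drivers = 4000.0

      # Two routes, Start->A->End and Start->B->End. Start->A and B->End are
      # congestible (flow/100 minutes); A->End and Start->B are fixed 45 minutes.
      def route_time(flow_on_congestible):
          return flow_on_congestible / 100 + 45

      before = route_time(drivers / 2)            # traffic splits evenly: 65 minutes

      # Now add a free shortcut A->B. The selfish equilibrium is everyone taking
      # Start->A->B->End, which uses both congestible links:
      after = drivers / 100 + drivers / 100       # 80 minutes for everybody

      print(before, after)                        # 65.0 80.0 -- extra capacity,
                                                  # slower trips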

  • Well, perhaps then we could suggest Kibo's [kibo.com] HappyNet idea be expanded to include the internet itself... Now where would /. end up? my guess is probably somewhere in the megabozo category...

    -GreenHell
  • We're getting ahead of ourselves. First, we need to restrict people's access to libraries. Currently, too many foolish, ignorant people have access to too much information. No good will come of that.

    We also need to license people for social interactions -- when people get together, they often exchange thoughts and ideas (many of these, no doubt, gained from the libraries!). From a careful study of history, we now know that this uncontrolled interaction usually leads to grave disaster for the ruling elite. Thus, for their own good, and ours, non-approved persons will be confined to their homes until they have proven they are capable of civilized behavior in groups.

    (my apologies to Jonathan Swift :)
    -----
    D. Fischer
  • Generally I've noticed that the latency over Internet2 is almost no different from what I get to commodity net sites. I'm at msu.edu, and traffic going to the west coast is usually better over the regular net! Here's a ping comparison of two Bay Area sites (Stanford and teamplay.net, Exodus-hosted):

    Pinging www.LB-A.stanford.edu [171.64.14.239] with 32 bytes of data:
    Reply from 171.64.14.239: bytes=32 time=70ms TTL=238
    Reply from 171.64.14.239: bytes=32 time=70ms TTL=238

    Pinging core.teamplay.net [216.33.28.138] with 32 bytes of data:
    Reply from 216.33.28.138: bytes=32 time=57ms TTL=243
    Reply from 216.33.28.138: bytes=32 time=56ms TTL=243

    I meet with exodus.net in Chicago with like 8ms of latency, then it's all on their network to Cali, while I'm 25ms to the Abilene node in Cleveland, then off to Stanford. While there are a lot of DSL/cable offerings (anet DSL in Chicago = best peering I've ever seen, Optimum Online = best cable modem on the east coast) that ping as good as I do or better, I don't see them pulling 5Mbps downloads :)
  • by Anonymous Coward
    I tend to think that there is a limit on the bandwidth you can use.

    When I was connected through a 14400 bps line, I almost always saturated the line. Now with 512 kbps DSL, I maybe saturate it 20% of the time (and yes, my pattern of use radically changed).

    Give me 1600x1200*3*120 bytes per second. That's basically the _maximum_ bandwidth between my computer and me. If the bandwidth could be that high for everyone, I don't see what you could do with more (well, I could ask for a little more, but not an order of magnitude more).

    Cheers,

    --fred
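    For reference, that figure works out to a lot more than a saturated DSL line (a reply further down makes the same point):

      bytes_per_second = 1600 * 1200 * 3 * 120    # 691,200,000 B/s
      print(bytes_per_second * 8 / 1e6)           # ~5529.6 Mbit/s, i.e. ~5.5 Gbit/s,
                                                  # roughly 10,000x a 512k DSL line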
  • "you'll be able to buy higher priority for your packets"

    How is this effectively different from the current situation: "free" ISP -> ISP -> DSL, cable modem -> T1 -> T3

    More $$ -> more bandwidth -> more priority

    You mention scary possibilities. What about the possibility that a bunch of teenagers' download of a pirated MP3 blocks research information from getting through? Oh wait, that's already happening.
    -----
    D. Fischer
  • I am guessing you haven't read up on I2 lately. Internet2 is the next round of major architecture changes that will be applied to the Net (as in the highway we all access today). The I2 changes will just be very high bandwidth and application of that bandwidth.

    "The primary goals of Internet2 are to:
    - Create a leading edge network capability for the national research community
    - Enable revolutionary Internet applications
    - Ensure the rapid transfer of new network services and applications to the broader Internet community" internet2.org [internet2.edu]

    I want a new reality -Chaswell
  • If they think they're going to improve the net with this plan, they are wrong. Look at the internet years ago, when it was just a bunch of government sites, educational institutions and a few large corporations. You had great bandwidth and you could search it by clicking on links. Now look how it has evolved into its current mess. Internet 2 sounds a lot like Internet 1 when it first started, with modern improvements. Give it ten years and it could evolve the same way.

    Not only that, but it might never evolve at all! Remember all of those old networks like Tymnet, DATAPAC and CISnet? They were also very similar but the general lack of freedom killed them. Internet 2 could face the same problem.

  • The biggest problem is that the internet is already too much of a standard.. it's easy to change standards if they're not too popular or used that much... Want proof of this? In some cases, we're still using the kermit protocol to download.. if anyone doesn't know, this is one of the first protocols written for the internet. So that leaves Internet2 to the scientific/idle rich category. By the way, has anyone done any real world benchmarking? As in, take a copy of the slashdot code, and see how it runs on internet2?
  • Atlanta is perhaps the first major American metropolitan area to have growth unlimited by natural barriers. For the Olympics, the traffic infrastructure was nearly doubled and from 1996 to 1997, Atlanta was a fairly pleasant city to drive in. But the infrastructure facilitated an enormous suburban expansion and, once again, Atlanta has severe congestion.

  • by The Queen ( 56621 ) on Tuesday August 29, 2000 @05:04AM (#819777) Homepage
    4.5 gigabytes of data.... on a 1000X Web, ...can be yours in just 15 seconds.

    Crap, don't let the RIAA and MPAA hear about this. ;-)

    The Divine Creatrix in a Mortal Shell that stays Crunchy in Milk
  • If this is true (which I don't doubt), what's going to stop people from keeping inet1 up? inet2 is looking to be an elitist type place, so imho fuck it. Let's fix the internet that's already in place before we up and decide to replace the whole thing.
  • Three paragraphs of non-sequitur followed by a falsehood. Wow. You have either not had your coffee this morning, or had too much.
    You don't give a nuclear reactor to a third-world country. This much is obvious, for a number of reasons. What's less obvious is that we give more than we need to ourselves...why not? It's only a matter of convenience. The simple answer: when we all take more than we need, everyone is shafted.
    When we created the telegraph to move information faster than by carrying pieces of paper with marks on them, was that "more than we need"?

    How about when we created the telephone to transmit information as sound instead of transcribing it to paper (and moving many times the bandwidth of the telegraph), was that "more than we need"?

    Now we're in an age of ADSL lines, cable modems and megabit satellite links. One DSL line can carry information equivalent to several voice channels, and cable modems crank at Ethernet speeds (albeit shared). It's all using existing infrastructure. When did using some of that become taking "more than we need"?

    Some hundreds of millions of years ago, plants learned how to conserve water and fend off deadly solar radiation. They came out of the seas and took over the land. When did they start taking more than they needed? Hell, it was there, and there was sunlight going to waste just like there's bandwidth going to waste in just about every fiber-optic strand in the world. We're not "wasting" anything by refusing to be limited to 300 BPS modems and 20 megabyte 14-inch hard drives; if anything, we are conserving by getting more and more out of less and less. This isn't waste or selfishness, it is the exact opposite.

    A PalmVx has enough storage capacity to keep track of all the things you will ever do for the rest of your life in text. Your 20 gig IBM Deskstar 75GXP has enough storage space to keep track of just about anything in the correct (read: simplest) format. When everyone has this data stored on a local drive, the situation isn't that big of a deal...the consequences are internalized. When we all share a fixed space, like the net, then there's a problem.
    [emphasis added] Maybe I'd like to use a Palm to do more than track what I want to do. Maybe I want to play games on it, read books (with pictures) on it, and listen to music on it. Maybe I want to use something like a Palm (and maybe a headset) to supplement or replace my personal stereo, my pager, cell phone, GPS, bicycle trip computer, and even my laptop machine. So I do more with less mass, bulk and energy; where's this taking more than I need? And since when is the Internet "a fixed space", anyway? It's grown by orders of magnitude in the last ten years, and more orders of magnitude are in the offing.
    There is no such thing as a fat or thin pipe. Take my Coke as an example.
    I'll take it unless you spit in it.
    Obviously, there are limitations on the acceptable width of the straw in the can. It has to be fat enough to allow passage of the soda (pop in Canada) with surface tension taken into effect. It has to be thin enough to fit in my mouth. Other than that, the straw's effectiveness depends on how hard one sucks.
    Har. The flow of Coke is limited by the atmospheric pressure and the viscosity of the fluid (Classic will flow slower than Diet). Once you have a full vacuum on your end of the straw (possible, considering the weakness of your arguments! ;-)), the Coke cannot flow any faster no matter how fast you can drink; the only way to get Coke faster is to have a fatter pipe. There is more than one thing you might want to push through a pipe, too. If you want to move the water for a block-full of houses, you need a fat pipe. If you tried moving it through a straw, three things would happen:
    1. The straw would explode due to the driving pressure exceeding the hoop strength of the plastic.
    2. If the straw were made of steel instead, the power requirements of the pump would quickly "take more than you needed" to move the water.
    3. Somewhere as pressures increased, the water would be delivered at boiling or hotter. The energy put in by the pump is dissipated as friction, and that heats both the water and the pipe.
    Sooner or later you need a fatter pipe, QED. Arguments to the contrary... suck. ;-)
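    The fatter-pipe point can even be put in numbers: for laminar flow, the Hagen-Poiseuille relation says throughput scales with the fourth power of the pipe radius, so no amount of extra suction competes with a wider pipe. A rough sketch with made-up dimensions:

      import math

      def flow(radius_m, dp_pa=100_000.0, mu=1.0e-3, length_m=0.2):
          """Hagen-Poiseuille: Q = pi * dP * r^4 / (8 * mu * L), laminar flow only."""
          return math.pi * dp_pa * radius_m ** 4 / (8 * mu * length_m)

      straw = flow(0.003)             # 3 mm straw, about 1 atm of "suck"
      main = flow(0.03)               # 3 cm pipe, same pressure difference
      print(main / straw)             # 10000.0 -- radius x10 => flow x10,000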
    --
  • The Grid doesn't really exist yet - it's a bundle of various technologies, including middleware, service brokers, and lots of quite specialised software, along with some standard protocols such as IPv6 et al, and no doubt some specialised ones.

    Internet2, by contrast, has little to do with host software per se - it's all about engineering the network for higher and more consistent performance (using QoS), making multicast work better, and getting IPv6 working in production mode (not all at the same time, necessarily).

    So, comparing the Grid to Internet2 is a bit like comparing GNOME to the Internet...

    Some good URLs on Grid computing are:

    http://www.gridcomputing.com/ - long list

    http://www.gridforum.org/ - various working groups, has list of other work under Related Initiatives.

    An excellent in-depth book is

    The Grid : Blueprint for a New Computing Infrastructure, by Ian Foster (Editor), Carl Kesselman (Editor)
    ISBN: 1558604758
    http://www.amazon.com/exec/obidos/ASIN/1558604758/

    The Grid will be a very impressive technology when fully realised, e.g. Grid-enabled apps could do things like real-time medical modelling and diagnosis using a bunch of remote supercomputing resources acquired dynamically on a computational market. Not to mention the gaming possibilities :)
  • by Anonymous Coward on Tuesday August 29, 2000 @05:06AM (#819781)

    Please, this is not Flamebait... but,

    How many of you think this will really evolve as an autonomous network not connected to Internet(1)? MS gave this a try with their MSN and failed. Isn't it better to keep replacing old HW/cables with new gradually and 'eternally'? After all, ARPANET's original total bandwidth is said to have been 56k. Theoretically we're still using it, but the bandwidth has increased.... a lot.. :)

    Anonymous Howard

  • How about a household of four people watching four different HDTV-quality streams, running local-logic search-bots and videoconferencing? With current bandwidth, it's pointless to offer services that depend on fatter pipes, so it's senseless to argue that "because there are no such services, we don't need more bandwidth". Build the thing, and people will use it.

    What if I want access to Mauna Kea's observations?

    If you don't want more bandwidth, stick with cans on a string. I'll take yours.

  • by mincus ( 7154 )
    Internet: Built by hackers for fun, adopted by the world because of its openness, rejected by business because of its openness.

    Internet2: Created to squash all forms of openness and make the internet 'own-able', like companies are trying to do with books, DVDs, and everything else in the world.

    Don't buy into it! We can make this internet better ourselves!

    .mincus
  • We're on Internet2 at U Rochester.

    [xm@jolt xm]$ traceroute backbone2.syr.edu
    traceroute to backbone2.syr.edu (128.230.165.4), 64 hops max, 40 byte packets
     1 resnet-tiernan-bbgw.utd.rochester.edu (128.151.85.250) 1.126 ms 1.10 ms 2.128 ms
     2 gilbert-resnet-bbgw-if.utd.rochester.edu (128.151.4.9) 1.909 ms 2.140 ms 2.480 ms
     3 annex1-to-annex5505-1.utd.rochester.edu (128.151.5.73) 106.170 ms 21.105 ms 2.76 ms
     4 syru-uofr1.nysernet.net (199.109.1.57) 4.655 ms 3.446 ms 4.681 ms
     5 128.230.249.2 (128.230.249.2) 6.32 ms 4.195 ms 4.222 ms
     6 backbone2.syr.edu (128.230.165.4) 6.210 ms^C

    I get 1.2 megabits to people at other Internet2 schools.
  • Here's a link:

    http://slashdot.org/articles/99/08/28/1823211.shtml
  • My school is one of the I2 schools. I guess we get great speeds to other I2 schools/institutions... however, this does not affect any students because the dorms are all wired with 10Mbit Ethernet. I asked our Network services guy if they were going to upgrade the dorms to 100Mbps and he said no. He said there were a couple of servers that were connected at 100 and may someday be connected at 1Gbps, but if the dorms are still limited to 10Mbps, I don't see any real advantage (to me) to the I2 at all. Some test.

    _____________________________________________________

  • There is no such thing as a fat or thin pipe. Take my Coke as an example. Obviously, there are limitations on the acceptable width of the straw in the can. It has to be fat enough to allow passage of the soda (pop in Canada) with surface tension taken into effect. It has to be thin enough to fit in my mouth. Other than that, the straw's effectiveness depends on how hard one sucks.

    and what about the beer/coke bong .. remember - pop a hole in the bottom, and pop the top - then the real bottleneck is your neck and your ability to chug. who needs straws? .. now if I'm always chugging cokes, you're gonna have a hard time taking a sip until I get sick, barf it up, and sell it back to you .. and hey! there's capitalistic ingenuity that should be rewarded!!

    You're absolutely right - the insistence on personal rights and freedoms stomps on the rights and freedoms of others. I see it in traffic every day over here .. mainly .. the depersonalization creates an avenue where relationships can be abused, and we can become comfortable doing it.

    "Each of you should look not only to your own interests, but also to the interests of others."

  • by Fervent ( 178271 ) on Tuesday August 29, 2000 @05:22AM (#819788)
    I think you have your definitions screwed up sir.

    Internet 1 was created by government employees and academics. Only after sufficient trial was it handed over to the public "hackers" (that is, if you accept the definition of hackers as public software commanders, and not the academics who put the system together). Now, it's been almost completely taken over by big business ($850 billion total sales last year).

    Internet 2 is again being created by academics in a much more open atmosphere. True, they are focusing more on broadband video and voice transfer, but nearly every protocol and standard they are using is available to the public and open-sourced (just not the hardware).

    If you're an academic, this has to be. You can't be researching something and have another professor across the country say "I already shelled out an algorithm for high-speed video streaming but I... uh... don't know if I want it getting out." (Corporate secrecy may penetrate the upper layers at some institutions with grants, but most pure academics will cite the simple pride of research as key.)

  • For sure, the advantage is for the university, but the examples cited in the article clearly target the client.

    On this note, I feel that the article is somewhat misleading, in that all the examples cited are probably taking advantage of optimized communication protocols and point-to-point communications that ONLY have fat pipes. While 180 universities might be hooked up to I2, this in no way means that bandwidth between I2 institutions beats non-I2 connections. This might be obvious (a chain is only as strong as...), but case-in-point:

    I ran distributed analysis tests for my research between Stanford and UIUC (both I2 sites) and between Stanford and IIT (IIT is non-I2) using CORBA (flame here, but we don't think CORBA was the major hindrance), with the idea that these were two physically comparable distances. For any given analysis, the Stanford-IIT run was faster than the Stanford-UIUC run. On the Stanford side, we were connected to I2 via 100Mbps lines. I'm not sure what the I2-UIUC server pipe was, but if you trace the traffic from Stanford to UIUC and to IIT, the numbers that come back show equivalent times from Stanford to the UIUC I2 router and from Stanford to the ultimate IIT server that hosted our services.
  • Oh please, don't make me out to be Amish, I'm posting, aren't I? Take the semantics of the statement as you like, but regardless, there is a huge difference between the judicious use of progress and the wanton abuse of it.

    And not enough coffee, by the way ;p

    -j doesn't backwash, but still won't give you his Coke. Diet.

  • !! I remember when people first started saying

    "now, it's always september on the net."

    anyone remember who said it first?

    .mincus
  • If only social justice were modeled on Philippians, huh. 2:4, is it not?

    -j

  • by stx23 ( 14942 ) on Tuesday August 29, 2000 @05:22AM (#819793) Homepage Journal
    Perhaps the advantage isn't intended for you, but your university. After all, if there are 10 of you pulling mp3s from napster at 10Mbps, the link to the outside world will need to be 100Mbps to deal with it. I had assumed I2 was a technology intended to strengthen the backbone, not provide ultra fast connectivity to the client.
  • With all the geeks out there, might there be a way to create our own internet, so that we didn't have to deal with all the BS? I mean, there is enough brain power to do it ourselves if we put our heads to it.
  • by Jon_Sy ( 225913 ) <big_guy_@NoSpam.hotmail.com> on Tuesday August 29, 2000 @05:44AM (#819795)
    Does anybody see the parallel between internet traffic and road traffic? (Yes, I'm being a bit facetious, may I add, before you slap me with "Information Superhighway" or a similar phrase.)

    More than 30 years ago, an author by the name of Helen Leavitt argued that expanding roads led to MORE traffic, not less. The argument was fairly simple...sure, you may get a little more breathing room for a while, but that doesn't address the real problem: too many people are driving on this road. Having more space leads to, well, more people driving on the road. (Leavitt, Superhighway-Superhoax. Must reading for the next generation of civil engineers...some of the fluffier tree-hugging ones have taken the cause to heart at this [foe.org] site).

    If you stop to think about it, it makes a lot of sense.
    Now I'm going to continue my line of thought, assuming you follow with the whole "more road = more road rage" theorem. (For those of you who still aren't convinced, either you're an old-school civil engineer, in which case there's no hope for you, or you're not, in which case you'll be swayed by case studies like the city of Portland. In the '60s and '70s, Portland was having huge traffic problems. To solve the situation, they demolished a downtown freeway.) The question is: does the same logic apply to the internet?

    Obviously, with a larger backbone you're going to see both a decrease in transfer time and an increase in usage. But is the decrease a temporary effect? I have a lot of friends who have seen their broadband service deteriorate to the point where they can get their kicks faster on a free isp. I'm sure you do too. Coincidence? Hardly...

    The key difference between real traffic and internet traffic is that physical space is not at a premium. In the real world, land is the bottleneck factor. On the Web, the difference between 5 lanes and 50 lanes is also real, just not in the same way it is in your suburb. What does that mean? There is a greater allowance for 'lane width' patches on the Net... this still doesn't change the fact that to solve information transfer problems, we need to come up with better ways to shift packets, with better cars if you will, rather than expanding the avenues for that data infinitely (a solution doomed to failure because there will always be more data than road. How many of you thought your X-gigabyte hard drive was enough space, only to find it filled yet again?).

    What are these solutions? I don't know, I'm (almost) an electrical engineer, not a magician... try sifting through Jane Jacobs or Peter Calthorpe or some other engineering conceptualists for answers... it's more likely that a new wave of net design theorists will need to step forward and shed some light on the rampant growth, kind of like hacking through jungle foliage with a machete so we can actually get somewhere.

    -j

  • When you have the larger global/national carriers putting in 10Gig links (OC192) to cope with the current demands, and switch vendors building switches that can handle multiple 10Gig links as a single path (read Multi-Link PPP writ LARGE), running a backbone with some tiny little OC48s (2.5 Gig) doesn't seem all that impressive. Granted, it was the technology tester that helped us get to where we are now, but notice that I2 isn't getting the 192 links; the 'Real' Internet is.
  • That's just the membership policy for UCAID, not the Internet2 at large. Okay currently UCAID *is* the Internet2 at large, but I have a hard time believing that when they roll this out, they expect the entire Internet2 to consist of a single organisation.
  • If you have to have a license for hunting, fishing, and driving, why not for I2?
    What happens to the ability of people to use the Internet for speech where anonymity is important (whistleblowing, unpopular political causes, etc.) when every posting requires one's license to be checked? There isn't an infrastructure for this, and any such infrastructure could and would be abused to silence critics of the people in power. When you have examples of the lack of freedom in front of you, and the importance of a free Internet for tearing down the oppressive regimes (Iran, China, N. Korea), I'm shocked that you would even consider such a thing.

    If institutions have codes of conduct for access to I2 (like Usenet2), that's fine; people could always set up Internet3, Internet4, etc. ad infinitum if they didn't like the rules. But it shouldn't be a government deal.
    --

  • First off, to the people who keep saying it: the purpose of I2 is not your personal pleasure. If you're doing research, go to Sudikof, which is 100Mbps.

    That said, there is a noticeable difference in the download speeds that you will see between I2 sites and sites on the commercial Internet, even in your dorm room. You mentioned in another post that the school's link to I2 is generally under 50% utilization and that the commercial link is often saturated. The campus LAN is rarely saturated, so the bottleneck when accessing commercial sites is usually at the connection to the outside world (it was a 12Mbps link the last time I checked, but as I already mentioned, it's often completely saturated). So what I'm saying is that I2 isn't for the MP3 and porn downloading pleasure of students, but it does speed things up for a lot of them.
  • 2 questions:

    No Debian mirror? ;-)

    How do those of us lucky enough to be at one of the I2 sites take advantage of this network, be it for accessing a superfast linux mirror or for any of the other (relatively few) services running on the network?

    Daniel

    ---

  • You're wasting your time. The very people you're speaking to are 90 percent of the people that should be banned from the internet for foul language, sexual deviance and dangerous views that, in my opinion, represent a serious threat to national security.

    This is another example of the global fall from grace and general neglect of Christian values.

    The amount of cursing here (eg cunt fuck piss wanker shitter motherfucker etc etc) is evidence alone of the moral slide that is perpetuated by the internet.

    I for one hope that access to the Internet2 will be possible via carefully vetted and monitored ISPs, where users are held fully responsible by law for what they view, say and think.
  • The logic behind this seems pretty flawed to me. Roads cause cars? I really don't think that that is the case.

    What is happening is that the cost (in both terms of time and money)of moving a packet is being reduced and as any economist can tell you all uses for that packet which are at least as valuable as the cost to transmit the packet will be used. The problem with this is that as the cost of moving a packet tends toward zero the value of the average packet will also tend toward zero.

    When the Internet is expensive, it will only be used for its most valuable purposes. Right now Internet2 is very expensive and is therefore only used for valuable purposes such as scientific research. When this technology becomes cheap and generally available, it will be used for the most worthless things you can imagine.
    ________________
    They're - They are
    Their - Belonging to them

  • by Anonymous Coward
    this is where you guys missed it. We are on the cusp of virtually unlimited bandwidth that will be dirt cheap (within 5 years). It's called the economy of light; look it up on your favorite search engine. The Internet is going all optical, capacity in the backbone is not an issue anymore, and that capacity is now getting pushed out to the edges (tier 2/tier 3 providers). The last mile (getting there with VDSL and 30Mbps cable modems) and then the last inch (wireless home LANs) are all that is left. The last bottleneck will be the servers and end stations.
  • The important part is consistency. QoS is the next big thing. We don't need incredible speeds; we need a true megabit/s to anywhere. To achieve this, backbones have to be very fast, sure, but the end user's needs are much smaller. How many people live right on a highway? Right now, most servers deliver broadband content at barely 0.5 Mbit/s. And so far, Internet2 is not much faster than the commercial one, only less populated.
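    For what it's worth, here's a minimal sketch (mine, not anything Internet2-specific) of the application end of QoS: marking a socket's traffic with a DiffServ codepoint via the standard IP_TOS socket option on a POSIX system. The EF codepoint below is just an illustrative choice, and whether any router along the path honours the marking depends entirely on the network's policy.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        /* UDP socket; marking works the same way for TCP */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) {
            perror("socket");
            return 1;
        }

        /* Expedited Forwarding: DSCP 46, shifted into the old TOS byte (0xb8) */
        int tos = 0xb8;
        if (setsockopt(s, IPPROTO_IP, IP_TOS, &tos, sizeof tos) < 0) {
            perror("setsockopt(IP_TOS)");
            return 1;
        }

        /* Anything sent on s now carries the DS codepoint */
        return 0;
    }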
  • Yeah, well, the downside is that the fastest download I have ever seen was from my roommate's computer, at 600Kbps. The fastest from the Internet was 300Kbps. My cable modem ($40/month) is faster. So I can complain all I want.

    _____________________________________________________

  • What we really need is some equivalent to mass transit -- cheap, fairly reliable, and fixed in its routes. I'm thinking along the lines of the services that AOL, CompuServe, and the like offer: a subset of the Internet's full content, bundled together and distributed to the (probably large) fraction of Internet users who would seldom go anywhere but those major sites. Individuals can pick and choose from that selection, but must use another service to get access to sites that aren't included.

    We may actually start to see this happen as mobile net access becomes more common. Since there's a definite limit to what kinds of information will be useful (or usable) to someone working on a 300-pixel-square screen and a small keypad, moving the sites those people are most likely to use onto a separate, but linked, "cache" network (think Akamai on a different protocol or port) could help ease the burden on the rest of the network.

    Sorry, not to be pedantic, but what you just described is 5 529mbps, far in excess of the DSL line you say you can't saturate.

    I think the main issue with your "inability" to saturate your DSL is one of bottlenecks: with an analogue modem connection, your modem is the bottleneck; with DSL, some upstream provider somewhere is.


    --
    Oops - you are right, i.e. most of Abilene is on IPv4, not IPv6. However, there is a 'toy backbone' of 2 core routers and 2 campus routers running IPv6, according to the Abilene IPv6 presentation at http://www.ipv6forum.com/navbar/globalsummit/slides/html/michael.lambert/sld021.htm

    vBNS, the other Internet2 backbone, also has a similar 4-router configuration, though in this case the routers are all core type routers, serving Chicago, San Francisco, Maryland, etc.

    Both backbone teams seem to be in 'experiment with IPv6' mode, no doubt due to the learning curve and scarcity of routers that actually support IPv6.
  • by Cato ( 8296 ) on Tuesday August 29, 2000 @08:01AM (#819820)
    Actually, not all of Internet2 is IPv6 - Abilene is, I think, but some of the other testing (e.g. the Qbone for QoS) is still on IPv4, for logistical reasons.

    There's been lots of work on migration from IPv4 to IPv6 - or more correctly, coexistence, since it's quite possible IPv4 will never disappear completely, just like DOS... The details are fairly complex, but there are various tunnelling schemes (some including automatic tunnel setup as required) as well as protocol translators that let an IPv6 domain talk to IPv4 land via (you guessed it) something like a NAT.

    In time, hopefully, the IPv6 domains will get larger and larger and gateway directly to each other - the 6bone, which is an international IPv6 network, is currently a mixture of tunnels over IPv4, and some 'real' links that are native IPv6. There are even ISPs that have rolled out native IPv6 service, e.g. NTT is one that has done quite a lot in this area.

    IPv6 is particularly useful to Asia and other non-US/European regions, which didn't get much IPv4 address allocation and now really need the address space. It's also important for the massive mobile Internet roll-outs that are happening over the next few years. Just as soon as Microsoft, Cisco and others start shipping IPv6 as standard (quite soon now) it will have a chance of taking over, though it will take anywhere from 5-10 years IMO.
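    To make the coexistence point concrete, here's a minimal sketch of what it looks like from the application side: resolve a name with the address family left unspecified and connect to whatever comes back, so the same client works whether the server answers with an AAAA (IPv6) or an A (IPv4) record. The hostname is just a placeholder, not a real I2 or 6bone service.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
        struct addrinfo hints, *res, *p;
        int s = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6, whichever the DNS offers */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("www.example.com", "http", &hints, &res) != 0) {
            fprintf(stderr, "lookup failed\n");
            return 1;
        }

        /* Try each returned address in turn; the first one that connects wins */
        for (p = res; p != NULL; p = p->ai_next) {
            s = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (s < 0)
                continue;
            if (connect(s, p->ai_addr, p->ai_addrlen) == 0)
                break;
            close(s);
            s = -1;
        }
        freeaddrinfo(res);

        if (s < 0) {
            fprintf(stderr, "could not connect\n");
            return 1;
        }
        printf("connected, family-agnostic\n");
        close(s);
        return 0;
    }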
  • How do we know (and ensure) that the FBI (or the NSA or Echelon or whoever) hasn't "requested" that "certain features" be built into Internet2?
    --
  • by jaa ( 22623 )
    With the presidential election fast approaching, I'm surprised Al has time to tinker with I2.
  • by IGnatius T Foobar ( 4328 ) on Tuesday August 29, 2000 @07:00AM (#819832) Homepage Journal
    Is "replacing" the Internet a good idea? You can bet that if the Internet is going to be "completely overhauled" then they're going to "correct" the "mistakes" that were made with Internet 1 -- namely, that pesky little de-centralization "bug" that prevents Big Government and Big Business from exercising tight control over the end-user experience. Internet 2 will have wiretapping and censorship hooks installed at every router and gateway. Internet 2 will require a registered, privileged connection if you want to run a server of any type. Internet 2 will have draconian TOS that ensures that all users will be the tame sheep that Big Government and Big Business wants us to be.

    Don't moderate this as 'funny' -- I'm dead serious.
    --
    Thank you. I didn't know that IPv6 is actually in use right now.
    --

  • I'm afraid that your sig is no longer standard C. Implicit int and implicit function declaration are gone. Your sig would have to be something like:

    #include <stdio.h>
    int main(int O,char**a){putchar(10>4*O?79:10)&&main(2+O,0);}

    Oh well. At least they've got the implicit return 0 rule in there :)
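    For anyone wondering what those two rules actually change, a throwaway illustration (mine, not the original sig):

    /* Pre-C99 compilers accept this: main gets an implicit int return type,
       and putchar gets an implicit declaration even without <stdio.h>. */
    main() { putchar(33); }

    /* Under C99 both shortcuts are gone; the same program has to spell them out
       (and, thanks to the implicit return 0 rule, no return statement is needed): */
    #include <stdio.h>
    int main(void) { putchar(33); }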

  • ...and have to constantly 'carry' your license while operating a computer on the net? Hmm...

    That raises the question (though it has also come up in other discussions of network theory and design): Where do we draw the line between reliability and performance of a network, and the privacy of its users?

    In a completely anonymous system, no one can be tracked down and persecuted, whether they are a harmless /.'er or an international terrorist or script kiddie. On the other hand, a network with a unique ID for every device and individual lets spammers and kiddie porn peddlers get blocked, but also lets the gov't, or anti-abortion activists, or your crazy ex find out who and where you are.

    So, what do we do? Continue with the awkward practice of a partially anonymous network and optional, somewhat reliable authentication? Or do we move further towards one end of the spectrum or the other?

  • by tuxedo-steve ( 33545 ) on Tuesday August 29, 2000 @05:11AM (#819841)
    According to an article I read a couple of years ago, the Internet-2 network wasn't ever destined to become a public network - access would be restricted to academic bodies and such, partially in order to restrict the bloating and commercialisation that happened to the existing Internet. As such, it's not really necessary for it to be connected to the Internet(1) in order for it to flourish, as an earlier comment suggested - it would flourish in its own way, quality rather than quantity.
  • Please no. I don't want to be walking along the 'virtual internet' boulevard and accidentally stumble into the trailer park that is Angelfire, Xoom and Geocities.

    Agh! Please no more 3d banner ads or 3d pictures of your Mom's cat!!! ACK!

    --

  • ...the stabilizing factor is Slashdot, ...and countless other professional or topical weblogs, discussion boards, MU*s, etc. (Sorry, but the home-team-ego-trip thing bugs me after a while.)

    I think we've already begun to see some of the same "suburban exodus" you speak of, though, in the form of AOL, MSN, et al. -- they're a kinder, gentler, easier-to-use Internet, without all the headaches of the real thing. My grandparents, family friends, etc., won't even think about using a "real" ISP, and are more than willing to take the hit in performance, cost, and availability of information that the mega-services require.

    So, dear /.'ers, a question: Do we want to keep the Net together, (impose growth boundaries, etc.) or should we allow those who lean that way to leave for the "'burbs," and deal with the leaner, meaner Net they leave behind?

  • What are you talking about? The first internet was built as an academic network (not to mention the whole DARPA thing). This one is too.

    --

Say "twenty-three-skiddoo" to logout.

Working...