The Internet

Ethernet at 10 Gbps

An anonymous reader writes "This article talks about 10 Gigabit Ethernet and asks, 'But just how much data can a person consume?' At work, we're in the middle of a major project to re-architect our core application platform so that the different systems can be decoupled and hosted separately. The legacy design implicitly relies on the systems being on the same LAN because of bandwidth-expensive operations (e.g., database replication). Having this much bandwidth would change the way we design. What would you do with this much bandwidth?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • This would be a boon for HD video workflows. I should think it would be attractive to companies like Pixar.
  • Hmm... (Score:5, Funny)

    by ravenspear ( 756059 ) on Sunday July 25, 2004 @01:02AM (#9792665)
    "What would you do with this much bandwidth?"

    Check out more unusual positions.
  • by AJYeary ( 766010 )
    Build an entire slashdot-proof network!
  • What would I do? (Score:5, Interesting)

    by Anonvmous Coward ( 589068 ) on Sunday July 25, 2004 @01:05AM (#9792681)
    The company I used to work for was sending very high resolution images from multiple cameras, uncompressed, from one unit to another to perform analytical operations on them. I think they managed to work at a gigabit, but 10 would be much nicer for them.

    What would Joe Sixpack do with it? I'm not sure at the moment. Thing is, since we're working within our limitations today, it's hard to conceive of what use it'd be. However, what happens when it becomes commonplace? It does open doors. Imagine if cable companies traded in coax for ethernet. They could easily send uncompressed HDTV. That'd be pretty slick.
    • by Anonymous Coward on Sunday July 25, 2004 @01:14AM (#9792738)
      "What would Joe Sixpack do with it? I'm not sure at the moment. Thing is, since we're working within our limitations today it's hard to concieve of whta use it'd be."

      The Goatse.cx experience in holographic, 5.1 surround-sound, smello-tactile-vision.
    • by nine-times ( 778537 ) <nine.times@gmail.com> on Sunday July 25, 2004 @01:21AM (#9792781) Homepage
      Thing is, since we're working within our limitations today, it's hard to conceive of what use it'd be.

      Isn't that always the way? I remember having a 20 MHz IBM PS/2 and wondering "How am I going to use all this power?" And the 30 MB hard drive: how would I ever use all that space?

      It seems like when we have the capabilities, we find something to do with the extra. HDTV sounds probable, and more bandwidth can only help working over networks on a mass scale (remote home folders and roaming profiles, VNC/Citrix), but you never know. When processors were getting to the 1 GHz point, a bunch of industry analysts were predicting "Now that we have enough power to make working speech-recognition software, we can finally ditch those keyboards!" Yeah, right.

      The big concern is, with the extra bandwidth, will Microsoft see this as an opportunity to release new, extra-inefficient network protocols?

      • Patches. (Score:3, Funny)

        by empaler ( 130732 )
        Several hundred megabyte patches.

        Oh.
      • >...will Microsoft see this as an opportunity to release new, extra-inefficient network protocols?

        Yes, every packet will contain an Easter egg flight simulator.
      • Re:What would I do? (Score:3, Interesting)

        by Pieroxy ( 222434 )
        Forget about HDTV dude! It's already taking so much freaking time when I try to save my 600 dpi US letter image that I just scanned.

    Animated movies! Those are hogs! Even with today's DivX codecs, I can't smoothly play high-resolution video over the network (well, it's still only 720x576, but still, that's not that high).
    • The Copenhagen Metro (Score:3, Informative)

      by empaler ( 130732 )
      Had some pretty slick security cams installed in them from the beginning (~3-4 years ago) - but they couldn't use them. Why? Not enough bandwidth to send the images uncompressed. Which was what they had set them up to do. Solution? Turn off cameras. Wait a few years for more funding.
    • I'm still waiting for decent DSL, since I'm on 30-year-old buried POTS wiring that's 5-plus miles away. Fiber to the terminal point will not happen here before hell freezes over, since the Baby Bells are not spending that kind of money.

      However, with that kind of bandwidth to the internet, I could set up some homebrew web sites, telecommute to work, and go back to (online) school all at the same time.

      I hate to be repetitious, but that kind of infrastructure would allow some really great collaborative (Beowulf?) c
  • by 00zero ( 792723 )
    10 Gb is ridiculous. 640K should be enough for anyone.

    good political satire [the-torch.com]

  • When your network pushes over 1 gigabyte/sec, diskless workstations become a much more interesting possibility.

    Typical desktops of the past few years see roughly 25 megabytes/sec of sustained disk throughput (more for SCSI and more recent ATA models). A switched 1 gigabyte/sec network could easily and transparently support 25 remote drives virtually indistinguishable from local storage (a rough sizing sketch follows this thread).

    • Diskless workstations aren't.

      They are just (ab)using the disks of the servers. How Uber Are Your Servers(tm)? Show me a server that can sustain that 1 gigabyte a sec disk access to support those workstations... :p
      • Large Ram disk works wonders for a server that is handling 25 systems.
      • It is not a problem at all if most of your clients use the same, relatively small (and thus cacheable) set of files. For example, a herd of accounting workstations may need access to some s/w package. A diskless client should not need to run 300 different applications all the time; these are generally single-purpose boxes. The more RAM you throw in, the more universal they become. Besides, the disk I/O is not that frequent these days, once you have your app loaded.
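
    A rough sizing sketch in Python (not from the original posts), using the 25 MB/sec per-desktop figure above and ignoring protocol overhead, caching, and server-side contention:

      # Back-of-the-envelope fan-out for diskless workstations (sketch only).
      network_mb_s = 1000          # the parent's "1 gigabyte/sec" switched network
      per_drive_mb_s = 25          # sustained throughput of a typical desktop ATA disk

      max_drives = network_mb_s // per_drive_mb_s
      print(f"Remote drives served at full local-disk speed: {max_drives}")              # 40
      print(f"Headroom left with 25 drives: {network_mb_s - 25 * per_drive_mb_s} MB/s")  # 375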
  • I worked for a medical imaging company and they would use it.

    They are using gigabit already and you can see the slowdown. Simply put: a couple hundred 100 MB+ x-rays to a single box, multiplied by however many boxes the hospital has, and 10 gigabit is nice.

    The problem is not having enough RAM, and with a 4 GB limit on workstation OSes for the most part, this amount of bandwidth could get funky.
  • by weston ( 16146 ) <westonsd@@@canncentral...org> on Sunday July 25, 2004 @01:11AM (#9792709) Homepage
    But just how much data can a person consume?

    If I was going under the knife remotely [wustl.edu], I'd want the surgeon to have as much bandwidth as possible (and very, very, very low latency).

    • If I was going under the knife remotely, I'd want the surgeon to have as much bandwidth as possible (and very, very, very low latency).

      Instead of very low latency, I would prefer no lost packets and *smooth* motion, and not that jagged back and forth you sometimes get! Ouch!

    • by MP3Chuck ( 652277 ) on Sunday July 25, 2004 @02:37AM (#9793092) Homepage Journal
      Imagine a malpractice lawsuit!

      Exhibit A: Surgery Log
      [DR]Surgeon opened Xx[Patient]xX's abdomen with a scalpel.
      [DR]Surgeon punctures Xx[Patient]xX's stomach with forceps.
      Xx[Patient]xX: OMGWTF??!!
      [DR]Assistant: ROFL PWNED!!1
      [DR]Surgeon: STFU N00B i ping 350
      Xx[Patient]xX: w/e
  • by z0ink ( 572154 )
    How about setup a digital broadcast network in my house?
  • With 10 gigabit LAN, the bottleneck won't be the LAN. It will be your servers. Their I/O busses, disk systems etc.

    Even at 1 gigabit, usually the bottleneck is elsewhere.

    10 gigabits is roughly 1.25 gigabytes/sec. Considering that the classic PCI bus tops out around 133 MB/sec, and even 64-bit/66 MHz PCI-X only manages about 533 MB/sec... Heck, the memory bus of my brand new system is only about 1 gigabyte a second.
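
    A rough comparison sketch in Python (illustrative only; theoretical peak numbers, no protocol or bus-arbitration overhead) of the 10 GbE line rate against the buses mentioned above:

      def bus_mb_s(width_bits, clock_mhz):
          # Peak throughput of a parallel bus: width (bytes) x clock (MHz) -> MB/s
          return width_bits / 8 * clock_mhz

      ten_gbe_mb_s = 10_000 / 8          # 10 Gbps -> 1250 MB/s

      print(f"10 GbE line rate  : {ten_gbe_mb_s:6.0f} MB/s")
      print(f"PCI 32-bit/33 MHz : {bus_mb_s(32, 33):6.0f} MB/s")    # ~132
      print(f"PCI-X 64/66 MHz   : {bus_mb_s(64, 66):6.0f} MB/s")    # ~528
      print(f"PCI-X 64/133 MHz  : {bus_mb_s(64, 133):6.0f} MB/s")   # ~1064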
  • "porn. Lots of porn" /Neo
  • Holo-Porn (Score:2, Funny)

    by Hallowed ( 229057 )
    That might be just enough bandwidth to get a life-like signal to the holographic projector!
  • by gusnz ( 455113 ) on Sunday July 25, 2004 @01:16AM (#9792747) Homepage
    OK, so for stuff like streaming MP3s and so forth, this is a little overkill for the current style of usage. However, where I think this will come in useful is for stuff like remote disk and memory access over IP.

    With a 10G LAN, you'd be able to come up with a great distributed computer system (e.g. for compiling software). IIRC protocols are in the works now for native-ish memory access over networks, turning a network into one huge computer, and you can already access remote disks with the right software. Imagine the simultaneous distributed encoding of several HDTV streams to redundant archives on several different computers, and you'll probably find that more bandwidth = better.

    So yeah, there'll definitely be possibilities for this sort of stuff, even if it is only as a base requirement for the post-Longhorn Windows version :).
    • We're going to run out of storage space long before we can really use that much bandwidth to do the things you're describing. P2P networks will essentially become the de facto method of archiving things like HDTV streams or anything else that is large & not confidential.

      I also imagine that we'll discover just how stable our computers, NICs & their drivers are. My win2k box (which is way past due for a format/reinstall) tends to bluescreen when pulling anything around 400-500 KB/s for any significant pe

  • In short, I'd have to replace all my NICs and switches. That sucks, considering that we only _recently_ made the move from 100 Mbit to 1 Gbit on the LAN, with the steady 10 Mbit uplink to the Internet.

    The major problem (today) with 10 Gbit? None of the sub-systems could handle the bandwidth. The absolutely rockin' stations with SCSI Ultra-320 (like my Mac @ home, for example :) simply couldn't handle it, at either the bus level or the hard drive level. So in addition to replacing NICs and switches we'd be completely
  • Seems like this would be useful for people trying to build clusters with commodity hardware.
  • by erice ( 13380 ) on Sunday July 25, 2004 @01:19AM (#9792765) Homepage
    For distributing intermediate results, I don't imagine there is such a thing as too fast.

    While there are certainly applications that don't need to communicate that fast, more bandwidth means more algorithms can become practical.

    It's not like you can use it to download porn, unless the action is happening in the next room. This is not a WAN technology.
  • NC-PC-NC (Score:5, Insightful)

    by basking2 ( 233941 ) on Sunday July 25, 2004 @01:20AM (#9792773) Homepage

    So, we used to have little dumb terminals that talked to the big smart backend. Then computers became cheaper and we had Personal Computers, but we have to manage and distribute all these updates, and it's a real pain, and it sometimes destroys your computer during the upgrade/install process. Now we can swing the pendulum back towards the Network Computer a little more.

    This isn't a new idea. Software companies like MS would love to sell you a subscription to MS Office which you renew and they in turn patch and maintain the software on your company's server or on the MS servers. It's a neat idea for sure. Companies like Novell have made some interesting claims about Network Computers.

    There is also the whole Plan9 [bell-labs.com] type of mentality too.

  • by overshoot ( 39700 ) on Sunday July 25, 2004 @01:21AM (#9792788)
    It's the latency. No matter what your bandwidth may be, some tasks (e.g. file servers) need to be "close" to keep latency from being nasty.

    "Close" applies both in physical distance (I have to count picoseconds for the kind of stuff I do) and in network distance, since every router adds considerably.

    For some jobs (backup is a classic) latency is relatively tolerable. However, even for those you have to watch out because one error can cause the whole process to back up for retries. Ten to the minus fifteen BER sounds good until you look at what it can do to your throughput in a long-latency environment.

    • by kasperd ( 592156 ) on Sunday July 25, 2004 @02:26AM (#9793049) Homepage Journal
      I have to count picoseconds for the kind of stuff I do

      Unless you are working with individual gates inside a chip, I doubt picoseconds really matters. On ethernet we are certainly not talking picoseconds. We are still limited by the speed of light, so it would take the signal 100 picoseconds just to get through the RJ45 connector. With a 1.5m ethernet cable there will be at least 10 nanoseconds of roundtrip time, because that is the time it takes light to travel 3m.
      • by Some Dumbass... ( 192298 ) on Sunday July 25, 2004 @09:55AM (#9794109)
        I have to count picoseconds for the kind of stuff I do

        Unless you are working with individual gates inside a chip, I doubt picoseconds really matters.


        I think you're missing something. If the cabling adds a constant delay to any times this guy's measuring, then he can still measure times in picoseconds (assuming his timer is accurate enough, of course). The fact that network cabling would add nanoseconds to a recorded time is irrelevant. Just as long as it doesn't add a variable delay (I wouldn't recommend doing this timing through any sort of switch or router, for example).

        Not that this guy is necessarily using ethernet for what he's doing. Note that he didn't actually say that -- he just said that you had to be close for the kind of stuff he does.

        One possibility is that the guy's a physicist working with a particle detector. He could be talking about detecting the exact timing of the decay of various particles. If these decays occur on the order of picoseconds, and his equipment can accurately keep time in picoseconds, then the fact that the cabling adds, say, 5ns to all of the measured times is no big deal. Just subtract 5ns from everything. That's good enough to get the relative times of all the measured events, e.g. the amount of time between the detection of emissions created by the initial collision (and thus presumably particle creation) and the decay of the various particles.
  • We have several servers at work that tar.bz2 themselves up nightly and then scp the large .tar.bz2 file over to another server connected to an 8-tape autoloader. All the servers go through a copper Cisco gigabit switch, and even at gigabit speeds you are topping out at about 23 megabytes/sec; transferring a 20 GB+ file at this speed still takes several minutes. The servers all use RAID 0 and can read faster than 23 megabytes/sec, so the quicker it can be transferred to the tape-connected backup server the bet
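
    A quick back-of-the-envelope check in Python of that backup window (a sketch, not from the post; it assumes the disks and the tape autoloader could actually keep up at the higher rate):

      file_size_mb = 20 * 1024         # a 20 GB tar.bz2
      observed_mb_s = 23               # what the poster sees over copper gigabit
      ten_gbe_mb_s = 10_000 / 8        # theoretical 10 GbE line rate, ~1250 MB/s

      def minutes(size_mb, rate_mb_s):
          return size_mb / rate_mb_s / 60

      print(f"At 23 MB/s : {minutes(file_size_mb, observed_mb_s):5.1f} min")   # ~14.8
      print(f"At 10 GbE  : {minutes(file_size_mb, ten_gbe_mb_s):5.1f} min")    # ~0.3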
  • Lots (Score:4, Interesting)

    by BrookHarty ( 9119 ) on Sunday July 25, 2004 @01:23AM (#9792796) Journal
    We use lots of shared drives, remote desktop applications, X traffic, moving core files, database dumps, email with very large attachments (exchange to boot).

    We migrated to 100 meg, it was like night and day, and we still need more. We finally got 1 gig to IT's network, and it's still too slow to push files with lots of users.

    We have a burstable OC192 to our 2nd remote datacenter, OC48/12's to the smaller datacenters. But this is for production networks that need bandwidth, not desktop usage.

    Also, my buddy in Japan just told me he got 100 Meg DSL; the stuff you can do when bandwidth isn't a concern. Already Internet TV stations are popping up there, amazing. Can't wait for this to catch on in the US. I just upgraded to 6M DSL from Speakeasy, and it's too fast for FilePlanet.

    Speed kills :)
  • on world of warcraft, using a single router to hook several dozens of PCs to the net.

    You won't actually have to control the orcs, the mere sight of them on your screen will initiate instant lag-death for people with lesser video cards.

  • Why is it that network and modem speeds are measured in bits per second, but hard disk space and ISP download limits are in BYTES?

    I want to know: at 10 Gbps, how many megs per hour is that? How long would it take to blow my ISP download limit (4 GB) or fill up the hard disk (120 GB)? (A quick conversion sketch follows this thread.)

    If I tune into an online radio station at 20 kbps, how many MB is that per hour? Even worse are the audio or video files available for download that say they are "five minutes long" but don't bother mentioning how many bytes or bits at all
    • Answer: because laypeople insist on talking in imprecise terms like kilobytes and whatnot. Even the byte, historically, could be of varied size depending on the architecture.

      When talking about bandwidth, always use bits, and always use k=1000.

      Further, how much useful data transfer you get out of the system is not a precise number; it fluctuates based on a number of factors, including the network itself, quality of equipment, protocol stack and version, stack settings, local hardware speeds, etc.

      Howeve
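
    The conversions the grandparent asks about, as a small Python sketch (decimal units with k = 1000, per the reply above; line rate only, so real transfers lose some to protocol overhead, and the 20 kbps radio figure is an assumption):

      def bytes_per_sec(bits_per_sec):
          return bits_per_sec / 8

      ten_gbe = 10e9        # 10 Gbps
      radio = 20e3          # a 20 kbps radio stream

      print(f"10 Gbps moves {bytes_per_sec(ten_gbe) * 3600 / 1e9:,.0f} GB per hour")           # 4,500
      print(f"A 4 GB download cap lasts {4e9 / bytes_per_sec(ten_gbe):.1f} seconds")           # 3.2
      print(f"A 120 GB disk fills in {120e9 / bytes_per_sec(ten_gbe):.0f} seconds")            # 96
      print(f"A 20 kbps radio stream is {bytes_per_sec(radio) * 3600 / 1e6:.0f} MB per hour")  # 9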
  • Way overkill (Score:5, Informative)

    by JRHelgeson ( 576325 ) on Sunday July 25, 2004 @01:33AM (#9792843) Homepage Journal
    As a CCIE, I have been designing networks for years. I have analyzed traffic to and from desktops and watched traffic to the average desktop never even get above 27 Mbps. This is due to the average size of the transfer, which is rarely above 10 megabytes. At 10 megs it only takes a few seconds to get it transferred, so it only has a few seconds to get up to speed; by the time it gets all revved up, the file transfer is complete.

    High-end workstations such as CAD boxes with gigabit connections, working with 500 MB files or multi-gigabyte video files, will occasionally reach 500 to 600 Mbps, and even then only for a couple of seconds. At these speeds, power users can use that network connection as if it were a local drive, because at those speeds you are matching the speed at which you're reading/writing data to your local hard drive.

    The only time I've ever seen near-gigabit traffic at a steady pace was at network servers, where traffic can reach a steady 600 Mbps on a single gig link - which is maxing out the speed at which the server drive can read/write data. Think of it this way: a 1 gigaBIT link can transfer a 1 gigaBYTE file in about 10 seconds; that's FAST! Conversely, it takes nearly 20-30 seconds just to write that large a file to the hard drive.

    Now, on a Cisco 6500 core switch, or a Cisco GSR 12000 where traffic is aggregated, these are the only places where I've actually seen multi-gigabit traffic rates, and that was across the switch fabric - not all directed to a single interface.

    The 12000 GSR already has a 10 Gb interface; it is a single line card that takes up a full slot. It sells for about $60,000 and is used to move data from the switch fabric of one GSR to another GSR, which means you need to put in 2 of them at a mere $120,000 to get the two connected.

    Moving to optical links, you can get up to 36 Gbps using Dense Wavelength Division Multiplexing. This uses several colors of laser light to transmit multiple 'channels' across a single fiber link.

    Even at these tremendous speeds, they are only used at traffic aggregation points, again because any network device, even a turbocharged SAN couldn't handle reading/writing at those speeds for anything longer than a quick burst.

    I say this: If you think that 10gig/sec is your answer, you're looking at the wrong problem. You can get the performance you need at gigabit rates.

    I'm not saying that we'll never need 10 gigabit to the desktop, just not until we solve the hard drive bottleneck. Solid state storage could solve the problem, but we'd need solid state drives that store 100 GB of data in order to match the throughput of the network.
    • Re:Way overkill (Score:5, Insightful)

      by dbarclay10 ( 70443 ) on Sunday July 25, 2004 @02:42AM (#9793111)

      Most of your argument rests on people not being able to read/write data from hard drives fast enough to use the network bandwidth. Some examples:

      The only time I've ever seen near gigabit traffic at a steady pace was at network servers, where traffic can reach a steady 600mbps on a single gig link - which is maxing out the speed at which the server drive can read/write data to its hard drive. Think of it this way, a 1 gigaBIT link can transfer a 1 gigaBYTE file in about 10 seconds, that's FAST! Conversely, it takes nearly 20-30 seconds just to write that large a file to the hard drive.

      More:

      Even at these tremendous speeds, they are only used at traffic aggregation points, again because any network device, even a turbocharged SAN couldn't handle reading/writing at those speeds for anything longer than a quick burst.

      And lastly, your conclusion:

      I say this: If you think that 10gig/sec is your answer, you're looking at the wrong problem. You can get the performance you need at gigabit rates.

      Given your premise, you argue for your conclusion quite well. I don't, however, think your premise is accurate. Or perhaps better, I don't think it's relevant. First and foremost, there's all sorts of storage mechanisms which can transfer data as fast or faster than 10Gbps. Think solid-state drives and some decent-sized drive arrays (they don't need to be *that* large, we're talking roughly 1 gigabyte per second; that can be done with 5-10 consumer-grade drives, let alone the arrays of hundreds of high-end 15kRPM SCSI drives and the like). So on the basis of storage speed alone, your argument fails.

      Second, what does storage speed have anything to do with it? You mention servers not needing this - a *huge* number of servers never touch their drives to read the data they're serving. Drive access == death in most Internet services, and people invest thousands of dollars in huge RAM pools to cache all the data (they used to invest tens of thousands, but now RAM is cheap :). So for a huge number of servers, drive speed is simply irrelevant; it's all served from RAM and generated by the CPU, so unless you're trying to say that CPUs can't deal with 10Gbps (which you aren't, and quite rightly), the conclusion falls down again.

      Do desktops need this? No, of course not. If that's what you're really trying to say, then all fine and dandy, just say it. Acceptable reasons would be "people don't need to be able to transfer their 640MB files in less than 10 seconds" and "their Internet connections aren't even at 10Mbps yet, they certainly don't need 10Gbps!" However, you'll find that this technology quickly percolates downwards, so at some point in the future people will be able to transfer their 4GB (not 640MB at this point) files in a few seconds, and their "little" 640MB files will transfer near-instantaneously.

      • Re:Way overkill (Score:3, Insightful)

        by JRHelgeson ( 576325 )
        Quite right, thanks for the reply.

        I think it's fairly obvious by now that my experience lies primarily in the corporate environment with database servers and the like.

        I do have experience in internet convergence points, but not as much with ISP's serving up video files, or rather the same video again and again. When I think of data transfers, I think of hauling bits from server to workstations, or servers to servers where sustained transfer rates would kill a server - much as you stated; drive access ==
    • Re:Way overkill (Score:2, Informative)

      by Acidangl ( 86850 )
      WS-X6704-10GE Cat6500 4-port 10 Gigabit Ethernet Module (req. XENPAKs) $20,000

  • Even if we do run into a situation where the network is faster than the server I think we will then see the true power of P2P, distributed computing, server farms and the like.

    Imagine a corporate environment where every machine is a file server, storing some pieces of the puzzle. We don't have to rely on backups as often since data is redundantly stored all over the network. Huge servers (big proc, big disk, big RAM) may not be needed as much, since the big pipe and a lot of small workstation servers
  • getting rid of that annoying coworker... well, at least temporarily. ping -l 102400000000 [insert annoying coworker's ip here]
  • 10 Gbps doesn't strike me as all that much. For example, it's still puny compared to the speed data is being passed around on the mainboard.

    At the current rate I would predict that our 100 Mbit switches/routers will be the bottlenecks of our internet connections within the next 2-3 years. So an upgrade to 1 Gbps switching gear / NICs is foreseeable, but compared to the jump from ISDN to broadband, IPv4 to IPv6, or 32-bit address space to 64-bit address space, it's more a step forward than a new era.
  • oooh (Score:5, Interesting)

    by iamdrscience ( 541136 ) on Sunday July 25, 2004 @01:39AM (#9792869) Homepage
    IDE over IP. Yes, it does exist.
  • I would spend the first few hours probably doing nothing more than taking CDs and DVDs, converting them into ISOs, and trying to clog up the network throwing them around. After amazing myself with the speed and finding myself content, I would probably have to find a few of the fastest machines on the network and ping flood the slowest one I could find, then run over to it and see if it affects how fast Notepad opens. Then I would set up a VNC connection to see if 10 Gbps would allow me to use it from machine
  • 10GigE is what I'd use as a backbone between buildings, metro area networks, etc.

    It would come in real handy if GigE rollouts to the desktop start happening.

    And before anyone starts spouting off about maximum 100m spans, I'm talking 10GigE over fiber [ucar.edu]

    -transiit
  • cuz, like, my ping would rule

  • While I can't say where, I have seen 10 Gbps ethernet running at full line rate on a 16-lane PCI Express card.

    All I can say is WOW. It was quite amazing. However, we are a long way off from seeing normal usage.

    10 Gbps is really only needed for areas where data merges. For example, would you rather have 10 interfaces bonded together, or just one? 10 Gbps will take off for ease of management and port density.

    So for the most part we are just talking about telecom, banks, government, and broadcast groups.
  • wget -m http://www.google.com


    That should keep such a line very busy for some time...
  • by Entropius ( 188861 ) on Sunday July 25, 2004 @02:19AM (#9793026)
    Let's see. There are about a million pixels on my screen (1280 x 800). Assume 24 bit color, so that's 24 megabits per frame.

    This at 60 fps will be 1.44 Gbps.

    So 10-Gbps ethernet is enough to stream the output of a monitor, *uncompressed*, at full framerate, to either a dumb terminal or another computer. Even the most elementary compression (only reporting changed pixels, or PNG/jpeg techniques) could cut this to a fraction of 1.44Gbps.

    More generally, it could allow more of the things that are currently on the PCI/USB bus to become external, and could become a more flexible replacement for USB. Scanners, cd writers, audio devices, you name it ... lots of things could be externalized and generalized. This would also allow more devices to be shared across networks more easily, since they're *on* the network in any case. With the Internet, nobody cares about the physical location of the machines they access; likewise, with this system peripherals aren't associated as strongly with one specific computer.

    This sort of thing might also have applications for cluster computing, allowing more sorts of things to be done with clusters since you have higher inter-node bandwidth.
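
    Reproducing the display arithmetic above as a small Python sketch (uncompressed, ignoring blanking intervals and any framing overhead):

      width, height, bits_per_pixel, fps = 1280, 800, 24, 60

      bits_per_frame = width * height * bits_per_pixel        # ~24.6 Mbit per frame
      stream_gbps = bits_per_frame * fps / 1e9                 # ~1.47 Gbps

      print(f"Per frame: {bits_per_frame / 1e6:.1f} Mbit")
      print(f"Stream   : {stream_gbps:.2f} Gbps uncompressed at {fps} fps")
      print(f"A 10 Gbps link could carry about {int(10 / stream_gbps)} such streams")   # 6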
  • How about how much information can someone serve?

    Off the very top of my head I can think of a couple of ways for me to digest huge amounts of data over the internet. For one, how about a noncompressed HDTV stream?
    What about video games that don't require a hard drive (and are then more secure)?
    Hell, how about losing the hard drive altogether and just having a dumb terminal?

    Nah, asking how much data can one person consume is a lot like saying that building a hard drive over 20 gigs is stupid cause it wil
  • You know, whenever a story about bandwidth increases comes out, and there is the inevitable question of "but what will people do with it" I always find myself answering:

    "Don't worry about it, just provide us with the bandwidth and we'll figure out a way to use it."

    Seriously, there's really no telling WHAT will take off until people get their hands on it, start tinkering, and start doing things.

    For starters, how about upping the quality of the media we transfer? Storage space is increasing and becoming chea


  • I'll be building DRBD [drbd.org] clusters in the blink of an eye.
    Actually, I already do on 1 Gbps :)
    Redundancy is good.

  • Video downloading is possible at 1 Mbps--although it takes longer to download than to view or consume--and more than feasible at wired 100 Mbps.

    4.7 Gig for a 2 hour DVD is under 6Mbps.
    The average consumer probably won't buy more than 10Mbps.
    Sure, we'll all want 10Gbps, but not many would be willing to pay extra for it (unless someone comes up with something even more bandwidth intensive than video).

    A publisher might need more overall, but they can probably get by just fine with 100Mbps and a contract
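
    Checking the "under 6 Mbps" DVD figure above with a small Python sketch (decimal units, average bitrate over the whole runtime):

      dvd_bytes = 4.7e9            # 4.7 GB single-layer DVD
      runtime_s = 2 * 3600         # 2-hour movie

      avg_mbps = dvd_bytes * 8 / runtime_s / 1e6
      print(f"Average DVD bitrate: {avg_mbps:.1f} Mbps")     # ~5.2 Mbps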

  • by MikShapi ( 681808 ) * on Sunday July 25, 2004 @05:06AM (#9793431) Journal
    I saw some comments here saying Gigabit Ethernet is enough and there's nothing we could do with 10GbE.

    I beg to differ. Sit tight.

    Here's an idea for you geeks that for some reason nobody is busy doing yet.

    Quite a few IT people I know run some form of Linux or BSD server at home, doing a variety of stuff from fileserver to firewall to mail/DNS server etc., though on their desktops they run 2K or XP for reasons such as gaming, simplicity, wife, and so forth.

    Here's the idea. Pool all your hard drives at home on the Linux/BSD box, configure a software RAID-5, share it using Samba, and network-boot all the 2K/XP machines at home from this network-attached storage. Using Gig ethernet, of course.

    What do you get? Every box gets a system drive "Drive C" that can go at 100 MBytes/sec. RAID-5 redundancy for all your machines at home. Hard drives, which generate heat and noise, are no longer in your computers.

    The benefits are enormous.

    There's a small con though - you won't be able to drag your computer to a LAN-party (unless you drag the server too ;-)

    Currently there is a shortage of one element though: software that can boot Win2K/XP using PXE from a fileserver. Such software exists in the commercial world and is made by a French company called Qualystem, which doesn't sell it in less than 1-server+25-client licenses, which costs a whopping 2750 Euro. They show zero interest in smaller clients. A second product, Venturcom BXP, does the same but falls short as it has a dedicated server that only runs on 2K/XP/2K3 - no BSD/Linux with Samba for you.

    If someone in the open-source community were to pick this glove up and write a small driver that emulates a hard disk for 2K/XP on one side (the kind you throw in for a RAID controller by pressing F6 when installing Windows), and uses SMB or whatever to access a UNIX fileserver on the other, we'd all be able to rig up a very nifty setup, and use the combined speed of all our hard drives at home.

    We'd also realize that Gigabit Ethernet is not enough, as a cheap 4-modern-ATA-drive RAID5 setup (which effectively streams enough data to store on 3 of them, one of the four being used to store parity info at any given moment) writes at 40MByte/Sec x 3 = 120MByte/Sec, and reads at 60MByte/Sec x 3 = 180MByte/Sec.

    The Gigabit Ethernet _will_ pose a bottleneck.
    If we add more drives, the bandwidth requirement broadens.

    There's also the small issue of the PCI bus, your server must have its ethernet off the PCI bus, like in Intel's 875 chipset, nVidia's nForce 250 or on a PCI-Express card. Otherwise the IDE and GB will choke each other on the too-narrow PCI bus.

    Anyway, once people start doing this, 1000BaseT is back to where 100BaseTX has been for 5 years - choking. I say: bring on 10GbE!
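
    A rough Python sketch of the RAID-5 throughput math above versus gigabit and 10-gigabit Ethernet (uses the per-drive figures from the post, and ignores parity CPU cost, Samba/PXE overhead, and protocol framing):

      data_drives = 3                    # 4 drives, one drive's worth of parity
      write_per_drive_mb_s = 40
      read_per_drive_mb_s = 60

      array_write = data_drives * write_per_drive_mb_s     # 120 MB/s
      array_read = data_drives * read_per_drive_mb_s       # 180 MB/s
      gige_mb_s = 1_000 / 8                                # ~125 MB/s
      ten_gbe_mb_s = 10_000 / 8                            # ~1250 MB/s

      print(f"Array write {array_write} MB/s, read {array_read} MB/s")
      print(f"GbE carries ~{gige_mb_s:.0f} MB/s -> already the bottleneck on reads")
      print(f"10 GbE carries ~{ten_gbe_mb_s:.0f} MB/s -> plenty of headroom")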
  • First of all... (Score:3, Interesting)

    by illumin8 ( 148082 ) on Sunday July 25, 2004 @07:43AM (#9793717) Journal
    Let's see, what would I do with all that bandwidth:

    Try to find a host OS with a TCP/IP stack that can properly utilize 1 gigabit ethernet, let alone 10 gigabits. Hint: It ain't Linux...

    Try to find a storage solution that can read or write that fast. I'm thinking something like EMC with about 6-8 2 gigabit HBAs using Veritas DMP (dynamic multi-pathing).

    Try to get all of the above, along with a 133 MHz 64-bit PCI-X bus that still can't actually keep up with 10 gigabits of data (133 MHz 64-bit PCI-X is only about 1024 megabytes per second, not counting overhead).

    The problem is, right now, the rest of the parts of a system just can't keep up with 10 gigabit ethernet. The only box I would use that can handle that many I/O paths to storage (we're talking six to eight 64-bit 66 MHz 2-gigabit FC host adapters) is a Sun Fire 6800 or something larger. The problem is, Sun doesn't yet support PCI-X, so now your 10 gig ethernet card is going to be limited to a 66 MHz 64-bit PCI version, which will only transfer a maximum of 512 MB per second, not counting overhead. That is less than half of the available bandwidth of 10 Gig Ethernet.

    You can forget about putting it in any Intel based system. There are not enough I/O busses and I/O controllers in even the beefiest Xeons or Opterons that can handle this much bandwidth (to disk).

    Also, if your application doesn't need to write all of that data to disk, then how large is this dataset in memory that needs to be transferred at 10 gigabit speeds? If you had a server with 64 GB of memory, it could transfer its entire memory set over 10 gigabit ethernet in less than 60 seconds.

    A far better, and more economical solution, if you really need 10 gigabits of data throughput to the network, would be to use the same Sun server, and a product called Sun Trunking, which allows you to bond multiple gigabit ethernet interfaces together. You get all of the throughput you want, plus more fault tolerance. I've set it up before, and you can have a continuous ping going, across 4 connections, and pull 3 of those 4 connections and the ping keeps going, without even a dropped packet. It's really fault tolerant, and uses your existing switches, NICs, and hardware, without forcing you to upgrade your entire core switch architecture.

    • Try to find a host OS with a TCP/IP stack that can properly utilize 1 gigabit ethernet, let alone 10 gigabits. Hint: It ain't Linux...

      Define "properly". If you mean efficiency, that's desirable but not critical. If an Intel/Linux server is 75% the efficiency of a Sun server, yet costs 30% the price, you can install two or three for the same bucks. That's efficiency of a sort too, yes?

      Try to find a storage solution that can read or write that fast.

      Well, in terms of raw sustained bandwidth, this doesn't

  • 10 Gb/s thats it. (Score:4, Interesting)

    by jellomizer ( 103300 ) * on Sunday July 25, 2004 @08:53AM (#9793894)
    Remember, this is 10 gigabits per second, which is only 1.25 gigabytes per second (assuming 100% speed, which I've never seen happen). Right now that is faster than most computers can handle data internally, but there are uses.
    1. System-to-system backups. The prices of memory and hard drive space ($/GB) are dropping at a fast rate while magnetic tape remains near constant. Soon the price/GB of memory and hard drives will be lower than tape, so it would be cheaper to back up your data onto other systems and removable hard drives. Even so, if you have 3 terabytes of data it can still take up to 40 minutes.
    2. Imagine a Beowulf cluster connected at 10 Gbps. Right now the main slow point with a Beowulf cluster is the network bandwidth; at 10 Gbps you are getting closer to the speed of a supercomputer bus.
    3. Uncompressed video and sound. No more lossy compression needed, and no more people fighting over compression standards. We always get high-quality audio and video in realtime off the network.
    4. 3D. With 3D displays starting to become available, there will be more data to send for 3D information over the network.

    And that's just the tip of the iceberg. Back when the 300 bps modem came out, they figured the speed was as fast as anyone needed, because it was near impossible for anyone to type more than 30 characters per second. Then the 1200 and 2400 bps modems came out, and they thought those were as fast as anyone needed, because almost no one can read at that rate. Then came 9600 and 14.4k, because it takes almost no time to draw the next 80x25 page of colored text. Then the 33.6k and 56k modems (still the fastest modems for one normal telephone line): you can now download a 300x200, 256-color picture in no time. As bandwidth increases we find new ways to max it out, and with increased bandwidth we come up with new methods of using the computer, because it can now do them.
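
    A quick Python check of the "3 terabytes in 40 minutes" figure above (a sketch; assumes both ends can actually source and sink data at the full 1.25 GB/s line rate):

      data_gb = 3 * 1000               # 3 TB, decimal units
      line_rate_gb_s = 10 / 8          # 10 Gbps -> 1.25 GB/s

      seconds = data_gb / line_rate_gb_s
      print(f"3 TB at 10 Gbps: {seconds / 60:.0f} minutes")   # 40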
  • by dekeji ( 784080 ) on Sunday July 25, 2004 @08:59AM (#9793908)
    "Cardinality" is the number of elements in a given mathematical set. When modems ran at 300 baud, you could forget about sending large data sets, such as images, because text and voice data took up all the available bandwidth. As connection rates increased, so did the cardinality of data that users could send. [...] Video currently represents the highest cardinality data

    The term "cardinality" is wrong for several reasons. First, image data isn't represented as sets, it's represented as ordered sequences, and when talking about ordered sequences, both computer scientists and mathematicians talk about their "length", not their "cardinality".

    Furthermore, what matters is not the size of what you want to transmit, but the rate at which you need to transmit it. We call that the "data rate" or (somewhat sloppily) the "required bandwidth".

    So, the overall point of the article, that there is no single media stream that requires 10 Gbit bandwidth, is correct. However, that's pretty much irrelevant: file servers, video servers, and aggregate usage still require that kind of bandwidth. A family of four might require that bandwidth. You might want that bandwidth to have your backup happen in 1 minute instead of 10 minutes. So, there are lots of reasons to want 10 Gbit Ethernet, provided the price is right.

    As for his use of the term "cardinality", the author apparently doesn't quite know the terminology of the field.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...