
Ethernet at 10 Gbps
An anonymous reader writes "This article talks about 10 Gigabit Ethernet and asks, 'But just how much data can a person consume?' Currently at work, we're working on a major project to re-architect our core application platform so that the different systems can be de-coupled and hosted separately. The legacy design implicitly relies on systems being in the same LAN due to bandwidth-expensive operations (e.g., database replication). Having this much bandwidth would change the way we design. What would you do with this much bandwidth?"
HD Video for one (Score:2)
Hmm... (Score:5, Funny)
Check out more unusual positions.
Hey, why not? (Score:2, Funny)
What would I do? (Score:5, Interesting)
What would Joe Sixpack do with it? I'm not sure at the moment. Thing is, since we're working within our limitations today, it's hard to conceive of what use it'd be. However, what happens when it becomes commonplace? It does open doors. Imagine if cable companies traded in coax for ethernet. They could easily send uncompressed HDTV. That'd be pretty slick.
What would I do?-Horror-vision. (Score:4, Funny)
The Goatse.cx experience in holographic, 5.1 surround-sound, smello-tactile-vision.
Re:What would I do? (Score:5, Insightful)
Isn't that always the way? I remember having a 20MHz IBM PS/2 and wondering "How am I going to use all this power?" And the 30MB hard drive - how would I ever use all that space?
It seems like when we have the capabilities, we find something to do with the extra. HDTV sounds probable, and more bandwidth can only help working over networks on a mass scale (remote home folders and roaming profiles, VNC/Citrix), but you never know. When processors were getting to the 1GHz point, a bunch of industry analysts were predicting "Now that we have enough power to make working speech-recognition software, we can finally ditch those keyboards!" Yeah, right.
The big concern is, with the extra bandwidth, will Microsoft see this as an opportunity to release new, extra-inefficient network protocols?
Patches. (Score:3, Funny)
Oh.
Re:What would I do? (Score:3, Funny)
Yes, every packet will contain an Easter egg flight simulator.
Re:What would I do? (Score:3, Interesting)
Animated movies! Those are a hog! Even with today's DV codecs I can't play a video over the network, let alone at high resolution (well, it's still 720x576, but still, that's not that high).
The Copenhagen Metro (Score:3, Informative)
Interactive porn? No? (Score:2, Interesting)
on 30 year old buried POTS wiring that's 5 (plus) miles away. Fiber to terminal point will not happen here before hell freezes over, since the Baby Bells are not spending that kind of money.
However, with that kind of bandwidth to the internet, I could set up some homebrew web sites, and telecommute to work, and go back to (online) school all at the same time.
I hate to be repetitious, but that kind of infrastructure would allow some really great collaborative (Beowulf?) c
Re:What would I do? (Score:2, Informative)
Then there is the other problem that many people seem to be ignoring: ethernet is by design limited to a pretty short distance (I'm too lazy to pull out the networking book). And just because you might have a 10Gbps connection to one other computer doesn't mean you are gonna have a 10Gbps connection to anyone else. I know I have 100Mbps within my apartment, but then there is that darn internet connection that tops out at ~2Mbps.
In your world, 'hdtv' might be 1
Re:What would I do? (Score:2)
Parent not informed unfortunately (Score:3, Informative)
Today's "ethernet" doesn't have limitations - it is really only referring to a frame format.
The distance limitations were initially related to running ethernet in half duplex mode, due to the requirement for all devices to be able to detect a collision.
Now that ethernet is run in full duplex the distance limitations due to collision detection have gone.
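A rough sketch of that old half-duplex constraint (Python; assumes signals in copper travel at about 2/3 the speed of light and ignores repeater budgets, so treat the outputs as loose upper bounds):

```python
# Half-duplex ethernet: a sender must still be transmitting when a
# collision from the far end propagates back, so the round trip must
# fit inside one minimum-size frame (512 bit times).
C = 3e8                # speed of light in vacuum, m/s
V = 0.66 * C           # assumed signal speed in copper (~2/3 c)

def max_segment_m(bitrate_bps, min_frame_bits=512):
    slot_time = min_frame_bits / bitrate_bps   # seconds per minimum frame
    return slot_time * V / 2                   # one-way distance

print(f"10 Mbps:  ~{max_segment_m(10e6):.0f} m")   # ~5069 m in theory
print(f"100 Mbps: ~{max_segment_m(100e6):.0f} m")  # ~507 m
print(f"1 Gbps:   ~{max_segment_m(1e9):.0f} m")    # ~51 m (hence carrier extension)
```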
Distance limitations in "ethernet" are now related to the physical media the ethernet frame format is carried over at the specified clock rate. In most cas
Re:What would I do? (Score:5, Informative)
There are two HDTV resolutions in current use, known as 720p and 1080i. 720p is 1280x720 60fps, and 1080i is 1920x1080 30fps (60 interlaced fields). Both of them are 24-bit truecolor.
I have no idea where you got 960x540 from, as it does not correspond to any HDTV resolution. I'm also not sure what the reference to "all this analog crap" is supposed to mean, as HDTV broadcasts are entirely digital.
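A back-of-the-envelope sketch of what those streams would cost uncompressed (plain Python, using the figures above):

```python
# Uncompressed bitrate = pixels * bits per pixel * frames per second.
def uncompressed_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * bits_per_pixel * fps / 1e9

print(f"720p:  {uncompressed_gbps(1280, 720, 60):.2f} Gbit/s")   # 1.33 Gbit/s
print(f"1080i: {uncompressed_gbps(1920, 1080, 30):.2f} Gbit/s")  # 1.49 Gbit/s
# Either stream fits comfortably inside a 10 Gbps link, even uncompressed.
```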
Re:What would I do? (Score:3, Interesting)
While I agree it's basically two paragraphs of the same standard, keeping the media separate certainly makes sense i
silly question (Score:2, Funny)
good political satire [the-torch.com]
Re:silly question (Score:4, Funny)
true remote storage transparency (Score:2, Informative)
Typical desktops of the past few years see roughly 25 megabytes/sec sustained disk throughput (more for SCSI and more recent ATA models). A switched 1 gigabyte/sec network could easily and transparently support 25 remote drives, virtually indistinguishable from local storage.
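A quick sanity check of that claim (a sketch; assumes the 25MB/sec figure above and ignores protocol overhead):

```python
# How many 25 MB/s client disks a switched link could feed at full tilt,
# ignoring protocol overhead (an optimistic assumption).
def drives_supported(link_gbps, drive_mb_s=25):
    link_mb_s = link_gbps * 1000 / 8        # Gbit/s -> MB/s, decimal units
    return int(link_mb_s // drive_mb_s)

print(drives_supported(1))    # 1 GbE:  5 drives
print(drives_supported(10))   # 10 GbE: 50 drives - 25 leaves ~2x headroom
```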
Re:true remote storage transparency (Score:2, Insightful)
They are just (ab)using the disks of the servers. How Uber Are Your Servers(tm)? Show me a server that can sustain that 1 gigabyte a sec disk access to support those workstations...
Re:true remote storage transparency (Score:2)
Re:true remote storage transparency (Score:2)
Worked for a medical company (Score:2, Informative)
They are using gigabit already and you can see slowdown. Simply put, a couple hundred 100MB+ x-rays to a single box... multiply that by however many boxes the hospital has... and 10 gigabit is nice.
The problem hits in not having enough RAM - and with a 4GB limit on workstation OSes for the most part, this amount of bandwidth could get funky.
Remote Virtual Immersion (Score:5, Insightful)
If I was going under the knife remotely [wustl.edu], I'd want the surgeon to have as much bandwidth as possible (and very, very, very low latency).
Re:Remote Virtual Immersion (Score:3, Funny)
Instead of very low latency, I would prefer no lost packets and *smooth* motion, and not that jagged back and forth you sometimes get! Ouch!
Re:Remote Virtual Immersion (Score:5, Funny)
Exhibit A: Surgery Log
[DR]Surgeon opened Xx[Patient]xX's abdomen with a scalpel.
[DR]Surgeon punctures Xx[Patient]xX's stomach with forceps.
Xx[Patient]xX: OMGWTF??!!
[DR]Assistant: ROFL PWNED!!1
[DR]Surgeon: STFU N00B i ping 350
Xx[Patient]xX: w/e
WWID? (Score:2)
10 gigabit is kinda much (Score:2)
Even at 1 gigabit, usually the bottleneck is elsewhere.
10 Gigabits = roughly 1 gigabyte/sec. Considering that the classic PCI bus is 133MB/sec, and even PCI-X tops out around 1GB/sec... Heck, the memory bus of my brand new system is only about 1 gigabyte a second.
Re:10 gigabit is kinda much (Score:4, Informative)
Re:10 gigabit is kinda much (Score:2, Informative)
Double the bus width from 32 to 64 bits and you double from 133MB/sec to 266MB/sec.
Now 4x the MHz from 33 to 133... 266 * 4 = 1064MB/sec.
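The same arithmetic as a throwaway sketch:

```python
# Parallel PCI bandwidth = bus width (in bytes) * clock rate (in MHz).
def pci_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

print(pci_mb_s(32, 33))    # classic PCI:     ~133 MB/sec
print(pci_mb_s(64, 33))    # 64-bit PCI:      ~266 MB/sec
print(pci_mb_s(64, 133))   # 133 MHz PCI-X:  ~1064 MB/sec
```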
What would we do with it? (Score:2)
Holo-Porn (Score:2, Funny)
The Network Is The Computer (tm) (Score:5, Insightful)
With a 10G LAN, you'd be able to come up with a great distributed computer system (e.g. for compiling software). IIRC protocols are in the works now for native-ish memory access over networks, turning a network into one huge computer, and you can already access remote disks with the right software. Imagine the simultaneous distributed encoding of several HDTV streams to redundant archives on several different computers, and you'll probably find that more bandwidth = better.
So yeah, there'll definitely be possibilities for this sort of stuff, even if it is only as a base requirement for the post-Longhorn Windows version
Re:The Network Is The Computer (tm) (Score:2)
I also imagine that we'll discover just how stable our computers, NIC's & their drivers are. My win2k box (which is way past due for a format/reinstall) tends to bluescreen when pulling anything around 400K~500K for any significant pe
In short (Score:2)
The major problem (today) with 10Gbit? None of the sub-systems could handle the bandwidth. The absolutely rockin' stations with SCSI Ultra-320 (like my Mac @ home for example
Imagine a beowulf cluster... (Score:2, Insightful)
Re:Imagine a beowulf cluster... (Score:2)
Beowulfs do seem like one of the best uses - either at 10 gigabit, or at least by pushing the cost of 1 gigabit LANs right down.
Re:Imagine a beowulf cluster... (Score:2)
Distributed computing? (Score:3, Insightful)
While there are certainly applications that don't need to communicate that fast, more bandwidth means more algorithms can become practical.
It's not like you can use it to download porn, unless the action is happening in the next room. This is not a WAN technology.
NC-PC-NC (Score:5, Insightful)
So, we used to have little dumb terminals that talked to the big smart backend. Then computers became cheaper and we had Personal Computers, but we have to manage and distribute all these updates, and it's a real pain, and it sometimes destroys your computer during the upgrade/install process. Now we can swing the pendulum back towards the Network Computer a little more.
This isn't a new idea. Software companies like MS would love to sell you a subscription to MS Office which you renew, and they in turn patch and maintain the software on your company's server or on the MS servers. It's a neat idea for sure. Companies like Novell have made some interesting claims about Network Computers.
There is also the whole Plan9 [bell-labs.com] type of mentality too.
It's not the bandwidth (Score:3, Insightful)
"Close" applies both in physical distance (I have to count picoseconds for the kind of stuff I do) and in network distance, since every router adds considerably.
For some jobs (backup is a classic) latency is relatively tolerable. However, even for those you have to watch out because one error can cause the whole process to back up for retries. Ten to the minus fifteen BER sounds good until you look at what it can do to your throughput in a long-latency environment.
Re:It's not the bandwidth (Score:5, Informative)
Unless you are working with individual gates inside a chip, I doubt picoseconds really matter. On ethernet we are certainly not talking picoseconds. We are still limited by the speed of light, so it would take the signal 100 picoseconds just to get through the RJ45 connector. With a 1.5m ethernet cable there will be at least 10 nanoseconds of roundtrip time, because that is the time it takes light to travel 3m.
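A sketch of that arithmetic (assumes propagation at the full speed of light, which real cable doesn't quite reach, so real delays are a bit longer):

```python
# Round-trip propagation delay over a cable, assuming signals travel at c.
C = 3e8  # m/s

def round_trip_ns(cable_m):
    return 2 * cable_m / C * 1e9

print(round_trip_ns(1.5))   # 1.5 m patch cable: 10 ns round trip
print(round_trip_ns(100))   # 100 m run:        ~667 ns round trip
```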
Re:It's not the bandwidth (Score:4, Insightful)
Unless you are working with individual gates inside a chip, I doubt picoseconds really matter.
I think you're missing something. If the cabling adds a constant delay to any times this guy's measuring, then he can still measure times in picoseconds (assuming his timer is accurate enough, of course). The fact that network cabling would add nanoseconds to a recorded time is irrelevant. Just as long as it doesn't add a variable delay (I wouldn't recommend doing this timing through any sort of switch or router, for example).
Not that this guy is necessarily using ethernet for what he's doing. Note that he didn't actually say that -- he just said that you had to be close for the kind of stuff he does.
One possibility is that the guy's a physicist working with a particle detector. He could be talking about detecting the exact timing of the decay of various particles. If these decays occur on the order of picoseconds, and his equipment can accurately keep time in picoseconds, then the fact that the cabling adds, say, 5ns to all of the measured times is no big deal. Just subtract 5ns from everything. That's good enough to get the relative times of all the measured events, e.g. the amount of time between the detection of emissions created by the initial collision (and thus presumably particle creation) and the decay of the various particles.
backups (Score:2)
Lots (Score:4, Interesting)
We migrated to 100meg and it was like night and day, and we still need more. We finally got 1gig to IT's network, and it's still too slow to push files with lots of users.
We have a burstable OC192 to our 2nd remote datacenter, OC48/12's to the smaller datacenters. But this is for production networks that need bandwidth, not desktop usage.
Also, my buddy in Japan just told me he got 100Meg DSL - the stuff you can do when bandwidth isn't a concern. Already Internet TV stations are popping up there, amazing. Can't wait for this to catch on in the US. I just upgraded to 6M DSL from Speakeasy, and it's too fast for FilePlanet.
Speed kills
play an entire orc army (Score:2, Funny)
You won't actually have to control the orcs, the mere sight of them on your screen will initiate instant lag-death for people with lesser video cards.
What is that in MegaBytes per Hour? (Score:2)
I want to know: at 10Gbps, how many meg per hour is that? How long would it take to blow my ISP download limit (4GB) or fill up the hard disk (120GB)?
If I tune into an online radio stream at 20bps - how many MB is that per hour? Even worse are the audio or video files available for download that say they are "five minutes long" but don't bother mentioning how many bytes or bits at all.
Re:What is that in MegaBytes per Hour? (Score:2, Informative)
When talking about bandwidth, always use bits, and always use k=1000.
Further, how much useful data transfer you get out of the system is not an accurate number.. it fluctuates based on a number of factors, including the network itself, quality of equipment, protocol stack and version, stack settings, local hardware speeds, etc.
Howeve
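To put concrete numbers on the question above (a sketch using k=1000 as above, ignoring protocol overhead):

```python
# Bytes moved per hour at a given line rate, and time to hit a cap or fill a disk.
rate_bps = 10e9                       # 10 Gbps
bytes_per_s = rate_bps / 8            # 1.25e9 bytes/sec

print(bytes_per_s * 3600 / 1e12, "TB per hour")       # 4.5 TB/hour
print(4e9 / bytes_per_s, "s to blow a 4GB cap")       # 3.2 seconds
print(120e9 / bytes_per_s, "s to fill a 120GB disk")  # 96 seconds
```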
Way overkill (Score:5, Informative)
High-end workstations, such as CAD stations with gigabit connections working with 500MB files or multi-gigabit video files, will occasionally reach 500 to 600Mbps, and even then only for a couple of seconds. At these speeds, power users can use that network connection as if it were a local drive, because at those speeds you are matching the speed at which you're reading/writing data to your local hard drive.
The only time I've ever seen near-gigabit traffic at a steady pace was at network servers, where traffic can reach a steady 600Mbps on a single gig link - which is maxing out the speed at which the server can read/write data to its hard drive. Think of it this way: a 1 gigaBIT link can transfer a 1 gigaBYTE file in about 10 seconds - that's FAST! Meanwhile, it takes nearly 20-30 seconds just to write that large a file to the hard drive.
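(Those two numbers, sketched out; the 40MB/sec sustained write figure is an assumption typical of drives of the day:)

```python
# Wire time vs. disk time for a 1 GB file.
file_bytes = 1e9
link_bps = 1e9            # 1 gigabit link
disk_write_mb_s = 40      # assumed sustained write speed

print(file_bytes * 8 / link_bps, "s on the wire")            # 8 s (~10 s with overhead)
print(file_bytes / (disk_write_mb_s * 1e6), "s to the disk") # 25 s
```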
Now, a Cisco 6500 core switch, or a Cisco GSR 12000 where traffic is aggregated - these are the only places where I've actually seen multi-gigabit traffic rates, and that was across the switch fabric, not all directed to a single interface.
The 12000 GSR already has a 10Gb interface, a single line card that takes up a full slot. It sells for about $60,000 and is used to move data from the switch fabric of one GSR to another GSR, which means you need to put in 2 of them, at a mere $120,000, to get the two connected.
Moving to optical links, you can get up to 36Gbps using Dense Wavelength Division Multiplexing, which uses several colors of laser light to transmit multiple 'channels' across a single fiber link.
Even at these tremendous speeds, they are only used at traffic aggregation points, again because any network device, even a turbocharged SAN, couldn't handle reading/writing at those speeds for anything longer than a quick burst.
I say this: If you think that 10gig/sec is your answer, you're looking at the wrong problem. You can get the performance you need at gigabit rates.
I'm not saying that we'll never need 10 gigabit to the desktop, just not until we solve the hard drive bottleneck. Solid state storage could solve the problem, but we'd need solid state drives storing 100GB of data in order to match the throughput of the network.
Re:Way overkill (Score:5, Insightful)
Most of your argument rests on people not being able to read/write data from hard drives fast enough to use the network bandwidth. Some examples:
More:
And lastly, your conclusion:
Given your premise, you argue for your conclusion quite well. I don't, however, think your premise is accurate. Or perhaps better, I don't think it's relevant. First and foremost, there's all sorts of storage mechanisms which can transfer data as fast or faster than 10Gbps. Think solid-state drives and some decent-sized drive arrays (they don't need to be *that* large, we're talking roughly 1 gigabyte per second; that can be done with 5-10 consumer-grade drives, let alone the arrays of hundreds of high-end 15kRPM SCSI drives and the like). So on the basis of storage speed alone, your argument fails.
Second, what does storage speed have to do with it anyway? You mention servers not needing this - a *huge* number of servers never touch their drives to read the data they're serving. Drive access == death in most Internet services, and people invest thousands of dollars in huge RAM pools to cache all the data (they used to invest tens of thousands, but now RAM is cheap :). So for a huge number of servers, drive speed is simply irrelevant; it's all served from RAM and generated by the CPU, so unless you're trying to say that CPUs can't deal with 10Gbps (which you aren't, and quite rightly), the conclusion falls down again.
Do desktops need this? No, of course not. If that's what you're really trying to say, then all fine and dandy, just say it. Acceptable reasons would be "people don't need to be able to transfer their 640MB files in less than 10 seconds" and "their Internet connections aren't even at 10Mbps yet, they certainly don't need 10Gbps!" However, you'll find that this technology quickly percolates downwards, so at some point in the future people will be able to transfer their 4GB (not 640MB at this point) files in a few seconds, and their "little" 640MB files will transfer near-instantaneously.
Re:Way overkill (Score:3, Insightful)
I think it's fairly obvious by now that my experience lies primarily in the corporate environment with database servers and the like.
I do have experience in internet convergence points, but not as much with ISPs serving up video files, or rather the same video again and again. When I think of data transfers, I think of hauling bits from servers to workstations, or servers to servers, where sustained transfer rates would kill a server - much as you stated; drive access ==
Re:Way overkill (Score:2, Informative)
I think bandwidth might change everything (Score:2)
Imagine a corporate environment where every machine is a file server, storing some pieces of the puzzle. We don't have to rely on backups as often since data is redundantly stored all over the network. Huge servers (big proc, big disk, big RAM) may not be needed as much since the big pipe and a lot of small workstation servers
Perfect for... (Score:2)
Is that really so much? (Score:2)
At the current rate, I would predict that our 100Mbit switches/routers will be the bottlenecks of our internet connections within the next 2-3 years. So an upgrade to 1Gbps switching gear/NICs is foreseeable, but compared to the jump from ISDN to broadband, IPv4 to IPv6, or 32-bit address space to 64-bit address space, it's a step forward rather than a new era.
oooh (Score:5, Interesting)
10Gbps Wow! (Score:2)
backbone. (Score:2)
It'd come in real handy if GigE rollouts to the desktop start happening.
And before anyone starts spouting off about maximum 100m spans, I'm talking 10GigE over fiber [ucar.edu]
-transiit
kick your ass at Counter-Strike (Score:2)
PCI Express.... (Score:2)
While I can't say where, I have seen 10Gbps ethernet running at full line rate on a 16-lane PCI Express card.
All I can say is WOW. It was quite amazing. However, we are a long way off from seeing normal usage.
10Gbps is really only needed for areas where data merges. For example, would you rather have 10 interfaces bonded together, or just one? 10Gbps will take off for ease of management and port density.
So for the most part we are just talking about telecom, banks, government, and broadcast groups.
Grab the net? (Score:2)
That should keep such a line very busy for some time...
Dumb terminals? Cluster computing? (Score:5, Insightful)
This at 60 fps will be 1.44 Gbps.
So 10-Gbps ethernet is enough to stream the output of a monitor, *uncompressed*, at full framerate, to either a dumb terminal or another computer. Even the most elementary compression (only reporting changed pixels, or PNG/jpeg techniques) could cut this to a fraction of 1.44Gbps.
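For reference, the framebuffer arithmetic (the parent's exact resolution was cut off; 1152x864 at 24 bits and 60Hz is one assumption that lands near 1.44Gbps):

```python
# Uncompressed bandwidth to stream a monitor: pixels * bit depth * refresh rate.
def stream_gbps(w, h, bpp=24, hz=60):
    return w * h * bpp * hz / 1e9

print(f"{stream_gbps(1152, 864):.2f} Gbps")    # 1.43 - close to the 1.44 above
print(f"{stream_gbps(1600, 1200):.2f} Gbps")   # 2.76 - still well under 10
```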
More generally, it could allow more of the things that are currently on the PCI/USB bus to become external, and could become a more flexible replacement for USB. Scanners, CD writers, audio devices, you name it.
This sort of thing might also have applications for cluster computing, allowing more sorts of things to be done with clusters since you have higher inter-node bandwidth.
How much can someone consume? Wrong question.. (Score:2)
Off the very top of my head I can think of a couple of ways for me to digest huge amounts of data over the internet. For one, how about an uncompressed HDTV stream?
What about video games that don't require a hard drive (and are then more secure)?
Hell, how about losing the hard drive altogether and just having a dumb terminal?
Nah, asking how much data can one person consume is a lot like saying that building a hard drive over 20 gigs is stupid cause it wil
Stop asking (Score:2)
"Don't worry about it, just provide us with the bandwidth and we'll figure out a way to use it."
Seriously, there's really no telling WHAT will take off until people get their hands on it, start tinkering, and start doing things.
For starters, how about upping the quality of the media we transfer? Storage space is increasing and becoming chea
Well... (Score:2)
I'll be building DRBD [drbd.org] clusters in a blink of an eye.
Actually I already do on 1gbps
Redundancy is good.
10Gbps? How about 10Mbps? (Score:2)
4.7 Gig for a 2 hour DVD is under 6Mbps.
The average consumer probably won't buy more than 10Mbps.
Sure, we'll all want 10Gbps, but not many would be willing to pay extra for it (unless someone comes up with something even more bandwidth intensive than video).
A publisher might need more overall, but they can probably get by just fine with 100Mbps and a contract
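The DVD arithmetic above, sketched out:

```python
# Average bitrate of a 2-hour movie filling a 4.7 GB DVD.
dvd_bytes = 4.7e9
seconds = 2 * 3600
print(dvd_bytes * 8 / seconds / 1e6, "Mbps")   # ~5.2 Mbps - under 6, as claimed
```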
Fast Networking and NAS at home (Score:3, Interesting)
I beg to differ. Sit tight.
Here's an idea for you geeks that for some reason nobody is busy doing yet.
Quite a few IT people I know run some form of Linux or BSD server at home, doing a variety of stuff from fileserver to firewall to mail/DNS server etc., though on their desktops they run 2K or XP for reasons such as gaming, simplicity, wife, and so forth.
Here's the idea. Pool all your hard drives at home on the Linux/BSD box, configure a software RAID-5, share it using Samba, and network-boot all the 2K/XP machines at home from this network-attached storage. Using Gig ethernet, of course.
What do you get? Every box gets a system drive "Drive C" that can go at 100MBytes/sec. RAID-5 redundancy for all your machines at home. Hard drives, which generate heat and noise, are no longer in your computers.
The benefits are enormous.
There's a small con though - you won't be able to drag your computer to a LAN-party (unless you drag the server too).
Currently there is a shortage of one element though: software that can boot Win2K/XP using PXE from a fileserver. Such software exists in the commercial world and is made by a French company called Qualystem, which doesn't sell it in less than 1-server+25-client licenses, which cost a whopping 2750 Euro. They show zero interest in smaller clients. A second product, Venturcom BXP, does the same but falls short, as it requires a dedicated server that only runs on 2K/XP/2K3 - no BSD/Linux with Samba for you.
If someone in the open-source community were to pick up this glove and write a small driver that emulates a hard disk for 2K/XP on one side (the kind you throw in for a RAID controller by pressing F6 when installing Windows), and uses SMB
We'd also realize that Gigabit Ethernet is not enough, as a cheap 4-modern-ATA-drive RAID-5 setup (which effectively streams data across 3 of them, one of the four being used to store parity info at any given moment) writes at 40MByte/sec x 3 = 120MByte/sec, and reads at 60MByte/sec x 3 = 180MByte/sec.
The Gigabit Ethernet _will_ pose a bottleneck.
If we add more drives, the bandwidth requirement broadens.
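A sketch of that bottleneck, using the per-drive figures assumed above:

```python
# 4-drive RAID-5: 3 drives' worth of data stream in parallel; parity absorbs the 4th.
write_mb_s, read_mb_s, data_drives = 40, 60, 3

gige_mb_s = 1e9 / 8 / 1e6                                 # 125 MB/sec, overhead ignored
print("array write:", write_mb_s * data_drives, "MB/sec") # 120 - right at the ceiling
print("array read: ", read_mb_s * data_drives, "MB/sec")  # 180 - past it
print("GigE ceiling:", gige_mb_s, "MB/sec")
```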
There's also the small issue of the PCI bus, your server must have its ethernet off the PCI bus, like in Intel's 875 chipset, nVidia's nForce 250 or on a PCI-Express card. Otherwise the IDE and GB will choke each other on the too-narrow PCI bus.
Anyway, once people start doing this, 1000BaseT is back to where 100BaseTX has been for 5 years - choking. I say: bring on 10GbE!
First of all... (Score:3, Interesting)
Try to find a host OS with a TCP/IP stack that can properly utilize 1 gigabit ethernet, let alone 10 gigabits. Hint: It ain't Linux...
Try to find a storage solution that can read or write that fast. I'm thinking something like EMC with about 6-8 2 gigabit HBAs using Veritas DMP (dynamic multi-pathing).
Try to get all of the above, along with a 133MHz 64-bit PCI-X bus that still can't actually keep up with 10 gigabits of data. (133MHz 64-bit PCI-X is only about 1064 megabytes per second, not counting overhead.)
The problem is, right now, the rest of the parts of a system just can't keep up with 10 gigabit ethernet. The only box that I would use that can handle that many I/O paths to storage (we're talking six to eight 64-bit 66MHz 2-gigabit FC host adapters) is a Sun Fire 6800 or something larger. The problem is, Sun doesn't yet support PCI-X, so now your 10 gig ethernet card is going to be limited to a 66MHz 64-bit PCI version, which will only transfer a maximum of 512MB per second, not counting overhead. That is less than half of the available bandwidth of 10 Gig Ethernet.
You can forget about putting it in any Intel-based system. There are not enough I/O busses and I/O controllers in even the beefiest Xeons or Opterons to handle this much bandwidth (to disk).
Also, if your application doesn't need to write all of that data to disk, then how large is this dataset in memory that needs to be transferred at 10 gigabit speeds? If you had a server with 64GB of memory, it could transfer its entire memory set over 10 gigabit ethernet in less than 60 seconds.
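Sketching those bus and memory numbers (decimal units, overhead ignored):

```python
# Can the bus keep up with 10 GbE, and how fast does 64 GB of RAM drain?
ten_gbe_mb_s = 10e9 / 8 / 1e6          # 1250 MB/sec needed
pcix_mb_s = 64 / 8 * 133               # 133 MHz 64-bit PCI-X: ~1064 MB/sec
pci66_mb_s = 64 / 8 * 66               # 66 MHz 64-bit PCI:    ~528 MB/sec

print(pcix_mb_s / ten_gbe_mb_s)        # ~0.85 - even PCI-X falls short
print(pci66_mb_s / ten_gbe_mb_s)       # ~0.42 - the 64-bit/66MHz PCI case above
print(64e9 / (ten_gbe_mb_s * 1e6), "s to drain 64 GB of RAM")  # ~51 s
```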
A far better, and more economical solution, if you really need 10 gigabits of data throughput to the network, would be to use the same Sun server, and a product called Sun Trunking, which allows you to bond multiple gigabit ethernet interfaces together. You get all of the throughput you want, plus more fault tolerance. I've set it up before, and you can have a continuous ping going, across 4 connections, and pull 3 of those 4 connections and the ping keeps going, without even a dropped packet. It's really fault tolerant, and uses your existing switches, NICs, and hardware, without forcing you to upgrade your entire core switch architecture.
Not just Sun anymore (Score:3, Insightful)
Define "properly". If you mean efficiency, that's desirable but not critical. If an Intel/Linux server is 75% the efficiency of a Sun server, yet costs 30% the price, you can install two or three for the same bucks. That's efficiency of a sort too, yes?
Try to find a storage solution that can read or write that fast.
Well, in terms of raw sustained bandwidth, this doesn't
10 Gb/s, that's it. (Score:4, Interesting)
And that's just the tip of the iceberg. Back when the 300bps modem came out, they figured the speed was as fast as anyone needed, because it was near impossible for anyone to type more than 30 characters per second. Then the 1200 and 2400bps modems came out, and they thought those were as fast as anyone needed, because almost no one can read at that rate. Then the 9600 and 14.4k, because it takes almost no time to load the next 80x25 page of colored text. Then the 33.6k and 56k modems (still the fastest modems for one normal telephone line) - you can now download a 300x200x256-color picture in no time. As bandwidth increases we find new ways to max it out, and with increased bandwidth we also come up with new ways of using the computer, because it can now do things it couldn't before.
someone's trying to sound important (Score:3, Interesting)
The term "cardinality" is wrong for several reasons. First, image data isn't represented as sets, it's represented as ordered sequences, and when talking about ordered sequences, both computer scientists and mathematicians talk about their "length", not their "cardinality".
Furthermore, what matters is not the size of what you want to transmit, but the rate at which you need to transmit it. We call that the "data rate" or (somewhat sloppily) the "required bandwidth".
So, the overall point of the article, that there is no single media stream that requires 10 Gbit bandwidth, is correct. However, that's pretty much irrelevant: file servers, video servers, and aggregate usage still require that kind of bandwidth. A family of four might require that bandwidth. You might want that bandwidth to have your backup happen in 1 minute instead of 10 minutes. So, there are lots of reasons to want 10 Gbit Ethernet, provided the price is right.
As for his use of the term "cardinality", the author apparently doesn't quite know the terminology of the field.
Reminds me of an old quote... (Score:3, Funny)
1G should be enough for anyone.
-- Nicholas Cravotta, 2004
640K should be enough for anyone.
-- Bill Gates, 1981
Re:Reminds me of an old quote... (Score:2)
While it will be pissed away at first, somebody will suddenly come up with an innovative idea that requires higher speed. It will go beyond simple transfer of a ripped(-off) movie or music. But it will happen.
Re:That's exactly the quote I remembered (Score:4, Funny)
I wish my computer had less RAM... so that a system dump takes a bit less time..
Ah, but that is because (Score:2, Insightful)
Re:That's exactly the quote I remembered (Score:3, Interesting)
He programmed on a Mac, and the compilation typically took 5 to 10 minutes. Enough to get a cup of coffee, check the newspaper and have a quick chat with a cow-orker. Then he got a new Mac, and it compiled the program in a minute or so. No time for coffee, no time for news, no time for smalltalk.
So the new, faster computer was too fast... he had to wait at his desk more with the new computer.
Re:What would I do with this much bandwidth? (Score:5, Informative)
Re:What would I do with this much bandwidth? (Score:3, Insightful)
Re:What would I do with this much bandwidth? (Score:3, Informative)
Oh, do we all know that? That's funny, I think that these [emc.com] people [netapp.com] seem to know [sun.com] something [fujitsu.com] different.
Re:What would I do with this much bandwidth?-Music (Score:5, Informative)
Re:What would I do with this much bandwidth?-Music (Score:2, Insightful)
(In other words... true, 10Gb per second isn't available from New York to Hong Kong today... but in 2014, that'll be standard... if not so-three-years-ago.)
Re:Play original quake obviously (Score:3, Interesting)
1. A stream of data being pumped via UDP over a WAN which has a satellite link bang in the middle of it. Very high latency (i.e. the bits take >600ms to get to the other end), but data can be sent at "wire speed" as there is no acknowledgement of each packet required == potentially massive bandwidth.
2. A large file being FTPed over the same WAN link. FTP typically runs over TCP/IP. TCP requires acknowledgement of each packet being sent. TCP (wrongly) interprets
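The root issue is the bandwidth-delay product: TCP can only keep one window's worth of unacknowledged data in flight. A sketch, assuming a 600ms round trip:

```python
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
link_bps, rtt_s = 10e9, 0.6

bdp = link_bps * rtt_s / 8
print(bdp / 1e6, "MB must be in flight")                          # 750 MB
print(64e3 * 8 / rtt_s / 1e6, "Mbps with a classic 64KB window")  # ~0.85 Mbps
```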
Re:Cant use the bandwidth anyway (Score:2)
Re:Cant use the bandwidth anyway (Score:2)
unless you're the kind of wanker^H^H^H^H^H^H person that likes to buy a $300 car and put $3000 rims on it.
Re:it goes without saying (Score:2)
Presidential Bioinformatics (Score:5, Funny)
100 Megabytes per chromosome
x 23 chromosomes per gamete
x 20 million gametes per ejaculation
Therefore Ms. Lewinsky can consume roughly 46,000,000,000 megabytes
(assuming that there is no overflow to a dress)
How much can you consume?
Re:Presidential Bioinformatics (Score:5, Funny)
A single gamete has 1.5 billion individual base pairs. Of course, that's base-4, since DNA doesn't work off of binary. ACGT is what you're made of.
The fact that I just corrected you is pretty sad as well.
Re:Presidential Bioinformatics (Score:2)
A Lewinsky joke just seemed more appealing to the slashdot audience to express the idea of DNA size being an enormous data handling problem (especially if I wasn't sure of my numbers as you've pointed out).
My sister is the real genetic statistician, but I have a mild interest in her field just because of the massive size of data sets involved in those calculations.
I did expect the first reply to my post to be one comparing Monica's data cap
Re:Presidential Bioinformatics (Score:2)
Argh! No more! (Score:4, Funny)
Yeah, I know it's popular, but geez. Not all of us are spending our time gazing and wanking. Some of us actually code (and even talk to women!)
I hereby banish this to the Beowulf cluster of memes, along with Soviet Russia/Hot Grits/Profit!
Re:Argh! No more! (Score:2, Funny)
And don't bash hot grits. Hot grits was cool.
Re:Argh! No more! (Score:5, Funny)
I hereby banish this to the Beowulf cluster of memes, along with Soviet Russia/Hot Grits/Profit!
Umm, ya. Well done. The, um, banishing of things into..... popularity. That'll be effective. We all know how unused each of those oft-repeated jokes are. Oh, wait......
damn.
All your base are belong to porn?
Re:Argh! No more! (Score:2)
Lemme give you some advice, from one
Don't trust anyone who downloads porn but won't admit it.
If we don't get massive bandwidth sooner rather than later,
half the internet traffic will be spam
and the other half will be porn
Re:Posts mentioning Porn (Score:2)
Us geeks without girlfriends, our pr0n usage scales O(n^2) with bandwidth. So if there's one thing we want, it's more bandwidth, and better compression.
And some of us are teenagers.
Re:Posts mentioning Porn (Score:2, Funny)
No, thanks :-)
Re:Posts mentioning Porn (Score:2, Informative)
I think it's you who is the naive child.
Porn continues to be one of the leading drivers of technology (war being the other one), having "made" the VCR, VideoCD, color printing, video streaming, and many other industries.
Porn also continues to be a serious business, with the New York Times (May 18 cover story) claiming pornography has $10-$14 billion in annual sales - bigger than any major sports league.
The porn industry employs 12,000 people in C
Re:Posts mentioning Porn (Score:3, Funny)
You must be new here. [slashdot.org]
Speaking as a heterosexual female in a committed relationship, even I enjoy watching pornography every once in a while. It's not a terrible thing. Besides, after you hang around male geeks for a while, you'll realize that many of them are s
Re:Porn! (Score:3, Funny)
It [slashdot.org] was [slashdot.org].
Simple answer(s) (Score:2)
I'm sure I missed a few things, but I hope I hit all t
Re:HDTV baby! (Score:3, Insightful)