10-Gigabit Ethernet Standard Approved 311
A little birdie brings news that the 802.3ae standard for 10 Gigabit/second Ethernet has been approved. Everyone out there with Gigabit Ethernet - you are now officially obsolete. The new standard is fiber only, no more of that nasty copper stuff.
Approved != Affordable (Score:3, Insightful)
Re:Approved != Affordable (Score:2, Interesting)
Re:Approved != Affordable (Score:2)
Hell, where I work, we're still patting ourselves on the back for getting rid of that rotten old coax. We'll probably be languishing in 100Base-T land for a while yet.
The early adopters of 10Gb ethernet are certain to be:
Universities
e-Commerce/ISP outfits
Large corporation's data centers
There is still plenty of life left in gigabit ethernet. In fact, it is still just gaining momentum.
not obsolete (Score:3, Insightful)
Re:not obsolete (Score:4, Informative)
Re:not obsolete (Score:4, Insightful)
On a side note, I have successfully pulled 130Mbytes/sec out of 5400 RPM IDE disks on 3ware controllers, at a cost of less than $9000: 3 controllers, 24 disks, 64-bit 33MHz PCI, RAID 0 over 5. So the potential is there to exceed current GigE without too many disks or controllers, or getting too expensive.
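As a sanity check on those numbers (a quick Python sketch, taking the post's figures at face value and ignoring GigE framing overhead):

```python
# Rough comparison of the 3ware array's measured throughput
# against gigabit Ethernet's theoretical payload rate.
array_mb_per_s = 130          # measured: 24 disks on 3 controllers
gige_mb_per_s = 1000 / 8      # 1 Gbit/s = 125 MB/s, before overhead

print(array_mb_per_s > gige_mb_per_s)   # the array can outrun GigE
print(round(array_mb_per_s / 24, 1))    # MB/s contributed per disk
```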
It would also help a lot if we could get regular gigabit ethernet working well first. I think there was a story here on Slashdot not long ago showing that most GigE cards had trouble pushing over 400Mbits even with large frames. Only the expensive $500 one came close to its full potential (900Mbits). My experience is that without jumbo frames, there is hardly any advantage to lower-end GigE cards.
Re:not obsolete (Score:5, Informative)
The other problem is that even high-end Intel-based servers could easily choke when dealing with not only 10 Gig Ethernet but also Fibre Channel, multiple channels of Ultra 160 or Ultra 320 SCSI RAID, etc., since memory bandwidth (and processor bus speed?) would then become the next bottleneck. RISC servers don't have that much of a problem just yet, but sooner or later they will.
Re:not obsolete (Score:2, Informative)
The "northbridge" of the Intel E7500 supports two PCI-X busses (more information about the chipset can be found here [intel.com]).
The ServerWorks GC series supports PCI-X configurations ranging from 2 independent busses (the GC-SL) up to six (the GC-HE). Specs on the ServerWorks stuff are located here [serverworks.com].
I'm not completely sure whether the AMD Hammer chipsets will include PCI-X support initially, but if you were to give up AGP 8x (which isn't really needed on a server), you could turn that into a PCI-X bus to support a single 10 Gig Ethernet controller.
Of course, there is still the bottleneck of the memory subsystem which can make or break a high-end system.
Re:not obsolete (Score:2)
Re:not obsolete (Score:2)
If a network has more than two nodes, as most networks do, then no single node is expected to saturate the network. Think of it as adding more lanes to a highway, rather than increasing the speed limit.
Re:not obsolete (Score:2)
Re:not obsolete (Score:2, Funny)
Good point. We need something like 3GIO [intel.com]. Plus, something has to be done about the bandwidth between the northbridge and the southbridge. Right now it is at 266 MB/sec, with plans to increase to 533 MB/sec.
Re:not obsolete (Score:3, Informative)
Cisco 12000 10Gb line card [cisco.com]
or like this:
Catalyst 6500 10Gb line card [cisco.com]
Cisco did announce these a while ago.
Re:not obsolete (Score:3, Insightful)
In the real world a company deploying this is likely to have hundreds if not thousands of machines all connected at once.
Re:!not obsolete (Score:2)
Re:not obsolete (Score:2, Insightful)
Re:not obsolete (Score:2)
Let's see, that's about 1GB/s, so I can download the three ISOs for MDK8.2 in oh... 3 seconds or less!
Re:not obsolete (Score:5, Insightful)
What an informative link. (Score:5, Informative)
Fine, I'll do it myself. (Score:2, Informative)
In meaningful terms (Score:5, Informative)
1 LoC (Library of Congress) = 10 Terabytes [jamesshuggins.com] = 10,000 Gigabytes
That's 0.000125LoC/sec, or roughly 2.22 hours to transfer the entire contents across 10GigE.
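The arithmetic checks out (a quick Python sanity check, using the 10 TB figure above and ignoring protocol overhead):

```python
# Verify the "Library of Congress over 10GigE" arithmetic above.
loc_bytes = 10e12                    # 1 LoC = 10 terabytes
link_bytes_per_s = 10e9 / 8          # 10 Gbit/s = 1.25 GB/s

seconds = loc_bytes / link_bytes_per_s
print(seconds)                       # 8000.0 seconds
print(round(seconds / 3600, 2))      # 2.22 hours
print(round(link_bytes_per_s / loc_bytes, 6))  # 0.000125 LoC/s
```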
Wow.
Re:In meaningful terms (Score:5, Funny)
Re:In meaningful terms (Score:4, Informative)
It's 8:1 for storage, but generally 10:1 for network ratings (an example [mathworks.com] - more for serial ports, but it still applies), thanks to a start and a stop bit sent with every byte. Sometimes (rarely), throw in a parity bit for good measure.
Mind you, that's still only 2.78 hours.
Re:In meaningful terms (Score:5, Informative)
I'm not very familiar with 10GigE technology yet, but my brief research shows that it uses 64B/66B coding (i.e., 2 overhead bits out of every 66). Running at a line rate of 10.3125Gbaud, that gives you a full 10Gbps of throughput, or 1.25 GB/sec.
100baseT uses 4B/5B coding, which does result in 2 overhead bits out of every 10, just like your serial line example. However, 100baseT actually runs at 125MHz, so you do get a real 12.5 MB/sec out of it.
Of course, if you really want to be picky about "LoC/sec" or whatever pointless measure the popular media has latched onto this week, you need to consider the overhead of TCP headers, whether or not you want to allow jumbo frames in your calculations, and so on.
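The line-coding arithmetic in the comments above works out like this (a quick Python sketch of raw signalling rate times coding efficiency; framing and protocol overhead ignored):

```python
# Payload rate after line-coding overhead is stripped.
def effective_rate(line_rate, data_bits, total_bits):
    """Signalling rate scaled by the code's data/total bit ratio."""
    return line_rate * data_bits / total_bits

# 10GigE: 64B/66B at 10.3125 Gbaud
print(effective_rate(10.3125, 64, 66))   # 10.0 Gbit/s
# 100baseT: 4B/5B at 125 Mbaud
print(effective_rate(125, 4, 5))         # 100.0 Mbit/s
```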
Re:In meaningful terms (Score:2)
Streaming digital holographic porn, anyone?
It's not fair! (Score:5, Funny)
*looks at the article*
*looks at his modem*
*cries*
Re:It's not fair! (Score:3, Funny)
Re:It's not fair! (Score:2)
Re:It's not fair! (Score:2)
BTW I have some 2BaseT NICs, available at a good price
Re:It's not fair! (Score:2)
If you're referring to ThinNet, that's 10Base2.
Thicknet is 10Base5.
With this announcement (Score:5, Insightful)
10Gb speeds should be enough for anybody, so start building the infrastructure now and leave the telcos in the dust.
Will they do it? No. Why not? Because they think that they should bury the copper/fiber hybrid cable that they have been burying and come back and do it again later.
Burying cable is the most expensive part of telecomm.... retards.
Re:With this announcement (Score:3, Funny)
Just like 640K is enough for anybody.
HH
--
Re:With this announcement (Score:4, Interesting)
Re:With this announcement (Score:2)
U of Waterloo tunnels [sentex.net]
Re: Enough for anybody (Score:2, Insightful)
Just like 640KB of RAM should be enough for anybody?
Re:With this announcement (Score:2)
10Gb speeds should be enough for anybody, so start building the infrastructure now and leave the telcos in the dust.
It doesn't matter how much bandwidth you give me, I will always want more. And so will the people who design software to run on higher-bandwidth networks.
Re:With this announcement (Score:2)
Here's a real example from SoCal: by the spool, telephone line itself costs about $5/foot. But the total cost to lay underground lines is about $40/foot. (Compare to stringing now-mostly-prohibited overhead lines at $16/foot.)
Re:With this announcement (Score:2)
Wrong! Copper is already strung around every city and home in America (probably a hefty portion of the world). And, there's a standard for gigabit over copper:
Deployment Guide [intel.com]
[PDF] [10gea.org]
It's limited to 100 meters, but for communities, home networks and any switched network, I don't see a point in passing up what is already laid in the building. For future digs, they could go either way, and I'll agree fiber is the way to go. But let's not ditch copper just yet...it seems to have some usefulness left in it.
Orbital RAM (Score:3, Interesting)
The speed of light will be a hindrance to your plan.
Re:With this announcement (Score:2)
Yes, that's a good observation... You have to watch out for the latency, though! Today most software hardly "maxes out" DDR/RDRAM memory because of fetch latencies. Then consider that the pure speed-of-light delay to Korea is on the order of 0.1 seconds... There's no way you're going to be able to run general-purpose software with that kind of latency; heck, you'd even be able to see the delay on each mouse click!
(come to think of it, this implies that physical size is a fundamental limiting factor on the speed of computers - it does no good to have an infinitely fast CPU if its parts can't communicate rapidly due to speed-of-light delays...)
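The order-of-magnitude claim holds up (a rough Python sketch; the ~9,000 km path length and the 2/3-of-c propagation speed in fiber are my illustrative assumptions, not from the post):

```python
# Rough one-way propagation delay to Korea, as in the comment above.
C = 3.0e8                 # speed of light in vacuum, m/s
FIBER_FACTOR = 2 / 3      # typical slowdown of light in optical fiber
distance_m = 9.0e6        # assumed ~9,000 km path

one_way_s = distance_m / (C * FIBER_FACTOR)
print(round(one_way_s * 1000))   # ~45 ms one way
print(round(2 * one_way_s, 2))   # ~0.09 s round trip, i.e. order 0.1 s
```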
Hairy smoking golfballs (Score:3, Informative)
Researchers have realized this for decades. Before enormous silicon chip densities became ordinary, engineers at IBM (IIRC) used to say that the future of computers was "hairy smoking golfballs". This captured a number of important characteristics of very fast computers.
Since those days, Intel and its competitors have fulfilled all of these predictions except for the spherical shape, which is much more difficult and not as important as the other characteristics.
A Pentium 4 is hairy - those 55 million transistors have a lot of connections; and smoking, as anyone whose CPU fan has broken can attest. It's smaller than a golfball in cross-sectional area. That size isn't just to make them more convenient! If a physically bigger CPU would be faster, you can bet someone would be building them.
Re:tacions for a signal? (Score:2)
Unless, of course, you've invented subspace radio.
Quite L0000ng (Score:2, Funny)
...
iSpeed
http://www.ispeed.com
I prefer copper (Score:2, Funny)
anyway, why use fiber, when you can have copper and squeeze it between doors, windows and everything that closes away the server's hum from a peaceful, quiet home? As far as I know, fiber would go *snap* in a door.
OK, having said this, 802.11 should rule. But it's too expensive. *sniff*
ineiti
Unless... (Score:2)
Oops, never mind (Score:2)
Copper vs. Fiber (Score:5, Insightful)
Re:Copper vs. Fiber (Score:2)
Oh the possibilities! (Score:4, Funny)
whither Cat6? (Score:3, Interesting)
Is 1000BaseTX the end of the line for copper? Or will there eventually be a 10000BaseT that will run on Cat6?
Re:whither Cat6? (Score:3, Informative)
Re:whither Cat6? (Score:3, Informative)
I'm sure that there will be a copper spec for 10 gigabit too, it's probably just not ready yet. Consider that people will be wanting to use this on the backplane of embedded network hardware, and blade servers.
What you can do with 10Gbps (Score:3, Funny)
Download a typical 100K pr0n JPG: 0.00008 s
Umm...
Download a 650Mb ISO: 0.52 s
Hm...
Download 2 650Mb ISO's: 1.04 s
Eeh?!
Download 100 650Mb ISO's: 52 s
Wow!
Download 1000 650Mb ISO's: 8.7 min
Jeez!
Download an image of CowboyNeal: 12.31 hours
Bah... Tech still need to catch up.
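The times in the list above can be reproduced with a few lines of Python (reading "650Mb" as 650 megabytes and ignoring all protocol overhead):

```python
# Transfer times at 10 Gbit/s (payload rate 1.25 GB/s, no overhead).
rate = 10e9 / 8                      # bytes/s

def xfer(megabytes):
    return megabytes * 1e6 / rate    # seconds

print(round(xfer(650), 2))              # 0.52 s  (one ISO)
print(round(xfer(2 * 650), 2))          # 1.04 s
print(round(xfer(100 * 650)))           # 52 s
print(round(xfer(1000 * 650) / 60, 1))  # 8.7 min
```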
Re:What you can do with 10Gbps (Score:2)
Huh? Are you using some kind of submolecular scanning technology? Besides... who would want an exact duplicate of him anyway?
Re:What you can do with 10Gbps (Score:2, Troll)
Download a 650Mb ISO: 0.52 s
Download 2 650Mb ISO's: 1.04 s
Download 100 650Mb ISO's: 52 s
Download 1000 650Mb ISO's: 8.7 min
You forgot one:
Download Windows 2003: 1h 35m 12s
Oh Man! My Network was finally upgraded to 100Mbps. (Score:2)
How big and busy is your network though? (Score:2, Insightful)
In most cases, small files are sucked down well before your bandwidth usage ramps up that far. And even larger files would probably only be sucked down a few seconds faster (mainly because of the speed of the storage medium on your system).
No more copper-bandits (Score:2, Funny)
Re:No more copper-bandits (Score:2, Funny)
Hit me with the clue-stick please! (Score:3, Interesting)
1) The link shows it has been approved by "Revcom" - who are Revcom, and why should I be interested in their approval?
2) Seeing as ethernet seems to speed up by an order of magnitude each time, why does the standard not allow for many more x10 jumps?
3) How far is 10Gb Ethernet from getting to the consumer/business market?
Re:Hit me with the clue-stick please! (Score:5, Informative)
IEEE
Consider yourself hit with clue-stick.
Re:Hit me with the clue-stick please! (Score:2)
I know of companies that have had 10 gigabit ethernet chips working internally for over 3 years now. They were just waiting for the standard to come out. Now they'll tweak their chips to meet the standard and release them. You should be seeing these in stores Real Soon Now(TM). Expect them to cost between $1k and $3k per HBA at first though. They probably won't reach an affordable level for 5 years or so. We still haven't completed the transition to gigabit.
As if 1000BaseT didn't suck enough CPU cycles (Score:4, Interesting)
Am I the only one who thinks the only efficient 10GigE NICs are going to be PCI-X cards with an onboard 2.6 GHz P4 co-processor and 512 MB of buffer?
Re:As if 1000BaseT didn't suck enough CPU cycles (Score:5, Interesting)
What would happen is that the OS (Linux) would be intercepted at the socket layer and hand the data to the network card. The card would then handle the process of building the packet and all the remaining layers of communication.
This left a large amount of main-CPU time free for actual processing while the network card's CPU focused on the TCP/IP packet work. IIRC, you could saturate a 1Gb line with data at only 5% main CPU usage.
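As a sketch of the kind of per-packet work such a card absorbs, here is the classic Internet checksum (RFC 1071) in Python - one illustrative example of per-byte host work, not the card's actual firmware:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement checksum over a packet buffer.
    This per-byte loop is the sort of work a TCP offload engine
    moves off the host CPU."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length buffers
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF
```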
Re:As if 1000BaseT didn't suck enough CPU cycles (Score:4, Interesting)
My company once did this for a 25MHz DEC machine. We discovered that you then need a new protocol to the adaptor card, and the overhead of that protocol is equal to a well-tuned TCP/IP stack. So if they can actually make this work, what it really means is that Linux hackers should spend some time tuning the stack.
Note, though, that tuning the stack may come at the expense of maintainability or flexibility.
Re:As if 1000BaseT didn't suck enough CPU cycles (Score:2)
How long until these things are actually in the hands of consumers?
Re:As if 1000BaseT didn't suck enough CPU cycles (Score:2)
Sure, today. I'm still glad to see that networking standards are being pushed forward well in advance of computing equipment. 10Mb/sec Ethernet was hard for computers to keep up with when it first came out, but I'm glad they didn't wait for the computers to catch up before establishing the standard.
New low-end desktops today can comfortably handle 100Mb/sec, no problem. High-end machines handle 1Gig/sec. In about 5 years, by Moore's law, new high-end machines will be able to use up all of that 10Gig/sec Ethernet point to point. And Ethernet is supposed to support all the machines on a LAN.
The fact that we can barely support 10Gig/sec Ethernet now seems pretty irrelevant to me.
Re:As if 1000BaseT didn't suck enough CPU cycles (Score:2)
That is what switches are for. You don't need to share bandwidth with other computers on the LAN. If you have too much traffic between two computers, and are saturating the link to either of them, then you have a network design issue.
Re:As if 1000BaseT didn't suck enough CPU cycles (Score:2)
TSMC is currently in production at 130nm, starting 90nm process by the end of this summer and jumping to 65nm process in 2005. IBM says 40nm is the final limit for CMOS transistor gates. If it is indeed the middle of 2002, then I don't think Moore's Law is going to be holding in five years.
Of course there are always multiple-processor configurations, advances in circuit design and better nanotech (since processor designs already are properly classified as nanotech at the 0.1 micron level) and all sorts of things to push the limits one more time, but Moore's Law and CMOS are more or less at the end of the road once you're dealing with resolutions of a few dozen atoms, which is what you've got at 40nm. And that information is according to the people who have the most to gain by denying it -- IBM, Intel, TSMC, UMC, etc.
great for backbones and so forth but... (Score:2)
Slashdot Effect (Score:4, Funny)
Re:Slashdot Effect (Score:2, Funny)
Don't be too proud of this technical monstrosity you've constructed. The ability to transfer the Library of Congress in less than three hours is insignificant compared to the power of the slashdot effect.
Rats. It would have been funny if you hadn't gotten to it first. :)
ah, the pleasures of a government job (Score:2, Funny)
yes, i know i'm not pushing my connection at all @ 700K, and i know 10-gig ethernet wouldn't make a rat's ass of a difference, but i like to gloat (/. on mozilla 1.0 takes, oh, 0.981 seconds to load and render)
Wow, the implications (Score:2)
Switch prices (Score:4, Insightful)
Of course they've probably come down a bit in the last few months...
Pushing the limits of hardware (Score:2, Interesting)
Companies won't have hardware in their labs until early next year, so don't expect to see any 10Gb NICs at Best Buy any time soon.
10Gb over copper? Won't happen! (Score:4, Interesting)
Six months ago, I had the chance to talk with the 3Com technical manager who was on the board drafting the spec.
What he said was very simple; all tests indicated that the only way to have 10Gb over copper is to limit the connection distance to centimeters!
1Gb already pushed the envelope for copper, using all pairs, multiplexing, and error correction; 10Gb is just not possible.
Re:10Gb over copper? Won't happen! (Score:2)
Why is that bad? Centimeters is plenty for backplane traffic.
Also, who said it had to be over CAT5?
bus is the limiter now (Score:2)
Do the math - even on a "high end" server:
Sun SBus - 25mhz x 64bit = ~800mbps
PCI 33mhz x 32bit = ~1000mbps
PCI 66mhz x 64bit = ~4000mbps
And that, of course, is the raw speed for the whole bus. It's shared between multiple devices - and even then you usually can't get the real theoretical maximum throughput.
Until busses at least 3x faster than 64/66 pci become common on server hardware, this will only be realistically deployable as network infrastructure (eg Inter-Switch Links between high end Cisco Catalysts). Even at 3x 64/66 pci, one 802.3ae card will saturate the bus.
Of course 10Gbit Fibre Channel is also coming down the pipe soon - hopefully between the two there will be a real drive for newer bus architectures to actually go mainstream in the server market.
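The figures above are just clock times width (a quick Python check of theoretical peaks, before arbitration overhead; note that 25MHz x 64 bits actually comes out to ~1600, not ~800):

```python
# Raw bus bandwidth from the list above, in Mbit/s (MHz x width).
def bus_mbps(mhz, bits):
    return mhz * bits

print(bus_mbps(25, 64))   # Sun SBus: 1600
print(bus_mbps(33, 32))   # 32-bit/33MHz PCI: 1056 (~1000)
print(bus_mbps(66, 64))   # 64-bit/66MHz PCI: 4224 (~4000)
```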
Re:bus is the limiter now (Score:2)
oops, my quick mental math led me astray - SBus would be ~1600mbps, not ~800mbps. In any case, doesn't change the point
Can you say Distributed Computing? (Score:2)
Re:Can you say Distributed Computing? (Score:2)
It will make things better, but 10G ethernet doesn't match the speed of an SMP interconnect. If a processor has a 133MHz DDR bus (like an AthlonMP), that's ~17Gbps. You might assume that by the time 10G ethernet is widely deployed, processors will be using 200MHz DDR busses for ~25Gbps. It's also considerably lower latency across those little copper traces on the board than going through 10G ethernet.
Technology will improve for all sorts of networks and busses, but it will almost always be universally true that a tight interconnect inside a single machine will perform better than an externally cabled network between machines.
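The interconnect figures above check out (a Python sketch; the 64-bit data path is my assumption, in line with the AthlonMP example):

```python
# Peak bandwidth of a DDR front-side bus, in Gbit/s.
def ddr_bus_gbps(clock_mhz, width_bits=64):
    return clock_mhz * 2 * width_bits / 1000   # DDR: 2 transfers per clock

print(ddr_bus_gbps(133))   # 17.024 Gbit/s (~17, AthlonMP-class bus)
print(ddr_bus_gbps(200))   # 25.6 Gbit/s (~25)
```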
Fiber only - for the moment (Score:4, Insightful)
Then eight months later somebody figures out how to run it on old lamp cords and string.
Don't rush out to buy fiber unless you need the noise isolation (glass is great for that!) and don't care about the cost.
Many are missing the point here (Score:3, Informative)
And remember, Intel isn't the only hardware platform out there. While I don't know of a hardware platform that can fully support the speeds needed, there are some that can support better than 4000 Kbps now.
Re:Many are missing the point here (Score:2)
I think everyone is supporting better than 4000 kbps now: 4000 Kbps=4Mbps
Arg. (Score:4, Insightful)
What use is this? I can think of a few... (Score:3, Interesting)
Another good use is the emerging use of iSCSI, or SCSI over ethernet. 1Gbps ~ 100MBps, but more likely around 60-80MBps in practice. With 10Gbps, a SAN based on iSCSI will actually be able to use the throughput of those SCSI drive arrays.
Eventually this will trickle down to the desktop, but not right now. So it doesn't really matter what PCI can handle - this isn't presently meant for it. BTW, 64-bit 133MHz PCI-X gives roughly 8.5Gbps, so if you have a dedicated PCI-X bus to that adapter, you can nearly handle it with today's technology.
Obsolete? (Score:4, Insightful)
This is not THE new standard, it is A new standard.
It is THE standard for 10Gbps ethernet. Nothing more.
Gigabit is hardly obsolete when a) very few corporate networks are using Gigabit outside the server room, and...
Your average workstation can probably not even push 10Gbps, or anywhere near it in the first place. (Of course, that's not as big a deal, because it's ethernet, right? A single host can't max it out anyway.. the higher capacity means more hosts with lower latency.)
Bandwidth bottleneck on motherboard - AGP? (Score:4, Funny)
Anyone knows the MTU? (Score:4, Interesting)
100MBit maintained the same MTU as 10MBit, 1GBit maintained the same MTU too - leading to severe problems with performance. It's bad enough on 100Mbit, it's horrible on 1Gbit, to think that they maintained the 1500 byte limit on 10Gbit gives me the shakes...
Yes, I know about "jumbo frames", and I challenge you to find an affordable 1Gbit switch that actually supports it.
Anything below 64KByte packets would be insane, as I see it.
Anyone know?
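To see why the MTU matters at this speed, here is the frame rate a saturated 10Gbit link implies at various packet sizes (a simplified Python sketch counting payload bits only, no preamble/header/gap accounting) - every one of those frames costs per-packet CPU and interrupt work:

```python
# Frames per second needed to fill 10 Gbit/s at a given MTU.
LINK_BPS = 10e9

def frames_per_sec(mtu_bytes):
    return LINK_BPS / (mtu_bytes * 8)

print(round(frames_per_sec(1500)))    # ~833,333 frames/s at the classic MTU
print(round(frames_per_sec(9000)))    # ~138,889 with jumbo frames
print(round(frames_per_sec(65536)))   # ~19,073 with 64KB packets
```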
What's so bad about copper? (Score:4, Funny)
Re:Why Bother ? (Score:3, Funny)
Re:Can I actually use that? (Score:3, Informative)
latency? (Score:2)
HP's TruCluster is designed to use a proprietary cluster interconnect called Memory Channel, with a bandwidth of about 100 MB/s. Gigabit Ethernet is cheaper, but can't compete when it comes to latency. Any idea how we can expect this new standard to compare?
Re:latency? (Score:2)
The main clusters to benefit would be ones like Beowulf which tend to use "commodity" hardware (not that 10Gbps is commodity... yet!).
Re:Can I actually use that? (Score:2)
No. The answer is easy because the only reasonable way to connect high speed to a standard PC is via the standard 33MHz 32-bit PCI bus. That is 1056Mbps theoretically, but you'll be lucky to see 900Mbps in practice. Therefore an average PC can almost utilize the speed of gigabit ether. 10 gigabit is out of the question.
The solution is to go to PCI-X which is a 64-bit 133MHz bus in version 1. Even then, the theoretical bandwidth of ~8Gbps cannot saturate 10Gbps ether. PCI-X version 2.0 will provide 266MHz and 533MHz speeds, but whether that ever shows up in a standard PC is doubtful.
Re:Beowulf Cluster (Score:2, Informative)
Re:Beowulf Cluster (Score:2)
Re:Will Cable companies be using this? (Score:2)
LOL.
Could someone please explain to the idiot above that unusual/old networking technologies do not necessarily fall into the "tokenring" category? That used to be my duty here on slashdot, but I'm getting really burnt out with it.
Cable companies almost universally use what is known as DOCSIS. Not token ring. Not even close. Token ring is not FDDI, nor ARCnet. They are all different things, using different cable types in different environments. They are not ethernet or 10base2/5.
Re:Caveman (Score:2)
-- Frozen Caveman Sysadmin