Linux Software

FastEther NICs for UNIX?

Patrick Darden asks: "Alacritech has a series of high-performance FastEther NICs that offload the IP stack onto ASICs. They call it Session Layer Interface Card (SLIC) technology and claim that it increases TCP performance tremendously. PC Magazine has reviewed this card twice (the latest here) and shows 16-400% speed boosts over other NICs. They have single-port, dual-port, and quad-port models. These are for NT only right now, but Linux drivers are in the pipe. Intel has a NIC geared towards servers that they claim decreases CPU usage tremendously, but only for NT. 3Com has a similar NIC, also only for NT (afaik). What is the best FastEther NIC for Linux? Are there any performance roundups? Any studies based on real criteria? Any real performance figures?" What about FastEther drivers for the other Unices out there? There was a similar Ask Slashdot on this topic about two months ago. Is this a substantially different technology, or just more of the same under a different name?

"I'm also curious about Gigether and ATM? Until now, I have always chosen NICs for Linux by compatibility and driver maturity. At this point, it seems reasonable to grab the better NICs and have a shootout, if someone else hasn't already. If anyone is interested in helping, or knows of a similar study, please send me an email."

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    One of DALnet's IRC servers (running Linux) managed 38,000 TCP connections, and that was in production use, not some cheesy ZDNet benchmark.

    If the people running benchmarks at ZDNet knew how to run operating systems, they'd have real jobs.
  • by Anonymous Coward on Saturday April 28, 2001 @09:53AM (#260035)
    That's got nothing to do with the TCP stack. The maximum number of open connections is a parameter in /proc; you can set it so the box allows 65,535 TCP sessions at most, or just 1. No OS has an inherent limit on the number of sustained TCP connections; even Windows 2000 can be tuned to keep 65,535 connections alive. (See the sketch after the comments for the relevant knobs.)
  • by Anonymous Coward on Saturday April 28, 2001 @10:54AM (#260036)
    Hardware checksums alone aren't enough; the card must also support scatter-gather DMA.
  • The ice is thin, and the zero-copy patch is slightly misnamed. In this case 'zero-copy' does not mean 'zero copies from user space' but 'zero copies in kernel space'; if there is a user-space to kernel-space transition, there is still one copy involved. Zero copy in kernel space eliminates all the copies in two cases: for kernel-level servers (NFS, for example), or if the application is using sendfile() (see the sketch after the comments).

    Why not zero copy from user space too? Because it turns out to be quite expensive in non-obvious ways, especially on SMP systems: to do it right you must play games with page permissions and related bookkeeping, and on SMP systems that apparently requires cross-CPU synchronization to ensure that all CPUs pick up the correct new page permissions -- which, as you can imagine, gets expensive.

  • by AtariDatacenter ( 31657 ) on Saturday April 28, 2001 @04:00PM (#260038)
    I think this is similar to this Slashdot article from almost two years ago:

    TCP Equipped Ethernet Card [slashdot.org]
    Josh Baugher writes "A 100 megabit Ethernet card with a TCP/IP stack built in. They claim to be able to do 9 megabytes/second with only 2% CPU load (compared to 4.5 megabytes/second at 98% receiving CPU load using Windows NT TCP/IP; read about this on the "geeks" mailing list)."

  • by bbk ( 33798 ) on Saturday April 28, 2001 @08:34AM (#260039) Homepage
    Some of the newer cards can do IP checksums in hardware, a feature the newly introduced zero-copy networking code needs in order to run at full speed. This saves a noticeable amount of processor time on servers and other heavily network-loaded systems. And supposedly your Quake ping gets a bit better ;-). (A driver-side sketch appears after the comments.)

    A list of drivers and more information is available here: http://lwn.net/2001/0111/a/zero-copy.php3

    BBK
  • by AntiBasic ( 83586 ) on Saturday April 28, 2001 @03:34PM (#260040)
    About damn time I say. Solaris and *BSD, of course, have had this capability for ages.

    Adding support for zero-copy transmit in Linux has been a major chore, since the networking stack and drivers were designed around skbuffs that contain only a single linear buffer per packet, whereas other implementations (e.g. BSD, Solaris, NT) allow fragmented packets where the header is in one fragment and the payload is in the other fragments (the payload can be fragmented if it crosses a page boundary). In short, it lets the Linux TCP/IP stack skip the copy across the user-space/kernel-space boundary before transmitting (a server's ratio of transmit to receive traffic is roughly 10:1).

    It looks like not all of the drivers handle the fragmented skbuffs yet. It's a start...

  • Well duh, no shit: the more work you offload to hardware, the faster the machine becomes.

    That's not always true. TLB loading is faster in software (running on general-purpose circuits) than in dedicated hardware.

    If you really think about it, the whole idea of an OS is a little absurd. In fact, it wasn't until computers actually became "powerful" that it was even considered reasonable to let some middle-layer software (an OS) do the work of dedicated hardware.

    What hardware ever took the place of an OS? Sounds like you'd prefer a DOS-like program loader over an OS.

  • I remember working with lame old System/3x machines that had dedicated controllers (which were coprocessors) for I/O of all sorts. They seemed to do OK. You're right that it would be really interesting to see how more modern hardware would perform under the same conditions, with an OS or services layer that took advantage of smart I/O devices...
  • by SlashGeek ( 192010 ) <petebibbyjr@@@gmail...com> on Saturday April 28, 2001 @01:33AM (#260043)
    Well duh, no shit: the more work you offload to hardware, the faster the machine becomes. I just can't wait for the day we no longer require operating systems. It's no accident that things like console gaming stations have such incredible capabilities. The trick will be getting the hardware vendors to agree on standards so the software houses can write programs for it. Admittedly, with such a system hardware can quickly and systematically become outdated. But it's no secret that if you want good performance, you start with the hardware. You wouldn't run some hog of a program to emulate what a video card does, so why make the OS do all the work of TCP/IP? Particularly in a server, where TCP/IP may occupy much of its cycles.

    This NIC may not be black magic, but I like to see things like this. It's a step in the right direction. If you really think about it, the whole idea of an OS is a little absurd. In fact, it wasn't until computers actually became "powerful" that it was even considered reasonable to let some middle-layer software (an OS) do the work of dedicated hardware. High-level languages may hold a lot of benefits for standardizing software development, but using them for an OS comes at the expense of performance. As efficient as modern operating systems may be, nothing is as fast as software running directly on hardware. In a perfect world, an operating system would do little more than the core tasks: multitasking, memory management, and so on. For what's left of the OS functions, an embedded-type processor could handle those, working in conjunction with the chipset. This would leave the processor almost 100% available for what it is supposed to do: run applications. I would estimate at least a 50% increase in speed for most applications from eliminating operating system overhead.
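Following up on the comment about the connection ceiling being a /proc tunable: on Linux the practical limits are the system-wide file handle ceiling in /proc/sys/fs/file-max and the per-process descriptor limit (RLIMIT_NOFILE), since each open TCP connection costs a descriptor. A minimal sketch of reading and raising them, assuming a stock Linux box; the program is illustrative and not from the thread.

    /* Illustrative sketch: where Linux keeps the limits that cap how many
     * TCP sockets a process can hold open. Error handling kept minimal. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* System-wide ceiling on open file handles, exposed through /proc. */
        FILE *f = fopen("/proc/sys/fs/file-max", "r");
        if (f) {
            long file_max;
            if (fscanf(f, "%ld", &file_max) == 1)
                printf("system-wide file-max: %ld\n", file_max);
            fclose(f);
        }

        /* Per-process descriptor limit; one per open TCP connection. */
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            printf("per-process soft/hard: %lu/%lu\n",
                   (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

            /* Raise the soft limit to the hard limit; going beyond the
             * hard limit needs root. */
            rl.rlim_cur = rl.rlim_max;
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");
        }
        return 0;
    }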
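The sendfile() route mentioned in the zero-copy comment is the one an ordinary user-space server can take. A minimal sketch, assuming Linux sendfile(2) pushing a file down an already-connected TCP socket; send_file() is an illustrative name, not an API from the thread.

    /* Illustrative sketch: sending a file to a connected TCP socket with
     * sendfile(2), so the payload never makes the kernel -> user -> kernel
     * round trip. "sock" is assumed to be an already-connected socket. */
    #include <fcntl.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int send_file(int sock, const char *path)
    {
        struct stat st;
        off_t offset = 0;
        int fd = open(path, O_RDONLY);

        if (fd < 0 || fstat(fd, &st) < 0)
            goto fail;

        while (offset < st.st_size) {
            /* The kernel pushes pages of the file straight into socket
             * buffers; with a checksum-offloading, scatter-gather NIC it
             * never has to copy or even touch the data. */
            ssize_t n = sendfile(sock, fd, &offset, st.st_size - offset);
            if (n <= 0)
                goto fail;
        }
        close(fd);
        return 0;

    fail:
        if (fd >= 0)
            close(fd);
        return -1;
    }

A real server would call this once per accepted connection; the loop matters because sendfile() is allowed to return after writing fewer bytes than requested.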
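And the driver side of the hardware-checksum / scatter-gather discussion: a rough sketch of what a 2.4-era Linux driver advertises and what a zero-copy skbuff looks like when it reaches the transmit routine. my_nic_probe / my_nic_xmit and the elided DMA-descriptor code are placeholders; the feature flags and skb fragment fields are the kernel's own names, but treat this as an illustration, not a working driver.

    /* Illustrative sketch of the driver-side requirements for zero-copy
     * transmit on a 2.4-era kernel. Placeholder driver, not real hardware. */
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static int my_nic_probe(struct net_device *dev)
    {
        /* Tell the stack this NIC can checksum packets itself and can DMA a
         * packet from several discontiguous buffers. Without both flags the
         * stack falls back to linearising (copying) and checksumming in
         * software, which is exactly the work zero-copy tries to avoid. */
        dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM;
        return 0;
    }

    static int my_nic_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        int i;

        /* A zero-copy skb is a short linear header (skb->data) plus page
         * fragments; the header gets one DMA descriptor and each fragment
         * becomes one more scatter-gather descriptor. */
        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
            skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
            /* ... queue a descriptor for frag->page at frag->page_offset,
             *     length frag->size ... (hardware-specific, omitted) */
            (void)frag;
        }
        return 0;
    }

Presumably this per-driver work is why the lwn.net page linked above lists only a handful of supported drivers: each one has to be taught to walk the fragment array.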
