Killer NIC Hands-On Testing 134
basscomm writes "IGN has gotten their hands on the 'Killer' NIC recently mentioned here on Slashdot and have written a two-part article detailing their impressions: 'The performance boost we got out of the Killer NIC in this testing exceeds Bigfoot Networks' own claims of 10-15% gains by a long shot and certainly seems to validate the potential of the technology. We suspect, however, that the fact that these computers were marginal at running F.E.A.R. in the first place had an impact in the comparison. In many cases the non-Killer NIC machine became absolutely bogged down as particles flew and grenades exploded, enough so that the entire machine would hang for a moment as things got sorted out. Obviously this murdered average fps figures.'"
and they say video games don't make you violent (Score:5, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2)
From http://linux-net.osdl.org/index.php/TOE [osdl.org]
TCP Offload Engine (TOE) is the name for allowing the network driver to do part or all of the TCP/IP protocol processing. Vendors have made modifications to Linux to support TOE, and these changes have been submitted for kernel inclusion but were rejected.
A TOE net stack is closed source firmware. Linux engineers have no way to fix security issues that arise. As a res
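To make the "offload" idea concrete, here's a minimal Python sketch of the kind of per-packet arithmetic a TOE computes in silicon instead of on the host CPU: the RFC 1071 Internet checksum that IP and TCP headers carry. (The sample bytes are arbitrary.)

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum -- the kind of per-packet
    arithmetic a TOE computes in silicon instead of on the host CPU."""
    if len(data) % 2:
        data += b"\x00"                     # pad to an even byte count
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Arbitrary sample bytes; a packet whose checksum field is filled in
# correctly verifies back to 0.
data = b"\x45\x00\x00\x1c"
csum = internet_checksum(data)
print(hex(csum))                                          # -> 0xbae3
print(internet_checksum(data + struct.pack("!H", csum)))  # -> 0
```

When the same logic lives in closed firmware, as the parent says, nobody outside the vendor can patch it.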
Well if that isn't a nifty idea (Score:1)
Re: (Score:2)
Re: (Score:3, Informative)
nforce 5 (Score:1)
Re: (Score:2)
Also, spending 40-60 bucks on spell- and grammar-checking software looks like it might be well worth it in your case.
(first and last nazi post for me)
Huh, I never would have thought (Score:1)
Re: (Score:2)
After reading both articles, I am really interested in this product. I am REALLY looking forward to more real-world numbers. I did find it interesting that the guy doing the review still seemed skeptical and suggested watching for more reviews. Though now we have a physics card and a "Killer NIC" card... which do you buy first? A new CPU?
Re: (Score:2)
According to the article, the card resulted in "...a more than 65% performance gain" in framerates.
that's a huge increase with a capital bold 'H'... Huge.
Few questions though:
--would a regular card help instead of using onboard?
--after reading this, how much does an onboard sound card hurt fps compared to a dedicated card?
This opened a Pandora's box of questions I'd like answers to.
Re: (Score:2)
Re: (Score:2)
Killer NIC? (Score:4, Funny)
Re:Killer NIC? (Score:5, Funny)
http://gearmedia.ign.com/gear/image/article/729/7
You could use the "K" shaped heatsink as a Shuriken and kill someone with it.
Re: (Score:2, Insightful)
Qapla'!
Re: (Score:1)
You could use the "K" shaped heatsink as a Shuriken and kill someone with it.
Or use it to stop zombie Tupac from releasing another album from beyond the grave.
Re: (Score:1)
"Show me a network with a collision and I'll show you a network that needs one less user."
Before anyone asks... (Score:5, Interesting)
Yes it runs Linux...
If you did a double take at the specs of the Killer NIC's NPU, you weren't alone. It's dramatic overkill for the everyday network processing the card will encounter. That doesn't mean it's useless, however. Far from it, as a matter of fact. The Killer NIC is actually running an onboard Linux build that handles all its networking duties and, best of all, is entirely accessible to the end user via console prompt or with what Bigfoot Networks is calling Flexible Network Applications (FNA).
Now, does it run *IN* Linux? Probably not.
This is a pretty cool concept - a self-contained VM in hardware to handle your whole networking stack.
It could have potential security benefits as well, in that it would likely be impossible to use, say, a buffer overflow exploit in a networking protocol against this card, because the overflow would occur *inside the VM*. All that would happen is your NIC would suddenly die - not *great*, but better than having your machine compromised. The host OS could probably even detect this lockup and 'reboot' the VM on the card.
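Purely as a sketch of that last idea -- Bigfoot documents no such API, and the class and method names here are invented -- a host-side watchdog could look something like:

```python
# Hypothetical sketch only: Bigfoot documents no such API, and the class
# and method names here are invented. The idea is that the host polls a
# heartbeat from the card's onboard stack and "reboots" it on a hang.

class CardStack:
    def __init__(self):
        self.alive = True
    def poll_heartbeat(self) -> bool:
        return self.alive      # stand-in for reading a real status register
    def reset_card(self) -> None:
        self.alive = True      # stand-in for a real device reset

def watchdog_tick(card: CardStack) -> str:
    if card.poll_heartbeat():
        return "ok"
    card.reset_card()          # the overflow killed the NIC, not the host
    return "reset"

card = CardStack()
card.alive = False             # simulate the on-card stack hanging
print(watchdog_tick(card))     # -> reset
print(watchdog_tick(card))     # -> ok
```

The host never touches the attacker-controlled state; it just power-cycles the whole on-card stack.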
Re:Before anyone asks... (Score:4, Interesting)
Re: (Score:2)
Or, to put it a little more succinctly, "a machine".
Re: (Score:3, Funny)
In other news, the day I trust IGN to do hardware reviews is the day I just give up, buy whatever the internet tells me to and spend 30 minutes punching the monkey to win.
Re: (Score:2)
Seriously, I can see the need for this in server machines as a network frontend more than gaming will ever really need it. But if it's a way to get it to market, so be it. I hope they have the smarts to make a "professional" version that markets it this way.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
This is not a new concept; it's called TOE (TCP Offload Engine) and is a standard feature of high-end networking cards (especially 10GigE cards). The problem with TOE is that it completely screws up a *properly written* OS TCP stack, which is why the Linux networking people have pushed back strongly on it (the Windows ones might have as well, I don't know).
Intel is now pushing something called IOAT (I/O Ac
Misunderstanding (Score:4, Informative)
This card is a complete top to bottom stack (as complete as Linux's stack is, since it *is* Linux's stack). The host OS's networking layer is totally bypassed and all commands are given to the card's stack. It's not really the same thing as TOE at all.
Re: (Score:2)
How is that going to work with things like the Microsoft ISA Firewall Client, Zonealarm-style software firewalls, or Google Desktop? Is the answer "not at all"?
Re: (Score:2)
For the cost of one of these... (Score:3, Insightful)
Re: (Score:2, Informative)
Good luck finding a modern gaming motherboard with a vacant PCI slot if you already have a separate sound card and Ageia PhysX card, since most gaming boards (SLI, etc.) have 2 or 3 PCI slots and usually one is unusable as it is being covered by the video card.
This card is nowhere near worth it unless you're a Linux junkie who surprisingly has money to burn.
ROI, bitches (Score:5, Insightful)
Which is why spending 300 bucks on a NIC is such a retarded move. Why not spend that money to upgrade the video card, or add more ram, or do something that's going to bring the level of the machine up a few notches?
Re: (Score:2)
In the meantime, this product will exist in the domain of the bleeding-edge early adopter crowd.
Re: (Score:2)
Small gains? Have you completely forgotten how much better the 3dfx Voodoo made games look? It was like night and day. That's why 3d accelerators became mass-market products.
Re: (Score:2)
Re: (Score:2)
Sure, the improved image quality of Voodoo Graphics and the Rendition Verite were amazing, but they also cost $200-300 (just like this NIC).
I personally waited until I could pick up a Rendition card for around $100 before I jumped on the 3D bandwagon. So did most people. The 3D generation didn't really take off until 1998, when you co
So many ways to "fix" this "review". (Score:4, Insightful)
But, the PRIMARY problem is that they're running the test on two different machines. Even if they're the same make/model/etc, it doesn't matter.
Another item is that you SCRIPT the test. You don't play the game itself.
And, finally, related to what you were saying, you get a machine that does not have trouble running the app in the first place. Upgrade the video card, get a better processor, OR RUN A LESS DEMANDING GAME!
And put a SNIFFER on the network to find out what is happening on the wire. If we're talking a hub, a card that spews packets is going to outperform a card that obeys the protocols if they're played on the same network.
This "review" reads like a crappy ad for that card. There's no real information.
Re: (Score:2)
You may be right, and upgrading the video card or other parts might improve the network performance just as well. On the other hand, it might not.
Re: (Score:2)
No, what they *said*, was "Dropping $300 to switch between one generation of graphics card and the next will generally get you a lot less than a 10% fps gain."
Which is a pretty fucking generic statement, considering the specificity of the results. This bullshit article would have held a lot more water had they run a sc
If they're overloading the CPU... (Score:1)
That's basically the case... (Score:2)
And according to IGN's description, that's also how the Killer NIC handles the problem:
- it's basically a small Linux router shrunk to fit onto a PCI card, plus drivers that directly tap network traffic from Windows before it even enters the win32 TCP/UDP stack.
- it's not supposed to magically make the *network* faster.
- it just hopes that the onboard Linux will be better at *
Confused and ignorant (Score:5, Interesting)
How is this different than any other high-end NIC with onboard processor?
By this I am referring to the high-capacity NICs which have been made for the server market for many years by various companies. E.g. Intel has had a series of NICs for ages which have (if I recall correctly) an onboard i860 CPU, RAM, etc. and its own little OS in firmware to offload the number crunching from the OS. (And a damn tiny set of drivers as well, since all that code was on the board instead of in the driver files.)
As near as I can tell this is just like any other of these NICs only somebody slapped some pretty graphics and plastic doodads on it and tripled the price.
Or am I completely off base and this really is a quantum leap in areas other than marketing...?
Re: (Score:2)
Read the article? It talks a little about
Re:Confused and ignorant (Score:4, Insightful)
This Killer "NIC" is a 400MHz computer with a NIC that fits in a slot. They replace the entire network stack in Windows with the simplest possible stuff, and the Killer does _all_ the work, including extensive queueing and lots of real-world software exceptions... I suspect a big part of what they do is making sure that when your CPU is bogged it doesn't context switch into dealing with the NIC as often...
If your CPU _ISN'T_ pegged you'll probably see no improvement at all, though.
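A toy model of that context-switching point (the packet counts and batch size are invented for illustration, but the shape of the win is real): a card that queues packets and wakes the host once per batch costs far fewer wakeups than a dumb NIC that interrupts per packet.

```python
# Toy model: the packet counts and batch size are invented for
# illustration, but the shape of the win is real -- one host wakeup per
# *batch* of queued packets instead of one per packet.

def context_switches(packets: int, batch_size: int) -> int:
    """Host wakeups needed if the card raises one interrupt per batch."""
    return -(-packets // batch_size)       # ceiling division

dumb_nic = context_switches(1000, 1)       # interrupt per packet
killer   = context_switches(1000, 64)      # card queues 64 packets per IRQ
print(dumb_nic, killer)                    # -> 1000 16
```

Which is also why, as the parent says, a machine whose CPU isn't pegged would see little benefit.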
They choose a bottleneck and then move it. (Score:2)
That's what I would expect.
Essentially, in this "test", they chose a system that (accidentally) had a specific set of bottlenecks (onboard NIC, under-powered graphics card, under-powered CPU, intensive game) and then tested against a similar system with a card designed to compensate for some of those bottlenecks.
Amazing how that works.
It's probably not (Score:2)
Check out the picture. (Score:1, Funny)
No way (Score:3, Interesting)
The test must be flawed. They should have used a separate 3Com NIC or something not onboard.
There's no way that there is a 65% performance gain because of the NIC card. That's impossible.
Re: (Score:1)
Re: (Score:2)
Re:No way (Score:4, Funny)
(symantics trolls make me sick. please kindly die)
(i bet misspelling semantics really ground your gears didnt it)
Test setup is invalid (Score:2, Insightful)
So what IGN is saying is that the Killer NIC performs better on a machine that is not the same as the control machine. IGN's results are entirely invalid. Heck, the little data that is presented isn't correctly formed.
Re: (Score:1)
Why not make it a router too? (Score:3, Interesting)
Most people who game are plugged in behind a router, because they're sharing their internet connection. We know this increases our ping, but what can we do? Well, if this card were itself a router, we might just have our answer! If it had a single LAN port, or maybe four (they'd fit), the gaming computer could be connected directly to the internet, with the rest of the home network behind it. Firewall and other network services could easily run from the on-card Linux. Really, it wouldn't need extra hardware apart from the ports themselves. Other software features could prioritize ping-sensitive packets like VoIP and game traffic, so that my roommate's BitTorrent doesn't interfere with F.E.A.R.
One disadvantage would be that the gaming computer would always need to be turned on for the router to do its job. Or maybe not: the card could have its own 12V plug and get its own power, so it stays on 24/7 even if the hosting computer is turned off. I expect this really could significantly improve ping numbers (vs standard NIC behind a router) plus it would be seriously cool.
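As a sketch of that prioritization idea (the traffic classes and priority numbers are my own assumptions, not anything Bigfoot documents), the on-card Linux could run a simple strict-priority scheduler:

```python
import heapq

# The traffic classes and priority numbers are my own assumptions, not
# anything Bigfoot documents: strict priority, lower number drains first.
PRIORITY = {"game": 0, "voip": 1, "bittorrent": 2}

class PriorityShaper:
    def __init__(self):
        self._q = []
        self._seq = 0          # tie-breaker keeps FIFO order within a class
    def enqueue(self, kind: str, payload: str) -> None:
        heapq.heappush(self._q, (PRIORITY[kind], self._seq, payload))
        self._seq += 1
    def dequeue(self) -> str:
        return heapq.heappop(self._q)[2]

shaper = PriorityShaper()
shaper.enqueue("bittorrent", "chunk-1")    # roommate's download arrives first
shaper.enqueue("game", "player-update")
shaper.enqueue("voip", "audio-frame")
print(shaper.dequeue())                    # -> player-update
```

The game packet jumps the BitTorrent chunk even though it arrived later, which is exactly the behavior you'd want at the router.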
Re: (Score:1)
Re: (Score:2)
Amazing result, but bad conclusion (Score:4, Interesting)
It's also worthwhile to note that the card is bundled with F.E.A.R. and arguably biased towards it -- perhaps the game has code to better take advantage of the capabilities of the hardware or, god forbid, artificially cripple itself if not running with the hardware. It certainly wouldn't be the first time we've seen such a claim, with the PhysX drivers showing faster performance in software-only mode on very new, very fast CPUs despite a game generally refusing to run with the added physics settings without the hardware.
What's the deal with the driver disk? (Score:2)
The idea itself is interesting, but I think the most interesting part is that it has a USB port and can theoretically be programmed to do certain other non-gaming tasks. Unfortunately, it would probably have to catch on before any interesting hacks turn up for it. It doesn't seem likely to catch on, though, unless they at least reduce the price.
Re: (Score:2)
quickly obsolete? (Score:2)
Re: (Score:2)
It's not. You can implement a reasonably complete TCP/IP stack by reading RFC-791 through 793 in somewhere around 2000-5000 lines of C code, which boils down to perhaps 64K of ROM/EEPROM space. Takes a month or three of developer time from someone who knows something about what they are doing; less if they've written network stacks before.
so if it does work I imagine that several NIC manufactu
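For a taste of how mechanical that RFC 791 work is, here's a parser for the fixed 20-byte IPv4 header (a full stack is of course vastly more than this, but each piece is about this straightforward):

```python
import struct

# Parsing the fixed 20-byte IPv4 header exactly as RFC 791 lays it out.
# A full stack is of course vastly more than this, but each piece is
# about this mechanical.

def parse_ipv4_header(data: bytes) -> dict:
    (ver_ihl, _tos, total_len, _ident, _flags_frag,
     ttl, proto, _csum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,             # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                 # 6 = TCP (RFC 793)
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }

# A hand-built sample header: 192.168.0.1 -> 192.168.0.2, TCP, TTL 64.
hdr = bytes.fromhex("450000280001000040069999c0a80001c0a80002")
print(parse_ipv4_header(hdr)["src"], parse_ipv4_header(hdr)["dst"])
```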
I would like it... (Score:3, Interesting)
"The Killer NIC also has its own USB 2.0 port, which expands its capabilities even more. A BitTorrent client designed for the NPU could run on the card and use an external USB hard drive for storage, which would make it invisible as far as Windows is concerned. Thanks to the Killer NIC's traffic prioritization capabilities, users will conceivably be able to play the most demanding games while using extra bandwidth for BitTorrent, without any performance hits due to BitTorrent CPU load or hard drive access."
Mmm...invisible bittorrent...
Re: (Score:2)
Re: (Score:2)
And even if it were that bad, I would have to spend quite a LOT more than 300 bucks like you say: I would need a new CPU, new mobo, possibly new RAM and a new video card.
Re: (Score:2)
In other words, bit torrent should be a trivial load. If it isn't, your computer could benefit from something other than a $300 NIC.
Re: (Score:2)
Re: (Score:2)
If that doesn't put the $300 price of a NIC into perspective, then yes, the maker of snakeoil NICs deserves your money more than you do.
Re: (Score:2)
And I bet you would need more than 500 bucks to build a new PC, at least a GOOD gaming PC.
The video card alone would take more than half your budget.
Re: (Score:2)
15% extra framerate? By offloading the network stack to a dedicated processor? That's like saying the network stack accounts for 15% of the PC's current load. Given that networks worked fine back when entire then-top-end PCs had less than 1% of an average modern PC's processing capacity, this is INTUITIVELY a load of BS.
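The Amdahl-style arithmetic makes that skepticism quantifiable: in a fully CPU-bound game, freeing a fraction f of each frame's CPU time scales the framerate by 1/(1-f). A quick sanity check (the fractions below are back-of-the-envelope assumptions, not measurements):

```python
def fps_gain(stack_fraction: float) -> float:
    """Percent fps gain in a CPU-bound game from freeing `stack_fraction`
    of each frame's CPU time (fps scales by 1 / (1 - f))."""
    return (1.0 / (1.0 - stack_fraction) - 1.0) * 100.0

# Bigfoot's claimed 10-15% gain implies the stack eats ~9-13% of the CPU:
print(round(fps_gain(0.13), 1))    # -> 14.9
# IGN's 65% gain would require it to eat nearly 40% of the CPU, which is
# why the bogged-down test machines matter so much:
print(round(fps_gain(0.394), 1))   # -> 65.0
```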
The only thing this benchmark shows... (Score:2, Insightful)
Just what I need for Vista. (Score:2)
blah blah FPS blah blah... (Score:2)
If there's an exploit for your TCP/IP stack on the network card, and it manages to compromise your NIC's TCP/IP stack, then it's still some way off compromising your host machine's OS? Yes/no?
I'm no expert on these things, so I'd be interested to hear from someone who IS as to whether or not it's a useful security measure...
Re: (Score:2)
No such luck. Not when compromising the NIC means you've got control of a busmastering PCI device which can use DMA to scribble malicious code into the host machine's RAM, or, conversely, use DMA to read stuff from the host machine in order to snoop on the user. Note that your standard host-based virus scanner or malware
Re: (Score:2)
Re: (Score:2)
Really dumb NICs aren't bright enough to be hacked, but the smarter ones which offload the stack are more likely to be working with very limited resources, like their initial TCP connection table (used for half-open connections during the 3WHS, i.e., exchanging SYNs) -- SYN-flooding is more likely to result in a DoS on them than when the host OS is dealing with the TC
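A toy model of that half-open-table point (the 128-entry table is an arbitrary illustration, not any real card's spec):

```python
from collections import OrderedDict

# Toy model: the 128-entry table is an arbitrary illustration, not any
# real card's spec. The point is that a small fixed half-open table
# fills under a SYN flood long before a host OS backlog would.

class HalfOpenTable:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()       # src -> connection state
        self.dropped = 0
    def on_syn(self, src) -> bool:
        if len(self.entries) >= self.capacity:
            self.dropped += 1              # full: legitimate SYNs drop too
            return False
        self.entries[src] = "SYN_RECEIVED"
        return True

table = HalfOpenTable(capacity=128)
for i in range(1000):                      # a modest SYN flood
    table.on_syn(("10.0.0.%d" % (i % 250), i))
print(table.dropped)                       # -> 872
```

Once the table fills, every further SYN -- attacker's or legitimate -- gets dropped, which is the DoS.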
Offloaded network stack? (Score:2)
For $200-300 it's still not worth it to me, but when the price becomes reasonable it might be something to think about, if the embedded stack is not vulnerable to corruption by spyware or viruses...
Re: (Score:2)
Could it compromise the host? Sure, if the host is downloading executables via that interface then adding a little special sauce to it doesn't seem like it's beyon
Don't bash snake oil (Score:1, Interesting)
Re: (Score:2)
RTFA (Score:5, Informative)
The whole networking stack runs directly on the card. 100% of all networking load is offloaded from your main CPU onto the CPU on the card.
It is **supposed** to 'intercept incoming 'ping' requests and respond from its TCP/IP stack immediately'.
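Answering a ping without waking the host really is cheap: an ICMP echo reply is just the request with the type field flipped from 8 to 0 and the RFC 1071 checksum recomputed. A minimal sketch (not Bigfoot's actual code, obviously):

```python
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 checksum, as used by ICMP (RFC 792)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(request: bytes) -> bytes:
    """Turn an ICMP echo request into an echo reply: type 8 -> 0,
    checksum recomputed, everything else echoed back unchanged."""
    body = b"\x00" + request[1:2] + b"\x00\x00" + request[4:]
    return body[:2] + struct.pack("!H", checksum(body)) + body[4:]

# Build a request: type=8, code=0, checksum, then id=1, seq=1.
req_body = b"\x08\x00\x00\x00\x00\x01\x00\x01"
req = req_body[:2] + struct.pack("!H", checksum(req_body)) + req_body[4:]
rep = echo_reply(req)
print(rep[0], checksum(rep))   # -> 0 0  (type 0, and the reply verifies)
```

That whole round trip is a handful of byte operations, which is why the card can answer from its own stack without the host CPU ever seeing the packet.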
Re: (Score:3, Insightful)
They tested one of these cards using an actual game and saw an increased framerate and better overall game experience vs. the identical computer without the card.
Re: (Score:1)
Well, sure, it's obvious a network game wouldn't run as well if you remove the network card.
I jest of course..
Re: (Score:3, Insightful)
Re: (Score:2)
Re:Snake Oil (Score:5, Insightful)
Pings were relatively similar to the standard box, though we did notice latency spikes much less often on the Killer NIC'ed machine.
So yeah....you are so right. They are merely bypassing the cpu with ping requests, and somehow that is magically giving them higher fps and a smoother gameplay experience.
You know, this whole "I'm holier than thou without even reading the article" BS on Slashdot is getting really tiresome (I have fallen into the same pit many times myself, I know). It really does inhibit intelligent debate about the article and just makes people feel so much more pompous (as evidenced by frequent use of such words as "snake oil"). Oy....
Re:Snake Oil (Score:4, Funny)
Obligatory Ref (Score:2)
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:2, Informative)
Oh, I dunno... maybe responding in 134 microseconds or less?
Re: (Score:3, Insightful)
The whole point is that the stack is offloaded to the card, so your network functions have minimal interaction with the CPU.
You might as well argue that a GPU i
Re: (Score:2)
Re:For $279.99 it better... (Score:5, Funny)
Re: (Score:2)
Yes, I said math coprocessor. As it happens AMD recently beat Intel to the punch and bought the rights to a technolog
Re: (Score:2)
Sure, you might get 200fps on a current generation game, but if we want to increase the resolution by a factor of 4 (not inconceivable) then you're back down to 50fps. Then say for example you wanted to generate 2 different images for stereo vision, you're down to 25fps... etc. That's not considering new future rendering technologies that are more accurate than the current "good enough" hack type rendering.
Dedicated hardware is more cost effective for a given
Re: (Score:2)
One thing that does concern me with the assumption that with greater "power" we'll simply want\need\use greater capabilities is the cost to develop. Right now, art costs for bigger and bigger and better games have got to be nuts. Want the resolution to be twice as good, the AI to be much better, and the rag doll effects? Spend a LOT more time building it, and then not be able to sell it for more than $50 a pop. I'm sure that tools will also get better but I fear that at some point i
Re: (Score:2)
Currently, to get decent-looking artwork in games a lot of work goes into making the textures, etc. - eventually, I'd wager that much of this can be reduced to procedural synthesis and done by computer (yet more processing power required, of course :D).
e.g., instead of drawing a metal texture, the 3D artist will simply be able to "tell" the computer that "this surface is made of silver", for example, and the render
Re: (Score:2)
Why waste your general-purpose CPU horsepower doing "dumb" and "embarrassingly parallel" tasks when it is better used for more complex AI? Multiple cores are all well and good, but my vision is that they'll continue to be used for complex tasks, and the "dumb" (or rather, as you say, embarrassingly parallel) rendering processes will continue to be better served by relatively simple, sing