Hardware Headaches Inevitable?

JaneWalker6847 writes "Don Becker, co-founder of the Beowulf project, describes the inevitability of hardware administration headaches and warns users not to expect a silver bullet to solve these problems." From the article: "We're about to see another revolution, which is in network adapters -- that we [will] talk directly to [them] from application level. That's a massive change in how you interface with them. And that brings about a new round of device drivers completely unlike the device drivers we had 10 years ago. So, that part of the world isn't going to stabilize anytime soon."
  • revolution indeed (Score:2, Interesting)

    by b1ufox ( 987621 )
    FTA

    "We're about to see another revolution, which is in network adapters -- that we [will] talk directly to [them] from application level.

    I'd hope ioctls could do much the same job, as long as there's a module properly written to handle each specific ioctl.

    Or is it more like controlling the network firmware directly from the application?!

    Sounds weird...

    Imagine a malicious program kicking your Network Adapter's butt :) ...

    • by edashofy ( 265252 ) on Tuesday August 29, 2006 @01:43AM (#15998204)
      What they're likely talking about is technology like Chimney [microsoft.com], which, barring lawsuits, will be coming out around the time of Windows Vista. Effectively, instead of the TCP/IP stack coming from the OS and running on the main processor, the network card will have a processor and memory and run the TCP/IP stack there. This increases efficiency and reduces latency because the main CPU doesn't have to get involved as much. In the future we will probably see things like SSL encryption being performed on the card as well.
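
      A rough way to see why this matters: the old networking rule of thumb is that TCP processing in software costs on the order of 1 Hz of CPU per 1 bit/s of throughput. A back-of-the-envelope sketch in C (the rule of thumb is approximate, and the 3 GHz figure is just an assumed example):

          #include <stdio.h>

          /* Rule of thumb: ~1 Hz of CPU per 1 bit/s of TCP throughput
           * handled in software. All constants here are illustrative. */
          int main(void)
          {
              double cpu_hz = 3.0e9;                      /* a 3 GHz core */
              double rates_bps[] = { 100e6, 1e9, 10e9 };  /* 100M/1G/10G  */

              for (int i = 0; i < 3; i++)
                  printf("%6.0f Mbps -> ~%3.0f%% of the core\n",
                         rates_bps[i] / 1e6, 100.0 * rates_bps[i] / cpu_hz);
              return 0;
          }
          /* Prints roughly 3%, 33%, and 333%: software TCP is nearly
           * free at 100Mbps, but at 10Gbps a single core can't keep
           * up, which is the case for offload. */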
      • by b1ufox ( 987621 )
        So it's TCP/IP stack offloading, I guess :)
      • Re: (Score:1, Troll)

        It also gives us a reason to have a TPM chip in our computers.

        My CPU load never goes above 5% with my integrated etherwebs running full on @100Mbps. We don't _need_ this.
        • Re: (Score:3, Insightful)

          by addaon ( 41825 )
          Try 10Gb ethernet.
          • by Korin43 ( 881732 )
            Why? - Seriously, how many people transfer multi-gigabyte files often enough for this to be worthwhile?
            • Re: (Score:1, Funny)

              by Anonymous Coward
              you are so right!

              640KB oughta be enough for everybody!

              grtz HillBilly
            • by the unbeliever ( 201915 ) <chris+slashdot&atlgeek,com> on Tuesday August 29, 2006 @03:39AM (#15998437) Homepage
              Individuals? Not many, at least not on a regular basis.

              In a data center environment? Quite often.
            • by Nethead ( 1563 )
              You've never been in charge of a 100 racks of porn servers. Thank the goddess for FreeBSD!
            • Re: (Score:2, Interesting)

              Transferring large files is not the primary use of high-bandwidth connections; streaming data and clusters are where it really pays off. 10Gb isn't targeted at consumers or gamers, it's targeted at businesses with lots of fresh data to push around.

              Here's a short list of companies for whom 10Gb Ethernet likely comes in handy:

              • Google
              • Pixar
              • IBM
              • Amazon.com


              And then there are systems on the lower end of the Top 500 supercomputers list.
      • by Masa ( 74401 ) on Tuesday August 29, 2006 @02:20AM (#15998284) Journal
        In the future we will probably see things like SSL encryption being performed on the card as well.
        If I remember correctly, there were, once upon a time, encryption accelerator cards for servers, meant to take some load off the CPU. They were discarded as obsolete because CPU power kept growing, and in the end pure CPU calculation power was much greater than what these accelerator cards offered. It would have required constant hardware investments to keep the cards up to speed with ever-growing CPU speeds.

        It seems that these old cards are for sale on eBay and, judging by the other search results, some are still sold as new, so I guess this is still viable technology in some places: http://www.google.com/search?q=ssl+encryption+accelerator+card [google.com]

        Anyway, I just have this feeling that it's not a good idea to integrate any kind of encryption technology directly into the card.

        Hmm... Do I smell WinModem?
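
        For reference, the way applications typically hooked into those accelerator cards was something like OpenSSL's ENGINE API. A minimal sketch (error handling trimmed; "cswift" is one engine id from that era, standing in here for whatever your card's vendor ships):

            #include <stdio.h>
            #include <openssl/engine.h>

            /* Sketch: route RSA private-key operations through a
             * hardware accelerator via OpenSSL's ENGINE API. */
            int main(void)
            {
                ENGINE_load_builtin_engines();

                ENGINE *e = ENGINE_by_id("cswift"); /* your card's id */
                if (!e || !ENGINE_init(e)) {
                    fprintf(stderr, "no accelerator; software fallback\n");
                    return 1;
                }

                /* From here on, RSA ops go to the card; everything it
                 * doesn't implement stays in software. */
                ENGINE_set_default(e, ENGINE_METHOD_RSA);

                /* ... normal SSL_CTX / RSA usage, unchanged ... */

                ENGINE_finish(e);
                ENGINE_free(e);
                return 0;
            }

        That fallback branch is the point made in the reply below: the card only accelerates the subset it implements, and everything else drops back to the CPU.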
        • Re: (Score:3, Interesting)

          by hpavc ( 129350 )
          I think the downfall of these cards is the SSL standards; a given card is tricked out for only a small subset of calculations. So it can do IPSec + VLANs (in the case of that Intel server card), and the crypto cards can do SSL for webservers and VPNs (but the implementation would be whatever the card supported, not whatever Apache could do -- though I assume Apache could fall back).
        • by hackstraw ( 262471 ) * on Tuesday August 29, 2006 @04:53AM (#15998548)
          Hmm... Do I smell WinModem?

          I'm not sure what that smell is, but it's familiar.

          Yes, TCP/IP offloaders, crypto offloaders, physics offloaders, and FFT offloaders have all existed. The only offloader that has been accepted and succeeded is the GPU, and that is because it was subsidized by the high-end graphics people and people with game addictions. The cost/benefit of the other offloaders has not proven itself, especially when you consider the rate of increase of CPU speeds and the bottleneck of getting the data to the offloaded chip and back again.

          The Linux kernel mailing list has been pretty much anti-TCP/IP-offloading because the performance increase bought by the time spent optimizing the drivers was often surpassed within a few months by a faster general-purpose CPU and a generic driver that worked with all cards. This is further complicated by OS-level TCP/IP machinery like ipchains or iptables, which needs its code in the OS and not on the card. FFT and physics offloaders have not taken off because of the cost/benefit loss once you account for the time to get the data onto the card, the cost of the cache RAM to put on the card (if it's ever enough), and then the time to get the data off the card again.

          Now, what will make these things work?

          A bus that is near or at the speed of memory bandwidth, or a problem where the data does not need to go back to the host system. A GPU falls into the second category: the display needs to know what it's told, but the computer does not need to know the details. TCP/IP offloaders have failed to catch on because the price/performance benefit isn't there and because of their lack of ubiquity and commonality. Crypto offloaders are cool, especially for the geek factor of having the keys stored on the card and zeroed out when tampered with. But again, the cost of writing card-specific drivers versus doing crypto on a known CPU, plus the rate at which cheap commodity CPUs improve, often exceeds the cost/benefit of the crypto card.

          Now, give me a fast bus and the ability to use my generic system RAM with one of these offloading cards, and things could change, but until then I expect this battle between the CPU and specific PUs to continue with no real winner.
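
          To put that cost/benefit argument in concrete numbers, here is a toy model (every constant below is an illustrative assumption, not a benchmark):

              #include <stdio.h>

              /* Toy model of the offload tradeoff described above.
               * Offload pays only if on-card speed beats the cost of
               * crossing the bus twice. All numbers are made up. */
              int main(void)
              {
                  double bytes    = 1 << 20;  /* 1 MB of data         */
                  double bus_Bps  = 250e6;    /* PCI-era bus, bytes/s */
                  double cpu_Bps  = 400e6;    /* CPU crunch rate      */
                  double card_Bps = 2000e6;   /* card is 5x faster    */

                  double t_cpu  = bytes / cpu_Bps;
                  double t_card = bytes / bus_Bps     /* copy over  */
                                + bytes / card_Bps    /* crunch     */
                                + bytes / bus_Bps;    /* copy back  */

                  printf("CPU:  %.2f ms\n", 1e3 * t_cpu);   /* ~2.6 */
                  printf("Card: %.2f ms\n", 1e3 * t_card);  /* ~8.9 */
                  return 0;
              }
              /* The card loses despite a 5x faster chip: the two bus
               * trips dominate. A memory-speed bus, or data that never
               * comes back (a GPU's frames), flips the result. */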

          • memory bandwidth (Score:3, Interesting)

            by Gr8Apes ( 679165 )
            This comment makes me think again about AMD's acquisition of ATI. Would AMD put an ATI graphics core in the CPU package? (HTT allows for all the bandwidth the GPU could handle - no separate cache needed). Need a faster GPU? By the time you do - there'll be a faster CPU with a new GPU included, and this packaging might be less expensive than the current high end cards.

            This combination would also work fine for 90% of the world's computer users, and possibly be much cheaper. Think Sempron with RAM and a minisc
          • Yes, TCP/IP offloaders, crypto offloaders, physics offloaders, and FFT offloaders have all existed. The only offloader that has been accepted and succeeded is the GPU, and that is because it was subsidized by the high-end graphics people and people with game addictions. The cost/benefit of the other offloaders has not proven itself, especially when you consider the rate of increase of CPU speeds and the bottleneck of getting the data to the offloaded chip and back again.

            And every once in a while you hear an Intel e
          • RAID cards are another example of a commercially successful offloading hardware device. It is possible to do it in software, but most systems do it in hardware.
            • RAID cards are another example of a commercially successful offloading hardware device. It is possible to do it in software, but most systems do it in hardware.

              True, but only at the high-end, price-is-no-object range.

              Cheaper RAID cards are not worth using because, in the event of a card failure, the odds of losing your data are still high. I've heard that some of these things cannot read drives that were set up with a different firmware revision of the same brand of card.

              Again, for an in box card, I woul
        • by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Tuesday August 29, 2006 @08:02AM (#15998893) Homepage Journal
          Hmm... Do I smell WinModem?

          Except that WinModems are the exact OPPOSITE of the philosophy that's being espoused here with crypto offload engines, intelligent network cards, etc.

          The WinModem was an attempt to take traditional modem functions and move them onto the CPU, in software. Rather than actually having a box full of circuitry that did the hardware handshaking, data compression, and all that good stuff, you just replace it with a simple device that barely connects the analog telephone line to the computer, and have the computer do all the heavy lifting.

          I think the justification behind this approach is "software is cheap, hardware is expensive." Therefore, you put the 'brains' in software, and your dumb-hardware/smart-software combo is cheaper than the traditional combination of dumb-software/smart-hardware.

          It's a pretty radical departure to essentially go in the opposite direction, from WinModems to these kinds of "intelligent network cards," which seem more like a traditional serial modem in philosophy: they do all the work themselves and basically present the computer with a standardized data stream.

          The only way that I could see this whole business being "WinModem-like" is in it being tremendously difficult to program for on non-Microsoft OSes. But that's not a consequence of the design per se, but of how I suspect MS will choose to implement it.
          • by Masa ( 74401 )

            Except that WinModems are the exact OPPOSITE of the philosophy that's being espoused here with crypto offload engines, intelligent network cards, etc.

            Yes, I know. I was referring to WinModems because I think this kind of technology will fail in a similar fashion (well, OK, if we look at modern laptops, it can be argued that the WinModem business hasn't actually failed).

            The only way that I could see this whole business being "WinModem-like" is in it being tremendously difficult to program for on non-Microsoft OSes.

      • Re: (Score:2, Interesting)

        by modeless ( 978411 )
        Why would you do that when CPU cores are just about to become plentiful? I think multi-core computing should be the death of dedicated acceleration cards. When you have all these cores just lying around you don't need a dedicated sound DSP, network accelerator, or even (eventually) a video accelerator. On, say, a 64-core system, you could do real-time raytracing without any specialized hardware at all, and still dedicate a core each to network and sound if you wanted.

        This kind of general-purpose architec
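
        Dedicating cores that way is already possible with today's APIs; here is a minimal Linux sketch of pinning a hypothetical network-processing thread to core 0 (pthread_setaffinity_np is a GNU extension; build with cc -pthread):

            #define _GNU_SOURCE
            #include <pthread.h>
            #include <sched.h>
            #include <stdio.h>

            /* Pin the calling thread to CPU core 0 -- the "dedicate a
             * core to networking" idea, with current Linux APIs. */
            static void *net_worker(void *arg)
            {
                (void)arg;
                cpu_set_t set;
                CPU_ZERO(&set);
                CPU_SET(0, &set);                    /* core 0 only */
                pthread_setaffinity_np(pthread_self(),
                                       sizeof(set), &set);

                puts("network thread pinned to core 0");
                /* ... packet-processing loop would run here ... */
                return NULL;
            }

            int main(void)
            {
                pthread_t t;
                pthread_create(&t, NULL, net_worker, NULL);
                pthread_join(t, NULL);
                return 0;
            }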
        • Re: (Score:2, Interesting)

          Because dedicated cores can solve the same problem more cost-efficiently than general cores. Imagine how many general cores you would need to match the capabilities of a GeForce card. And while I'm not an expert in graphics, I think you would need a truckload of 64-bit cores to do real-time raytracing.
          • Re: (Score:3, Interesting)

            by raynet ( 51803 )
            Except that people are doing real-time raytracing already with today's computers. The first real-time raytracing ran on a 80486 or Pentium at a somewhat limited 160 x 120 resolution, but the best raytracers today run on dual-CPU AMDs at VGA/SVGA resolution with 100,000+ polygons at about 10-20 fps. And because raytracing can be parallelised easily and gains an almost linear speed-up, you can use the new dual-core CPUs to get even higher fps or resolutions. I also recall reading a paper claiming that a software raytracer on
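
            The "almost linear speed-up" follows from Amdahl's law, since ray tracing is per-pixel independent and the serial fraction is tiny. A quick illustration (the 1% serial fraction is an assumed figure, not a measurement):

                #include <stdio.h>

                /* Amdahl's law: speedup(n) = 1 / (s + (1-s)/n),
                 * with serial fraction s. The s = 0.01 here is an
                 * assumption, not a measured value. */
                int main(void)
                {
                    double s = 0.01;
                    for (int n = 1; n <= 16; n *= 2)
                        printf("%2d cores -> %.2fx\n",
                               n, 1.0 / (s + (1.0 - s) / n));
                    return 0;
                }
                /* 2 -> 1.98x, 4 -> 3.88x, 8 -> 7.48x, 16 -> 13.91x:
                 * close to linear, as the parent says. */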
      • in my Linux system. I want open-source network drivers that implement the TCP/IP stack without, say, phoning home. Drivers that I could compile or hack myself if I wanted to.

        /coralsaw
        • And what about a fail-safe, "zero bugs here" TCP/IP stack, running on a dedicated card and using an almost human-readable format to communicate with the kernel, or even directly with the apps?

          The firmware in such a card would not even have to be upgradeable ...

          (And don't come whining that zero bugs is impossible. MY programs have zero bugs, thanks to testing each and every possible and impossible case for everything. Takes time? Yup. Zero bugs == priceless.)
            I am not sure what you're getting at. My objection was against having my network card, the absolute outermost network connection point of my machine, become a complex beast (and thus more bug-prone than today's network cards) that can be maliciously or otherwise exploited by any outside party who knows more about the (thick) firmware stack of the card than I do.

            No piece of software, BTW, can be deemed zero-bug, unless mathematically proven to be so. And then of course there's the dimension of concurrency that adds
      • nihil novi sub sole (Score:3, Interesting)

        by spectrokid ( 660550 )
        Come on, we have seen the same before with modems. First, modems did everything by themselves, then we started seeing Winmodems which pushed a lot of the work back to the main processor. Intelligent network cards will be more expensive than letting the CPU do all the work. Of course, if Moore's law becomes harder to uphold in the future, then decentralising the computing work might be the only way to make computers run faster. Until then, it will be a niche product for computers where TCP is a significa
      • This all sounds vaguely familiar somehow...

        http://hardware.slashdot.org/article.pl?sid=06/08/08/193237 [slashdot.org]

        Killer NIC anyone?
      • Things like this are already commonly done in high performance computing, where you don't want to interrupt the CPU (which is doing real work) to service message passing requests.

        One example of a production system doing this is the Cray XT3 [cray.com]. You have a PowerPC 440 processor sitting on the card, along with a DMA engine. When a request comes in over the NIC, the card puts the data in the proper place in memory, which you specified earlier.
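
        Conceptually, the receive path looks something like the sketch below. The function names are hypothetical (the real interface on the XT3 is Cray's Portals API); stubs are included so the sketch stands alone:

            #include <stdio.h>
            #include <stddef.h>

            /* Hypothetical NIC-managed message placement, in the spirit
             * of the XT3: register a buffer once, and the card's own
             * processor + DMA engine land matching messages in it
             * without interrupting the host CPU. Not a real API. */
            typedef struct { int id; } nic_t;

            static nic_t *nic_open(void) { static nic_t n; return &n; }
            static void nic_register(nic_t *nic, void *buf,
                                     size_t len, int tag)
            { (void)nic; (void)buf; (void)len; (void)tag; }
            static void nic_wait(nic_t *nic, int tag)
            { (void)nic; (void)tag; }   /* stub: returns at once */

            int main(void)
            {
                static char inbox[1 << 20];       /* 1 MB landing zone */
                nic_t *nic = nic_open();

                /* Tell the card once where tag-42 messages belong. */
                nic_register(nic, inbox, sizeof inbox, 42);

                /* Host CPU computes freely; the card DMAs an arriving
                 * tag-42 message straight into `inbox`. */
                nic_wait(nic, 42);
                puts("message delivered without host involvement");
                return 0;
            }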

      • Sounds like the Wheel of Reincarnation [catb.org] is spinning again. Not that that's a bad thing ...
      • Re: (Score:3, Informative)

        by Detritus ( 11846 )
        The traditional problem with doing this is that when you put TCP/IP on the NIC, you still need a protocol for the operating system to communicate with the NIC, and the CPU on the NIC is much slower than the main CPU. I used to have a box full of smart NICs that people had discarded because they were more trouble than they were worth, even though they had paid premium prices for the onboard protocol processing features.
        • Re: (Score:2, Interesting)

          by Tower ( 37395 )
          It is all a question of tradeoffs, and for most situations the tradeoff of a little extra CPU against extra money for an adapter (particularly one that can increase latency) is a no-brainer. It's the same story with crypto offload.

          That being said, if you are trying to scale (think a dozen gigabit cards running at high utilization) or a significant number of high-throughput IPSec/VPN clients, then the offload hardware can really show up as a big gain. Even the OTS gigabit ethernet cards these days support of
      • by Eivind ( 15695 ) <eivindorama@gmail.com> on Tuesday August 29, 2006 @05:07AM (#15998585) Homepage
        Unlikely. "tcp offload engines" and similar crap come and die regularily.

        The problem is that general-purpose CPUs grow in power so quickly that the offload engines have an ever-harder time beating them. And they have the additional problem that they don't get packet filtering, or anything else that is not custom-written for that particular card (if it's even possible to convince the card to do it!).

        It's also nothing new -- these cards have existed for literally decades, and haven't managed to make any kind of inroads, not even in specialized servers.

        Have a look at this year-old LWN article [lwn.net] for an example listing some disadvantages.

        • Re: (Score:3, Interesting)

          by iamcadaver ( 104579 )
          This logic does not follow through when you think of GPUs.

          That's what this sounds like: giving the network card the kind of specialized bus and direct communication channels that the graphics subsystem has.
          • by Eivind ( 15695 )
            Sure. I'm not saying that it can't work in *principle*. I'm just saying that the idea is old -- literally as old as the ARPANET; the very first computers on the ARPANET had a separate, complete *computer* (called an "IMP") for doing the entire networking job.

            Later, it's been tried literally every 5 years. There actually *exist* "TCP offload engines" today; they existed 5 years ago, 10 years ago, and 15 years ago. They existed in 1969 if you count the IMPs (though those were external computers, not cards).

      • This makes me wonder about the benefits of something like this compared to a reprogrammable FPGA board. Surely that would be more beneficial as it can reconfigure itself on demand with simple software updates, and yet still offers the benefits of hardware calculation for common, logical problems like SSL encryption (as well as reduced latency).
      • Sounds similar to the Killer NIC? [slashdot.org]
    • by RuBLed ( 995686 )
      "Imagine a malicious program kicking your Network Adapter's butt :) ..."

      That's why it's titled "Hardware Headaches Inevitable"; resistance is futile...
  • by patrixmyth ( 167599 ) on Tuesday August 29, 2006 @01:49AM (#15998221)
    Imagine a BeoWulf Cluster of these #$*&#@ drivers!

    Ok, but seriously, maybe someone can answer me this. Why do we still need to construct massively parallel computing architectures at the platform level? Not saying we should toss the whole concept, but for the foreseeable future won't it make a lot more sense to stick with the Amazon model of chunking up into virtual machines? I know the FA says that this view is a mistake, but he doesn't explain why. Can anyone else?

    • by S3D ( 745318 )
      Why do we still need to construct massively parallel computing architectures at the platform level?
      Better heat dissipation?
  • Why? (Score:2, Interesting)

    by Anonymous Coward
    Isn't having a stack of software between the network card and your application a _good_ thing? Personally I like to leave the network configuration up to the OS and focus on developing the functionality of my apps.

    Besides, what about hardware abstraction? If we're talking directly to the network adapter, isn't this taking a step back into the past (remember when you had to hand-code ASM to talk to various video/sound cards back in the early days of PC demos and games?)
    • Re:Why? (Score:5, Informative)

      by jonwil ( 467024 ) on Tuesday August 29, 2006 @02:24AM (#15998295)
      What happens now (on Windows) is that applications talk to Winsock. Winsock sends the data to kernel-mode code, including tcpip.sys. From there, it ends up in ndis.sys and then the driver for your network card before being sent to the card.

      What this new thing means is that applications send the data to Winsock, which hands it directly to a new kind of network card/driver that takes the data and header info and creates the TCP/UDP and IP packets on the card itself, in card firmware. From there, the card wraps it up in the lower-level protocols and then puts it out over the wire (or the air, if it's wireless).
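
      Note that in both cases the application-side code is unchanged. A minimal Winsock send looks like this (error handling trimmed; 192.0.2.1 is just a documentation address). Everything that happens below the send() call -- segmentation, headers, checksums -- is what Chimney-style offload would move from tcpip.sys into card firmware:

          #include <winsock2.h>
          #include <stdio.h>
          /* link with ws2_32.lib */

          int main(void)
          {
              WSADATA wsa;
              WSAStartup(MAKEWORD(2, 2), &wsa);

              SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
              struct sockaddr_in addr = {0};
              addr.sin_family      = AF_INET;
              addr.sin_port        = htons(80);
              addr.sin_addr.s_addr = inet_addr("192.0.2.1");

              connect(s, (struct sockaddr *)&addr, sizeof(addr));

              /* The app's job ends here; the TCP/IP machinery below
               * this call is what offload moves onto the card. */
              send(s, "hello", 5, 0);

              closesocket(s);
              WSACleanup();
              return 0;
          }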
  • Standardization (Score:3, Insightful)

    by bn557 ( 183935 ) on Tuesday August 29, 2006 @02:43AM (#15998333) Homepage Journal
    I don't believe that most places that would benefit from an 'advancement' like this would tolerate a moving standard. If a company revises a card because the standard changes, it sure as hell had better support whatever the previous standard is because some company running 10,000 of these cards in some massive distributed application won't accept anything but plug and go.

    The idea that your original hardware vendor can provide you with a drop-in replacement for a failed card is nice, but any decent manager is going to ask what the plan is for when that vendor goes under. You need to know that you can just buy some card off the shelf, put it in, and have it work. At least with our current driver/hardware structure, you know that a 3Com NIC is going to work. Or if 3Com goes under, you can drop an Intel NIC in. You may have to install the driver for the card, but it's not going to have compatibility issues (unless you start messing with dirt-cheap hardware).

    I guess my point here is, there's no way a bunch of companies would target big business with a product like this without there being SOME standard interface. Who wants a multimillion dollar migration to 100% proprietary hardware?
  • by MrFlannel ( 762587 ) on Tuesday August 29, 2006 @03:42AM (#15998446)

    Taking pre-orders now for exciting new products to be released soon!

    Only $50 each!
    Choose from the list below, and many more!

    • MS Word Firewall
    • MS Excel Firewall
    • MSIE Firewall
    • Windows Explorer Firewall
    • and many more!

    If you purchase 10 or more, we'll give you a COMPLIMENTARY Firewall Commander (a $100 value), to simplify the process of setting up rules for multiple applications!
    </advertisement>

    Seriously, why would I WANT to have to update all of my programs because a hole was found in the networking code (which they all share -- because it's a full-featured drop-in library, BSD-licensed and everything)?

    Did anyone think this through? Or, is this a follow up to the "OS of the future" article?

    • by mgblst ( 80109 )
      You are right, it does make updating the network code more complicated - I am sure it would have flash memory, but that doesn't solve everything.
  • Well, [manufacturers], publish the specs for every new piece of hardware so that FOSS devs can write the drivers for you. It's not as if you're not using the OSes they gave you for free anyway.
  • by vtcodger ( 957785 ) on Tuesday August 29, 2006 @09:13AM (#15999140)
    The article doesn't mention it, but Donald Becker is, I'm quite sure, the guy who wrote most of the Linux NIC drivers. I think that anything he says about hardware interfaces and their future is probably worth reading.

    It looks to me like he's telling us that drivers are not likely to go away as an issue any time soon. Too bad, but if Becker says so, he's very likely right.

    • Too bad, but if Becker says so, he's very likely right.

      Intelligent people can disagree.

      Who do you listen to when the Linux core developers argue with the FreeBSD core team?

      Being highly knowledgeable in an area doesn't give you a monopoly on the truth, or the ability to accurately predict the future.
  • The KillerNIC sure sounds to me like this kind of *new* NIC:
    http://www.killernic.com/KillerNic/KillerOverview.aspx [killernic.com]
    http://hardware.slashdot.org/article.pl?sid=06/08/08/193237&tid=230 [slashdot.org]

    It seems to only offload UDP traffic, instead of being a full network stack.
    • The nForce 570 and 590 AMD, as well as the 590 Intel, do the same thing, and they have two gig-E ports with teaming.
      The KillerNIC's PCI bus will not let it hit full gig-E speeds.
  • "We're about to see another revolution, which is in network adapters -- that we [will] talk directly to [them] from application level. "

    Coming from an old-school OSI background, I am astounded that someone has said this -- and this guy is supposed to be a freakin' player, not some pimply-faced youth.

    There are very good reasons for HALs and abstraction layers.

    Some people commented that it's moving some of the stack onto the NIC, freeing the CPU to do more useful work -- but the problem is OS suppliers will still have

    • I think you miss the intended markets for those cards: hard-core gamers and high-performance streaming-media systems. Gamers in particular will pay inordinate sums to shave another millisecond off their game's ping time or get that extra FPS or two, and they don't really care how bad an idea is overall if it gets them what they want.

        • It could be (like that stupid LAN card post a while back) -- I thought it was more general; that's certainly how I read the article.

          I don't see how changing your NIC will give you a worthwhile decrease in ping times.

          Maybe at a LAN party, but I doubt they are running gigabit networks with top-of-the-line Cisco gig switches and a 10G core at the average LAN party.

          • The card's theory is that by moving a lot of the network stack into hardware you can decrease the latency due to OS driver/stack processing. It's a nice theory; pity that that part is only 1-2% of the total latency. But then, buying magic solutions to basic problems isn't new. Think of the audiophools who spend $20/foot on deoxygenated, magnetically-aligned, high-copper-alloy low-resistance speaker cables with special xenon-impregnated insulation and amorphous-bonded iridium-plated connectors becau

  • If you RTFA, you'll see he's talking about Linux, not Windows. He's probably referring to Van Jacobson's network channels [lwn.net]. But the momentum is slowing down as implementation issues are uncovered [lwn.net], and the odds are that it's probably not going to happen anytime soon [lwn.net].
  • We're about to see another revolution, which is in network adapters -- that we [will] talk directly to [them] from application level.

    If you're going to [will] insert words to correct someone else's writing, make sure [them] the changes are actually correct.
