"iSCSI killer" Native in Linux 235

jar writes "First came Fibre Channel, then iSCSI. Now, for the increasingly popular idea of using a network to connect storage to servers, there's a third option called ATA over Ethernet (AoE). Upstart Linux developer and kernel contributor Coraid could use AoE shake up networked storage with a significantly less expensive way to do storage -- under $1 per Gigabyte. Linux Journal also has a full description of how AoE works." Note that the LJ article is from last year; the news story is more recent.
  • AOE? (Score:5, Funny)

    by laffer1 ( 701823 ) <luke@@@foolishgames...com> on Monday July 31, 2006 @10:40AM (#15817123) Homepage Journal
    I didn't know Age of Empires could do network storage! WTG Microsoft!
    • Re:AOE? (Score:2, Funny)

      by Smelecat ( 7286 )
      Use AoE with caution. In a crowded data center, AoE will agro nearby equipment.
  • Will it catch on? (Score:5, Insightful)

    by andrewman327 ( 635952 ) on Monday July 31, 2006 @10:43AM (#15817148) Homepage Journal
    From TFA:
    Some significant caveats mean that not everyone is so keen on the technology. For a start, it's a specification from Coraid, not an industry standard. Its networking abilities are limited. And its detractors include storage heavyweights such as Hewlett-Packard and Network Appliance.


    So will this ever develop into a real standard or will it remain the sole domain of one company? I do not know if I want to invest time and money into it if the latter is true. From a comp sci point of view this is a great approach to networked storage. It uses what people already have to make storage relatively cheap. I am going to wait and see where this technology goes. Maybe it will blossom and become a serious contender.

    • Re:Will it catch on? (Score:4, Informative)

      by SpecTheIntro ( 951219 ) <spectheintro@gma ... minus herbivore> on Monday July 31, 2006 @10:53AM (#15817222)
      For a start, it's a specification from Coraid, not an industry standard.

      I don't know that this is true, because the LinuxJournal article directly contradicts it. (Unless I'm misreading it.) Here's what the LJ says:

      ATA over Ethernet is a network protocol registered with the IEEE as Ethernet protocol 0x88a2.

      So, it looks like the protocol has been officially registered and was granted approval by the IEEE--so that makes it an industry standard. It may not be adopted yet, but it's certainly not something like 802.11 pre-n or anything; there's an official and approved protocol.
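      Since AoE rides directly on Ethernet with its own EtherType, you can see it on the wire with a plain packet capture; a quick sketch (the interface name is just an example):

        # AoE frames use EtherType 0x88a2; capture them on the storage-facing interface
        tcpdump -i eth1 'ether proto 0x88a2'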

      • Re:Will it catch on? (Score:5, Informative)

        by hpa ( 7948 ) on Monday July 31, 2006 @11:00AM (#15817274) Homepage
        So, it looks like the protocol has been officially registered and was granted approval by the IEEE--so that makes it an industry standard. It may not be adopted yet, but it's certainly not something like 802.11 pre-n or anything; there's an official and approved protocol.

        Anyone can register a protocol number with IEEE by paying a $1000 fee. It doesn't mean it's a protocol endorsed by IEEE in any shape, way or form.

    • If it develops into a standard, it would appear that maybe it will have a niche. It sounds like a nice idea that may be worth a shot for some uses. I can't help but wonder if the higher cost of iSCSI and Fibre Channel is there for a necessary reason. The nice thing though is that even desktop systems are being made available with multiple network adapters, so one can be dedicated to this sort of storage.
    • Considering that it's non-routable, in what way is this "a great approach to networked storage"? Ethernet simply takes the place of a SCSI cable here and the device protocols differ. It lacks some of the availability characteristics of SATA/SAS and fails to solve any sharability issues because it's still a block interface. "From a comp sci point of view" it's a total dud.

      The problem with ethernet is that it's hard to make go fast. We have 1G now but 10G is difficult because of all the processing involved
      • Re:Will it catch on? (Score:3, Informative)

        by Harik ( 4023 )
        The non-routable is a killer. Protocol-level bridging, no off-site redundancy, strict dependencies on port location. No thanks, it's a toy protocol that may get some use in the home NAS market, but it was hell to implement a reliable setup in our lab under controlled conditions. I'd hate to have to deploy it 'for real'.

        The only way to really do it is to purchase a dedicated Block Controller (spare ethernet card) and a dedicated Block Data Cable (Cat 5) and hook it up to a dedicated Block Device Multiple
    • I don't know if the RFC has been ratified but the AoE spec is definitely freely available from Coraid's site. It is 8 pages long compared to iSCSI's 257, which I really like. There are open source implementations of AoE targets and initiators. It is not the domain of one company and can never be. As far as I am concerned it is already a serious contender.
  • Cheaper? (Score:4, Interesting)

    by DSW-128 ( 959567 ) on Monday July 31, 2006 @10:52AM (#15817207) Journal
    I guess I don't really see how it's cheaper than iSCSI? Sure, there's less overhead from the lack of TCP/IP, so you may not need as massive a network to drive it equally. But I've been under the understanding that iSCSI doesn't require SCSI drives, so you could build an iSCSI target out of the same machine/drives as an AoE host, correct? For some applications, I think the lack of TCP/IP might be a benefit - less opportunity to hack. (Then again, I'd expect anybody deploying something like this or iSCSI would drop the few extra $$$ to build a parallel network that transports just storage.)
    • Re:Cheaper? (Score:3, Informative)

      by hpa ( 7948 )
      The main advantage of AoE is that it's simple enough that you could build it in hardwired silicon if you wanted to, or use small microcontrollers way smaller than what you'd need to run a full-blown TCP stack (this is what Coraid does, I believe).


      The main disadvantage with AoE is that it's hideously sensitive to network latency, due to the limited payload size.

    • Re:Cheaper? (Score:2, Informative)

      by NSIM ( 953498 )
      You are quite correct, there is no requirement for SCSI drives in an iSCSI implementation. iSCSI refers to the protocol, not the drive interface, i.e. it's the SCSI command protocol implemented over TCP/IP. So yes, you can build an iSCSI system out of commodity parts and many people are doing so. If you want to get an idea of the options out there for doing this, take a look at: http://www.byteandswitch.com/document.asp?doc_id=96342&WT.svl=spipemag2_1 [byteandswitch.com]
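      As a rough sketch of how cheap that can be, exporting a plain ATA/SATA disk with the iSCSI Enterprise Target mentioned elsewhere in this thread is only a few lines; the target name, device path and restart command below are examples, not a recipe:

        # /etc/ietd.conf -- export an ordinary SATA/IDE disk as one iSCSI LUN
        Target iqn.2006-07.com.example:storage.disk1
                Lun 0 Path=/dev/hdb,Type=fileio

        # then restart the target daemon (init script name varies by distro)
        /etc/init.d/iscsi-target restart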
    • But I've been under the understanding that iSCSI doesn't require SCSI drives

      Correct, I built a Windows 2003 Cluster (just for testing, not a production system!) using a Linux iSCSI target on an IDE drive and the stock iSCSI initiators on the 2003 boxes. Performance wasn't great but it worked fine.

      With Copper Gigabit Ethernet and Jumbo frames (standard Ethernet is 1500 bytes, but disk blocks are usually 4K so you need Jumbo frames to eliminate fragmentation), I'd think you would save a lot of money over Fib

    • I've got an iSCSI setup using http://www.open-e.com/ [open-e.com] (basically a custom debian distro on a compact flash) hooked up to a 3Ware 9550SX with Western Digital RAID disks (all SATA). So the short answer is you can just use normal disks. If you really want it on the cheap you can do a single system with a single disk, although why you would want to I don't know.
    • Re:Cheaper? (Score:3, Informative)

      by tbuskey ( 135499 )
      I hacked together an iSCSI setup from some old hardware.

      2 P II 400MHz systems running FC4
      One system had software raid 0 on 2 IDE drives.
      The target has a spare 10GB IDE drive.

      Added 2 10/100T cards with a crossover cable.

      Did a quick dd if=/dev/zero count=some large number of=the raid mirror or iSCSI target.

      The iSCSI target was 30% slower.
      Way cool.
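      Spelled out, the comparison was roughly the following (device names and sizes are placeholders, and writing to a raw device destroys whatever is on it):

        # write 1 GB of zeros to the local software RAID, then to the iSCSI-backed device
        dd if=/dev/zero of=/dev/md0 bs=1M count=1000
        dd if=/dev/zero of=/dev/sdb bs=1M count=1000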
    • Re:Cheaper? (Score:3, Insightful)

      by Zephiris ( 788562 )
      The really silly thing about this is that they claim it's "lower overhead" than TCP/IP because people are having to buy "expensive TCP offloading engines" for iSCSI, when a few seconds of research, namely on Wikipedia (http://en.wikipedia.org/wiki/ISCSI), shows that plain NICs can outperform the offloading ones. And sure, it's obviously going to be lighter than TCP/IP, however, ATA over Ethernet only has basic authentication (MAC addresses, which can be forged cheerily), can't be routed, and isn't very a
  • Not sure if I follow this. Hard drives are well under $1/GB. If you buy several 400 GB drives and just connect them in an old PC that's on the network, aren't you accomplishing the same thing? I have a terraserver at home and it cost http://religiousfreaks.com/ [religiousfreaks.com]

    • If you had to commit ritual sacrifice of several religious zealots in order to pay for your Terraserver, then you may have spent too much on it.
    • by jimicus ( 737525 )
      Maybe cheapie little IDE hard disks are under $1/GB. If you want hot-swap, availability of half-decent RAID cards and disks which actually get to see some testing before they leave the factory, then you'll have to spend quite a bit more.
    • by riley ( 36484 )
      Storage Area Network solutions are not under $1/GB. Running a network filesystem (NFS, SMB, Coda, etc.) and running a local filesystem over networked storage are two different things, fulfilling two different needs.

      iSCSI and AoE don't necessarily directly benefit the small/home server market, but for the things that SANs are traditionally used for (data replication across geographically separated sites without any changes to the application software) there could end up being a big win in cost.
    • by rf0 ( 159958 )
      iSCSI is slightly different: rather than presenting a file system, it presents a hardware device. So it shows up as, say, a 1TB device over the network (e.g. /dev/sdb) and the client machine can partition that disk as if it were local. That's the advantage over just a shared network filesystem.
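      With the open-iscsi initiator, for example, attaching such a device is roughly the following (the portal address and resulting device name are examples):

        # discover targets on the storage box and log in
        iscsiadm -m discovery -t sendtargets -p 192.168.1.10
        iscsiadm -m node --login

        # the exported volume now appears as a local block device, e.g. /dev/sdb
        fdisk -l /dev/sdb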
  • Reliability (Score:2, Interesting)

    by Neil Watson ( 60859 )
    People often forget there is a considerable difference in the reliability of ATA drives versus SCSI. If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.
    • Re:Reliability (Score:3, Insightful)

      by dfghjk ( 711126 )
      How is that relevant to the discussion of protocols?

      The reliability of SCSI versus ATA is largely imagined and the rest is intentional. Drive manufacturers want you to believe their enterprise drives are more reliable, and right now those drives are largely SCSI.
    • Re:Reliability (Score:4, Informative)

      by SpecTheIntro ( 951219 ) <spectheintro@gma ... minus herbivore> on Monday July 31, 2006 @11:06AM (#15817316)
      People often forget there is a considerable difference in the reliability of ATA drives versus SCSI. If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.

      This is not necessarily true. [storagereview.com] It all depends on how your network storage is being used. SCSI drives are built and firmware'd for the sole purpose of running a server, and they consistently beat any ATA drive (be it IDE or Serial) when it comes to server performance and reliability. ATA drives just aren't built to handle the sort of usage a server requires--note that this isn't a reflection of quality, but of purpose. But a file server (which is the only thing the SAN would be used for) requires much less robust firmware than a server housing MySQL, PHP, maybe a CRM suite, e-mail server, etc.--and so ATA drives shouldn't immediately be ruled out as less reliable. The maturity of the technology plays a more important role than the interface.

    • Re:Reliability (Score:2, Informative)

      by Ahtha ( 798891 )
      I agree there are reliability problems with ATA. We expect ATA disk failures within the first year for all of our ATA RAID systems and have yet to be disappointed. ATA drives just don't seem to be able to handle the pounding they get in a RAID configuration. We still use them, however, mirroring the ATA RAID with another server/disk installation as a backup. Of course, that doubles the cost of the ATA solution, but, it's still cheaper than a SCSI solution.
    • It has less to do with the interface and everything to do with the drive mechanisms. SCSI drives are more expensive not because they use SCSI, but because the customers who use SCSI would rather pay a little more and have a drive that is more reliable.

      Even the enterprise and datacenter are starting to use SATA for the vastly superior price per GB over high speed SCSI or FC drives in tiered storage systems. Store the bulk of your data on cheap SATA drives in a RAID5, then when you use it, move it to a 15k RP
      • Re:Reliability (Score:3, Informative)

        by afidel ( 530433 )
        the odds of 3 drives failing at once are astronomical.

        No, they aren't. Just have an array running for a year or two and bring it down for maintenance; your chances of multiple drive failures are VERY good. Of course that happens even with SCSI drives, but it even more underscores the need for a premium part. Btw I just lived through a scare this weekend. We lost one drive after powering up one of our main DB servers, then lost a second about 10 minutes later, luckily the 16 drive array was setup as RAID6 i
    • If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.

      ftp://ftp.research.microsoft.com/pub/tr/TR-2004-107.pdf [microsoft.com]

      Section 3 ("Operations Experience") starting on p 16 is interesting, along with Section 4 ("Conclusion") starting on p 19:

      "We already knew that SATA disks and white-box PCs could meet the performance requirements because of testing done in October 2003 [Barclay03]. We were frightened into thinking the failure rate of the SATA di

  • Everything over Ethernet!
  • TFA isn't responding, so maybe I'm missing something but how does this new protocol actually result in cheaper costs per GB? It's already possible to get an iSCSI SAN which uses SATA drives, and one of the major cost differences is the type of drive. What else is new here?
    • Oops, only the linux journal article is down, the cnet article has answered my question: it isn't any cheaper than iSCSI + SATA solutions. $4,000 without any drives, compared to a starting price of $5,000 for a StoreVault (new from NetApp) with 1TB of storage. Other options such as Adaptec's Snap Server start just as cheap.
      • by Tracy Reed ( 3563 ) <treed AT ultraviolet DOT org> on Monday July 31, 2006 @02:31PM (#15819087) Homepage
        I think you are probably looking at the cost to buy Coraid's gear. You do not have to buy their stuff, although I am sure they prefer that you do. I built my own AoE SAN using regular PCs. Way cheaper. I take the Google approach: use a larger amount of commodity hardware and design the system in an intelligent way to achieve the same performance and reliability at a better price. Coraid hardware is basically just a Linux box with disks exporting AoE volumes. The nice thing about it is that you get their support. But AoE is so simple that you generally don't need support beyond perhaps the mailing list.
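        For anyone wanting to try the same thing, the export side is just the vblade/vbladed userspace tool from the aoetools project; a minimal sketch (shelf, slot, interface and device are examples):

          # export /dev/md0 as AoE shelf 0, slot 0 on the storage-facing NIC
          vbladed 0 0 eth1 /dev/md0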
    • Just slightly less overhead than iSCSI. It's the same shit though.
    • TFA isn't responding, so maybe I'm missing something but how does this new protocol actually result in cheaper costs per GB?

      The idea is that iSCSI uses TCP, which requires a lot of additional processing, which bogs down both the machine using the storage and the machine that contains the storage. The solution usually recommended is to buy expensive network cards that offload the TCP overhead from the main CPU. With AoE, you don't have the TCP overhead and therefore don't need the more expensive TCP-off

      • "Given the cost of main CPU cycles, though, it seems to me that most systems will have the cycles to spare on TCP overhead."

        not at 10G they won't.

        the question isn't whether iSCSI or AoE is better but rather why you would use either of them.

        Will there be any 10G NIC's that don't offer offload engines? If not, what is the advantage architecturally of AoE?

        What about direct DMA and zero-buffer?
        • the question isn't whether iSCSI or AoE is better but rather why you would use either of them.

          I think the costs and benefits are pretty clear. I'd actually love to see some drives with AoE/iSCSI built into them for home use. Need more storage for your video collection? Just buy an AoE/iSCSI drive and plug it into your switch. Although it would be done differently, unlimited expandability is the main advantage for enterprise environments as well. I don't know how this notion compares with SAN solutio

          • Direct DMA (zero buffer) is hard to do, not easy. Yes, NIC's have DMA capability but that's not what I'm referring to. Ideally you would like a data request to be satisfied between the storage device and the data buffer itself, even if it's a buffer in the app, without any intervening copies. Currently data is copied many times. IB offers this. Current filesystems don't.

            None of this is being done for the home user, so if you see this from that perspective you aren't really understanding it.

            iSCSI, and no
            • iSCSI, and now AoE, were done as competitors to FC and InfiniBand.

              Ah, I see where you're coming from now. I agree that they aren't going to compete with FC or IB in terms of performance. I see them as being useful in environments where performance is less important than low cost and easy expansion.

  • Yes! (Score:4, Interesting)

    by mihalis ( 28146 ) on Monday July 31, 2006 @10:55AM (#15817239) Homepage

    I like the look of this technology. The great thing it has going for it is that most of the non-hard-disk infrastructure (switches and cabling) leverages the tremendous investment in ethernet. That is great.

    The thing that needs work, in my view, is that the bit that links the disks and the rest isn't cheap enough. In fact what would be awesome here is if, say, Seagate provided disks with native ATAoE connectors built-in. They might have to buy Coraid for that to happen.

    In case anyone thinks I'm out of my mind here, don't forget that disks can already be had with ATA interface, SCSI interface, FCAL interface, SATA, SAS - that's five and there are probably more. Yes they might be a bit more expensive, but if they come in under the combined price of "regular ATA disk" + Coraid ATAoE disk adapter then you'd come out ahead. Someone like Seagate would, I think, have the industry-wide clout and respect to succeed in making this an open standard. Something that will be a challenge for Coraid for a long time (I have nothing against them, btw, they are friendly and their mailing list didn't spam me when I signed up).

    When I was on the OpenSolaris pilot project I tried to get people interested in using this with Solaris. I think it might be great for ZFS, for example. At that point the real storage wizards were more interested in iSCSI, but I respectfully disagree, OpenSolaris + ZFS + cheap storage = awesome file server. Emphasis on the cheap. As Sun people will admit, their previous attempts at RAID were more like RAVED (Redundant Array of Very Expensive Disk). Coraid does have a Solaris driver, so this is definitely feasible.

    • How much actual logic is needed to allow a hard drive to communicate in ATAoE? I haven't read the spec, but from the article it seems like not very much... basically the normal ATA packet needs some kind of ATAoE header prepended, and then it gets pumped directly into an Ethernet MAC.

      These days, an embedded Ethernet controller adds, say, $10 to the total cost of a device. And hard disks already have onboard intelligent controllers, so getting them to speak the ATAoE protocol shouldn't be much more than a
    • OpenSolaris + ZFS + cheap storage = awesome file server ... Coraid does have a Solaris driver, so this is definitely feasible.
      Your post is misleading. ZFS is only available on Solaris 10. The only driver available from Coraid for Solaris 10 is in beta and it does not support x86.
      • Your post is misleading. ZFS is only available on Solaris 10. The only driver available from Coraid for Solaris 10 is in beta and it does not support x86.

        Sorry if you found it misleading, but I don't agree. I was talking to Solaris -developers- about this protocol. It was a forward-looking prospect. At the time ZFS was not even released, but I liked the sound of it for some future time, and I also like the sound of AoE so I figured the two might combine nicely. If I had the time I'd get the dev hardware,

    • Re:Yes! (Score:3, Funny)

      by Lisandro ( 799651 )
      I like the look of this technology.

          It's the eyeliner. It doesn't look half as good in the morning.
  • iSCSI killer? (Score:4, Interesting)

    by apharov ( 598871 ) on Monday July 31, 2006 @11:02AM (#15817286)
    In the context of using this in low-cost environments with Linux I can hardly see how this could kill iSCSI. Last week I implemented an iSCSI setup for about 500 euros (target serves out 500GB disk space for non-critical backup) using standard components, FC5, iSCSI Enterprise Target [sourceforge.net] and Microsoft iSCSI Initiator.

    Works great and is a lot (>10x) faster than the roughly similarly priced NAS device that was used for the same task before.
    • I agree. We're really just talking about the transport layer, since the targets can be whatever kind of device the host supports and that the target unit makes available. So, AoE seems a little - redundant, I'd guess. The SCSI standard is well-defined, been around forever, so I'm not sure why a re-implementation using an ATA command set would make much sense.

      sloth jr
    • I've got an iSCSI killer.

      It's called soap.

      Jeez... didn't your parents teach you proper hygiene?
  • by Anonymous Coward
    iSCSI is a protocol. ATA disks are a physical medium. They work together, and you get the benefits of SCSI commands with the price of ATA disks. Just because iSCSI is the protocol does NOT mean that you need to use SCSI disks. You might even be talking to a RAID of ATA disks and not know it.

    So, why would you need AoE? It's already cheap, and been for sale for some time.
  • Bootable? (Score:2, Interesting)

    by Anonymous Coward
    Is it possible to boot WindowsXP via AoE or iSCSI? I want a diskless WindowsXP box.
    • The BIOS would have to support booting from AoE. OS support from the vendor is irrelevant, as the AoE driver makes the disk accessible in the same manner as an internal ATA disk.

      Since most x86 BIOSes only support PXE as their network boot protocol, I doubt it will work out of the box. Something would have to provide block-level access to the HD in order for the OS to bootstrap, and PXE doesn't do that.

      Coraid (or someone else) would have to make a bootable floppy, CD, or flash drive image that could add
    • Re:Bootable? (Score:2, Informative)

      by KingMotley ( 944240 )
      Yes, you can. Just look for an iSCSI PCIe card. It's basically an ethernet card that presents itself to the system as a standard disk controller (most are SCSI controllers, although there is no reason they couldn't make it look like an ATA controller, but you'd lose a lot of features).
    • Re:Bootable? (Score:2, Insightful)

      by wild_berry ( 448019 )
      My inexpert guess would involve getting a Tyan Thunder/Tiger motherboard with LinuxBIOS and compiling and configuring your own ATAoE support. Windows would need to think it's a local disk; LinuxBIOS could pretend that it was.
  • by cblack ( 4342 ) on Monday July 31, 2006 @11:09AM (#15817346) Homepage
    1) Complexity for RAID and volume management is not centralized and is pushed to individual hosts. One of the main benefits of SAN technology is that you can just carve out storage from a single interface and assign it to a server and the server simply sees it as a block device. With AoE each drive is addressed separately by the server, which means it is up to the server (and server admin) to figure out how to handle distributing over multiple drives, handle drive failures, and expanding volumes. This is huge.
    2) It is not a standard and is only really supported by one vendor. This may change in the future but it is significant right now. It is registered with the IEEE but that hardly makes it a peer-reviewed standard with input/improvements from many experts.
    3) No boot from SAN. Until someone makes some sort of mini bootstrap system on a CD or a hardware card implementation of AoE that can be addressed as a block device, admins will be unable to host the root filesystem and/or C: drive on an AoE SAN.
    4) No multipath (that I can see). Perhaps I misunderstand this, but it seems like there is no way to do multipath IO with this system. That is, all the drives are single-connected to a network. If that network switch goes down, all drives on that network are inaccessible.
    So AoE looks like a neat technology for pushing drives out of the box and potentially sharing them among hosts, but there is no intelligence there. It is just dumb block addressable storage with no added availability or management, and therefore is far from being an iSCSI or FC killer.
    • by Cyberax ( 705495 ) on Monday July 31, 2006 @11:26AM (#15817453)
      You can use Ethernet-based multipath IO, and a lot of switches can be stacked to provide redundancy (and load-balancing).

      AoE is a COOL thing exactly because it's a 'dumb' technology. You can buy a switch, a bunch of disk drives and AoE adapters, a small Linux PC - and your storage system is ready. There is a lot of existing RAID manipulation and monitoring tools for Linux, so RAID configuration is not a problem.

      You also can boot from SAN, it's not a problem. Just add required modules and configs to initrd and place it on a USB drive.
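      As a sketch of the "dumb but standard tools" point: each exported drive shows up as /dev/etherd/eX.Y on the client and the normal md tools work on them unchanged (the shelf/slot numbers and RAID level here are examples):

        modprobe aoe
        mdadm --create /dev/md0 --level=5 --raid-devices=3 \
              /dev/etherd/e0.0 /dev/etherd/e0.1 /dev/etherd/e0.2
        mkfs.ext3 /dev/md0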
      • It may be cool, but it is WAY too expensive. 4000 dollars for a 15-disk box without disks, come on!

        I am looking for an affordable storage box for my home network, but for this kind of money I expect SMB/NFS functionality, not a dumb ATA interface over ethernet.
        • It may be cool, but it is WAY too expensive. 4000 dollars for a 15-disk box without disks, come on!

          Have you priced rack-mountable boxes with space, airflow and power for that many drives? They cost close to that much even without the AoE adapters.

          I am looking for an affordable storage box for my home network, but for this kind of money I expect SMB/NFS functionality, not a dumb ATA interface over ethernet.

          Coraid's stuff is obviously not for home use. For home use, use an old PC filled with disks.

          • I do not need hot-swap capability. When I want to add or replace a drive I can just power down the unit.
            However, all solutions I have looked at (Coraid's included) have this useless (for me) feature.

            Currently I am looking at a 3E high 19" cabinet I have, to construct some disk mounting hardware (horizontal rails across top and bottom) and put a small board (ITX) in it. The thing can then run as a (Linux) server and export the disks as SMB or NFS instead of AoE, so they are directly accessible to my satel
        • Me too. I need ~5TB of storage for my film and music collection, but I can't find a good solution.

          Plain PC with Linux doesn't suit me because there are only 3 or 4 ATA controllers on a typical motherboard. Additional RAID controllers help but not much.

          An AoE solution allows you to attach literally dozens of cheap disks to a cheap gigabit switch.

          Prices on AoE should go down - the electronics in an AoE controller should cost no more than $20-$30.
    • While I'm not especially interested in network storage, and I know very little about SANs and AoE, I still thought I'd give my input.

      1) The "server", or drive array, handles the RAID, and all space carving (LVM, EVMS). AoE tools then export block devices.

      2) Yup, no argument there.

      3) VMs can boot from AoE, unless you use RedHat in which case it's not stable.

      4) Multipath ethernet (or bonding) can be done trivially at the kernel level on all connected devices. Both to double the throughput, or just increase th
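      The classic Linux bonding setup is only a few commands; a sketch assuming two NICs and the old ifenslave tool (interface names and mode are examples, and AoE itself needs no IP address on the bond):

        # create bond0 in active-backup mode and enslave both interfaces
        modprobe bonding mode=active-backup miimon=100
        ip link set bond0 up
        ifenslave bond0 eth0 eth1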
    • While I agree with you, the issue of multipath is moot. Any device with only one port has that port as a single point of failure. You solve that problem with two ports and redundant switches. It is no different for SANs, iSCSI or FC.

      A SAN's management capability is also its downfall: expensive, complicated and vendor-specific.
    • It's good that you mention multipath support. Has anyone seen any docs out there on how to do a simple multipath setup with both Linux targets and initiators? I tested iSCSI about 8 months ago and found zero docs on it.

      Thanks.

  • by err666 ( 91660 )
    Ok, so the Coraid people are selling their ATA over Ethernet 15 slot version for $3,995.00. That's apparently around EUR 3133. I can get something proven, iSCSI based, from Promise here in Germany for 4.499,- (a Promise M500i). Ok, that is almost 50 percent more expensive, but the iSCSI solution is supposed to work under all operating systems (Linux, *BSD, Windows, etc.) more or less out of the box, while for AoE you will have to buy drivers for Windows, and it generally has worse support for other operating sy
    • Hey, thanks for the info. Looks like Promise has the same enclosure with FC and plain SCSI ports. What are the advantages/disadvantages of iSCSI, FC, and plain SCSI? I am specifically wondering about M500i [newegg.com], M500f [newegg.com], and M500p [newegg.com]? Seems like they all have the same features, but plain SCSI is faster.
  • From reading the news article, it seems that they are selling $3,995 ATA -> ethernet converters (disks sold separately). Each box will hold 15 drives and offer a simple RAID controller inside. It still has the same performance issues as iSCSI (a bit lower overhead, but not by much).

    I don't see what the point is other than the fact that they are offering yet another transport protocol. Given that one can install iSCSI target software on linux/solaris/windows... what's the point? Anybody who read the article
    • Adding to that... I just don't see the point at all. I priced out a home-level file server and came out to $0.53 per gig, and that's including a backup drive to swap in should one of the drives in the RAID 5 array fail. The rest of the hardware was SATA2, HyperTransport system bus, dual core machine... I wouldn't expect it to have any problems at all maxing out 3 or 4 gigabit NICs. So how is $1/GB all that great?
  • by NekoXP ( 67564 ) on Monday July 31, 2006 @11:27AM (#15817470) Homepage
    So. Coraid has not, in a whole year, explained why iSCSI is somehow more expensive (disks + Linux kernel + network.. all the same) than their ATAoE implementation.

    They'll give excuses about the cost of iSCSI hardware offload.. but you don't need that. ATAoE is all software anyway; it's just a protocol over ethernet, rather than layered on top of TCP/IP.

    What is wrong with using TCP/IP - which is already standard and reliable? Nothing. We know TCP/IP provides certain things for us.. resilience (through retransmits), and routing, are a good couple, and what about QoS?

    ATAoE needs to be all on the same network, close together; they've reimplemented the resilience; and you can't use the built-in TCP checksum, segmentation and other offloads in major ethernet chipsets because AoE sits a layer too low for them.

    No point in it. Just trying to gain a niche. They could have implemented products around iSCSI, gotten the same performance with the same features, for the same price. Bunkum!
    • One of the problems with TCP is that it's very hard to make it go fast over 10G. All those issues become moot for AoE since it doesn't use TCP. Gigabit Ethernet isn't really interesting for attaching large numbers of disks.

      Trouble is that I'd assume all 10GigE NICs will come with offload engines anyway, so there's no savings.

      There is no functional problem with making the product non-routable. Servers need physical security and physical proximity. What you seem to think is a liability is not one.

      • Do you actually have an ATA RAID array that can perform 10 Gigabits/s full-duplex? I would love to see that, I really don't think those disks really exist though (maybe a couple or 10 10Krpm WD ones.. :)

        Right, so in the article this one guy says that "using the second network port and a dedicated switch adds more security". So despite being non-routable he gave it a dedicated network anyway. There's also a guy in the article talking about that he "believes" that iSCSI would have been harder to configure. I
    • There's nothing wrong with TCP/IP and iSCSI, that is until you try implementing it cheaply in hardware so that you can stick a little controller onto each of several dozen disks. That to me seems to be the point behind ATAoE - make it cheap and simple. And the reason to use a separate ethernet network just for ATAoE is because it's basically replacing the IDE/SATA/SCSI/FC connection. Don't think of it as a dedicated network, more of simply the cables that connect the disks to the host(s). They just happen t
  • I'd be far more interested in AoE and iSCSI if I could buy a few bare bridge boards to retrofit some RAID cages I have now.
  • by YesIAmAScript ( 886271 ) on Monday July 31, 2006 @12:13PM (#15817811)
    A wise man once told me there is a fine line between them.

    ATA is a crappy protocol, even when local. It's only good for squeezing that last $0.03 out of the controller cost. Once you are using ethernet cables ($1) and links and PHYs on each end ($4 each), it makes a lot more sense to put some brains back in. Use SCSI. Heck, even the optical drive in your computer uses ATAPI, which is SCSI commands in packetized ATA transfers.

    Also, I'm a bit nervous about the packet CRC validation being done in the ethernet controller/layer itself. The problem is that if an ethernet switch between you and the storage device stores packets and forwards them (as all smart switches do), it may also choose to regenerate the CRC on the way. If it corrupts the packet internally and generates a new, valid CRC for the new, corrupt packet, you have undetected corruption. I'd be a bit nervous about that for my hard drive.

    I do think using GigE is a smart way to attach hard drives to servers. I look at the back of an Apple XServe and see two GigE ports and a fibre channel card. Why can't one GigE port be used to attach to the network and one to the XServe RAID? Why do I need to get a multi hundred dollar card to attach to the XServe RAID when that GigE port is fast enough? It'd sure save a lot of cost, and hopefully reduce the price to the end user.

    Anyway. I'm pro GigE attachment, not sure I'm for this AoE.
    • ATA is a perfectly fine protocol for block storage and is much leaner than SCSI. The SCSI protocol was used for ATAPI because it already existed and was needed to support a wide variety of devices besides disks. It makes no sense to put your imaginary "brains" back in.

      I'm pretty confident that you can prevent unintended data corruption. TCP/IP manages it so there's your proof of concept :)

      GigE is not a good choice for disk attachment since it is easily outrun by a small number of disks. 10GigE is where it
  • If I'm reading the Wikipedia AoE article [wikipedia.org] right... AoE is an L2 protocol that cannot cross routers. That would immediately rule out the office I work in, in which floors and the data centre are on separate TCP/IP subnets. Small offices only, then?

    But, as noted above, if they are claiming that they avoid the cost of ToE NICs for iSCSI, that's a spurious claim, since they are an optional performance enhancer, not a requirement for iSCSI. I've seen surprisingly decent performance without them, with the HP EV

  • I cannot imagine buying the Coraid devices: as others have mentioned, the savings over iSCSI are too small and you risk single vendor lock-in. However, I am intrigued by the possibilities provided by vblade. As I understand it, this module allows you to change a dirt cheap Linux machine into an AoE controller for regular ATA/SATA disks. This would not replace FC based SANs for latency critical applications, but could apparently provide a very nice, low-cost backup device.

    Does anyone here have experienc

  • Ultimate Proof (Score:2, Offtopic)

    by eno2001 ( 527078 )
    This thread [slashdot.org] in the iSCSI Killer story is ultimate proof that teenagers in their parents' basements all around the world have taken over Slashdot. In the days of yore, there would have been a lot of loud rejoicing at ATA over Ethernet. Today, nothing but a bunch of lame jokes based on gaming by high school dropouts. Yes, the days of Slashdot have come and gone. There is no hope.
  • Seriously, I don't think it needs any help whatsoever.
  • by Tracy Reed ( 3563 ) <treed AT ultraviolet DOT org> on Monday July 31, 2006 @01:12PM (#15818308) Homepage
    AoE rocks. It is very easy to set up, way simpler than iSCSI or Fibre Channel or any other SAN technology I have used. And it enabled us to have many more options for high availability or clustered filesystems (which we are not yet using, but I have been following the progress of GFS and Lustre, leaning towards Lustre). We did not buy the Coraid stuff but instead used vblade on our own disk machines. A disk node in our cluster has four 300GB SATA disks which we run as RAID 5, 512MB of RAM, and the cheapest CPU Intel currently makes. We have dual core Opterons with 4GB of RAM each with no internal disk. They PXE boot and then mount root straight off the AoE. Then we run Xen on the Opteron boxes. This is the killer setup. We can migrate Xen domains, avoiding downtime for hardware maintenance, and if a machine dies we can instantly restart it on another machine because it all runs off the AoE SAN.

    So far I am very pleased. Just make sure you get hardware that can do jumbo frames as this will increase your performance by 50%.
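    Enabling jumbo frames is a one-liner per interface, provided the NIC and the switch both support them (the interface name and MTU are examples):

      # raise the MTU on the storage-facing interface
      ip link set eth1 mtu 9000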
  • Why can't you just use an NBD server and client and set up software RAID 1?
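    That does work; a rough sketch with nbd and md (the host name, port and devices are examples, using the old port-based nbd-server syntax):

      # on the remote box: export a disk over NBD
      nbd-server 2000 /dev/sdb

      # on the local box: attach it and mirror it against a local disk
      nbd-client storagebox 2000 /dev/nbd0
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/nbd0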
  • perhaps an interesting idea, but just because I can build a computer out of old, recycled clock parts doesn't mean it is going to become my server. Also, iSCSI adoption has increased something like 40% this year. Windows support for iSCSI will improve dramatically with the next revision, and iSCSI costs are only going to decrease.

    Also, consider management of one of these AoE boxes. What sort of tools are out there to simplify provisioning, deployment, snapshots and backup, etc. In order for this to go an
  • We bought Coraid devices, and AoE is much simpler (read: cheaper) than iSCSI. When using jumbo frame switches/cards, we were able to get transfer rates very near theoretical limits on gigabit links, something I have never seen on iSCSI or FC for that matter.

    The only thing that bothers me about AoE is that there is only a single vendor supporting it at the moment. Other than that, it is great stuff. While it is not routable in the sense IP is routable, you can do creative things with ethernet switch
  • If I could go to Best Buy and get a 5.25" enclosure with an ethernet port on the back from Linksys or Netgear or their ilk, pop in my favorite SATA or IDE disk, and stick it on a private gigabit LAN this would be fantastic.

    Right now the cost of entry discourages experimentation. Having to buy a $3,000+ chassis plus all the drives is going to require funding that I have to fight for. If I can implement a proof of concept for under $500, I don't even need my manager to sign off on the expense. I can just d
