
 




"iSCSI killer" Native in Linux 235

jar writes "First came Fibre Channel, then iSCSI. Now, for the increasingly popular idea of using a network to connect storage to servers, there's a third option called ATA over Ethernet (AoE). Upstart Linux developer and kernel contributor Coraid could use AoE to shake up networked storage with a significantly less expensive way to do storage -- under $1 per gigabyte. Linux Journal also has a full description of how AoE works." Note that the LJ article is from last year; the news story is more recent.
This discussion has been archived. No new comments can be posted.

"iSCSI killer" Native in Linux

Comments Filter:
  • Re:Will it catch on? (Score:1, Informative)

    by Anonymous Coward on Monday July 31, 2006 @10:52AM (#15817203)
    iSCSI is routable and secure if you use an encrypted tunnel (IPsec is native in most implementations), whereas AoE is local-network-only and non-routable.
  • Re:Will it catch on? (Score:4, Informative)

    by SpecTheIntro ( 951219 ) <spectheintro@@@gmail...com> on Monday July 31, 2006 @10:53AM (#15817222)
    For a start, it's a specification from Coraid, not an industry standard.

    I don't know that this is true, because the LinuxJournal article directly contradicts it. (Unless I'm misreading it.) Here's what the LJ says:

    ATA over Ethernet is a network protocol registered with the IEEE as Ethernet protocol 0x88a2.

    So, it looks like the protocol has been officially registered and was granted approval by the IEEE--so that makes it an industry standard. It may not be adopted yet, but it's certainly not something like 802.11 pre-n or anything; there's an official and approved protocol.

  • Re:Will it catch on? (Score:5, Informative)

    by hpa ( 7948 ) on Monday July 31, 2006 @11:00AM (#15817274) Homepage
    So, it looks like the protocol has been officially registered and was granted approval by the IEEE--so that makes it an industry standard. It may not be adopted yet, but it's certainly not something like 802.11 pre-n or anything; there's an official and approved protocol.

    Anyone can register a protocol number with IEEE by paying a $1000 fee. It doesn't mean it's a protocol endorsed by IEEE in any shape, way or form.
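For the curious, a registered EtherType just reserves a two-byte protocol identifier for use in Ethernet frames. A minimal sketch of the common AoE header that follows the Ethernet header, based on my reading of the published AoE spec (double-check the field layout against the spec itself before relying on it):

```python
import struct

# Build the common 10-byte AoE header that follows the Ethernet header.
# Field layout (per the public AoE spec, as I understand it):
#   ver/flags (1), error (1), major aka shelf (2), minor aka slot (1),
#   command (1), tag (4)
AOE_ETHERTYPE = 0x88A2   # the identifier registered with the IEEE

def aoe_header(shelf, slot, command, tag, version=1, flags=0):
    ver_flags = (version << 4) | flags
    return struct.pack(">BBHBBI", ver_flags, 0, shelf, slot, command, tag)

hdr = aoe_header(shelf=0, slot=0, command=0, tag=0xDEADBEEF)
print(hdr.hex())
```

The point being: the $1000 registration buys you that `0x88A2`, nothing more.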

  • by Unknown Relic ( 544714 ) on Monday July 31, 2006 @11:05AM (#15817309) Homepage
    Oops, only the Linux Journal article is down; the CNET article has answered my question: it isn't any cheaper than iSCSI + SATA solutions. $4,000 without any drives, compared to a starting price of $5,000 for a StoreVault (new from NetApp) with 1TB of storage. Other options, such as Adaptec's Snap Server, start just as cheap.
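To put those numbers in perspective, a quick back-of-envelope $/GB comparison (only the $4,000 chassis price and the $5,000/1TB StoreVault price come from the thread; the drive count and per-drive price are illustrative assumptions):

```python
# Rough $/GB comparison: bare AoE chassis plus commodity SATA drives
# versus a 1TB iSCSI appliance. Drive figures below are hypothetical.
CHASSIS_PRICE = 4000          # AoE enclosure, no drives (from the thread)
DRIVE_PRICE = 250             # assumed price of one 500GB SATA drive
DRIVE_CAPACITY_GB = 500
DRIVE_COUNT = 15              # assumed number of bays populated

aoe_total = CHASSIS_PRICE + DRIVE_COUNT * DRIVE_PRICE
aoe_capacity = DRIVE_COUNT * DRIVE_CAPACITY_GB
aoe_per_gb = aoe_total / aoe_capacity

storevault_per_gb = 5000 / 1000   # $5,000 for 1TB (from the thread)

print(f"AoE:        ${aoe_per_gb:.2f}/GB")
print(f"StoreVault: ${storevault_per_gb:.2f}/GB")
```

Under those assumptions the chassis cost amortizes down to roughly $1/GB once the bays are full, so the "under $1 per gigabyte" claim depends heavily on how many drives you populate.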
  • Re:Reliability (Score:4, Informative)

    by SpecTheIntro ( 951219 ) <spectheintro@@@gmail...com> on Monday July 31, 2006 @11:06AM (#15817316)
    People often forget there is a considerable difference in the reliability of ATA drives versus SCSI. If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.

    This is not necessarily true. [storagereview.com] It all depends on how your network storage is being used. SCSI drives are built, and their firmware tuned, for the sole purpose of running a server, and they consistently beat any ATA drive (be it IDE or Serial) when it comes to server performance and reliability. ATA drives just aren't built to handle the sort of usage a server requires--note that this isn't a reflection of quality, but of purpose. But a file server (which is the only thing the SAN would be used for) requires much less robust firmware than a server housing MySQL, PHP, maybe a CRM suite, an e-mail server, etc.--and so ATA drives shouldn't immediately be ruled out as less reliable. The maturity of the technology plays a more important role than the interface.

  • Re:Another "Killer" (Score:5, Informative)

    by wasabii ( 693236 ) on Monday July 31, 2006 @11:06AM (#15817319)
    AoE is a networked block device technology. NFS and Samba are network file systems. One is about making block-level access to a device available over the network; the other is about making file operations available.

    In the case of AoE, a single remote block device can be shared between multiple systems. Each client can issue its own reads and writes. In combination with a distributed file system, each node could mount the same FS.

    It's the same as NBD, iSCSI, Shared SCSI, and Fibre Channel.
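The distinction is easy to see in code: a block device just answers reads and writes at byte offsets, and everything about files and directories lives in the filesystem layered on top. A minimal sketch, using an ordinary file to stand in for a block device:

```python
import os
import tempfile

# Access a small "disk" the way a block protocol like AoE does:
# by sector offset, with no notion of files or directories.
SECTOR = 512
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 8 * SECTOR)               # an 8-sector disk

os.pwrite(fd, b"A" * SECTOR, 3 * SECTOR)   # write sector 3
data = os.pread(fd, SECTOR, 3 * SECTOR)    # read sector 3 back

os.close(fd)
os.remove(path)
```

An AoE or iSCSI target does essentially this on behalf of remote clients; NFS or Samba, by contrast, would be answering "open this path, read this file" requests instead.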
  • Re:Cheaper? (Score:3, Informative)

    by hpa ( 7948 ) on Monday July 31, 2006 @11:06AM (#15817320) Homepage
    The main advantage of AoE is that it's simple enough that you could build it in hardwired silicon if you wanted to, or run it on a microcontroller far smaller than what you'd need for a full-blown TCP stack (this is what Coraid does, I believe).


    The main disadvantage with AoE is that it's hideously sensitive to network latency, due to the limited payload size.
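That sensitivity is easy to quantify: a stop-and-wait block protocol delivers one request's payload per round trip, and with a standard 1,500-byte MTU an AoE request carries only two 512-byte sectors. A back-of-envelope sketch (the payload sizes follow from the Ethernet MTUs; the RTT figures are illustrative):

```python
# Per-outstanding-request throughput of a stop-and-wait block protocol:
# one request's payload delivered per network round trip.
SECTOR = 512
std_payload = 2 * SECTOR      # 1,024 bytes fit in a 1,500-byte frame
jumbo_payload = 17 * SECTOR   # 8,704 bytes fit in a 9,000-byte jumbo frame

def throughput_mb_s(payload_bytes, rtt_seconds):
    return payload_bytes / rtt_seconds / 1e6

for rtt_us in (100, 500, 1000):   # illustrative round-trip times
    rtt = rtt_us / 1e6
    print(f"RTT {rtt_us:4d}us: "
          f"std {throughput_mb_s(std_payload, rtt):7.2f} MB/s, "
          f"jumbo {throughput_mb_s(jumbo_payload, rtt):7.2f} MB/s")
```

Even at a 100-microsecond RTT, a single outstanding standard-frame request tops out around 10 MB/s, which is why jumbo frames and multiple in-flight requests matter so much for AoE.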

  • by jimicus ( 737525 ) on Monday July 31, 2006 @11:13AM (#15817366)
    Maybe cheapie little IDE hard disks are under $1/GB. If you want hot-swap, availability of half-decent RAID cards and disks which actually get to see some testing before they leave the factory, then you'll have to spend quite a bit more.
  • Re:Cheaper? (Score:2, Informative)

    by NSIM ( 953498 ) on Monday July 31, 2006 @11:24AM (#15817443)
    You are quite correct: there is no requirement for SCSI drives in an iSCSI implementation. iSCSI refers to the protocol, not the drive interface -- i.e., it's the SCSI command protocol implemented over TCP/IP. So yes, you can build an iSCSI system out of commodity parts, and many people are doing so. If you want to get an idea of the options out there for doing this, take a look at: http://www.byteandswitch.com/document.asp?doc_id=96342&WT.svl=spipemag2_1 [byteandswitch.com]
  • by NekoXP ( 67564 ) on Monday July 31, 2006 @11:27AM (#15817470) Homepage
    So. Coraid has not, in a whole year, explained why iSCSI is somehow more expensive (disks + Linux kernel + network.. all the same) than their ATAoE implementation.

    They'll give excuses about the cost of iSCSI hardware offload, but you don't need that. ATAoE is all software anyway; it's just a protocol over Ethernet rather than layered on top of TCP/IP.

    What is wrong with using TCP/IP -- which is already standard and reliable? Nothing. We know TCP/IP provides certain things for us: resilience (through retransmits) and routing are a good couple, and what about QoS?

    ATAoE needs everything on the same network, close together; they've reimplemented the resilience themselves; and you can't use the common TCP checksum, segmentation and other offloads built into major Ethernet chipsets, because ATAoE sits a layer too low for them.

    No point in it. Just trying to gain a niche. They could have implemented products around iSCSI, gotten the same performance with the same features, for the same price. Bunkum!
  • Re:Reliability (Score:2, Informative)

    by Ahtha ( 798891 ) on Monday July 31, 2006 @11:33AM (#15817509)
    I agree there are reliability problems with ATA. We expect ATA disk failures within the first year for all of our ATA RAID systems and have yet to be disappointed. ATA drives just don't seem to be able to handle the pounding they get in a RAID configuration. We still use them, however, mirroring the ATA RAID with another server/disk installation as a backup. Of course, that doubles the cost of the ATA solution, but, it's still cheaper than a SCSI solution.
  • Re:Bootable? (Score:2, Informative)

    by KingMotley ( 944240 ) on Monday July 31, 2006 @12:11PM (#15817786) Journal
    Yes, you can. Just look for an iSCSI PCIe card. It's basically an Ethernet card that presents itself to the system as both a standard Ethernet card and a disk controller (most appear as SCSI controllers; there's no reason they couldn't present an ATA controller instead, but you'd lose a lot of features).
  • Re:Cheaper? (Score:3, Informative)

    by tbuskey ( 135499 ) on Monday July 31, 2006 @01:12PM (#15818302) Journal
    I hacked together an iSCSI setup from some old hardware.

    2 P II 400MHz systems running FC4
    One system had software raid 0 on 2 IDE drives.
    The target has a spare 10GB IDE drive.

    Added 2 10/100T cards with a crossover cable.

    Did a quick dd if=/dev/zero count=some large number of=the raid mirror or iSCSI target.

    The iSCSI target was 30% slower.
    Way cool.
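The same quick-and-dirty throughput check can be scripted. This sketch times a sequential zero-fill write to a temp file, roughly the equivalent of the dd one-liner above (block size and count are arbitrary choices):

```python
import os
import tempfile
import time

# Sequential-write microbenchmark, roughly equivalent to
#   dd if=/dev/zero of=<target> bs=1M count=64
BLOCK = 1024 * 1024
COUNT = 64
zeros = b"\x00" * BLOCK

fd, path = tempfile.mkstemp()
start = time.perf_counter()
for _ in range(COUNT):
    os.write(fd, zeros)
os.fsync(fd)                      # make sure the data actually hit the disk
elapsed = time.perf_counter() - start
os.close(fd)

written_mb = BLOCK * COUNT / (1024 * 1024)
print(f"wrote {written_mb:.0f} MB in {elapsed:.2f}s "
      f"({written_mb / elapsed:.1f} MB/s)")
os.remove(path)
```

Point it at a file on the iSCSI or AoE mount versus the local RAID and you get the same kind of rough comparison the parent describes; sequential zero-fill is a crude workload, but it's enough to expose a 30% gap.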
  • AoE rocks. It is very easy to set up, way simpler than iSCSI or Fibre Channel or any other SAN technology I have used. And it enabled us to have many more options for high availability or clustered filesystems (which we are not yet using, but I have been following the progress of GFS and Lustre, leaning towards Lustre). We did not buy the Coraid stuff but instead used vblade on our own disk machines. A disk node in our cluster has 4 300GB SATA disks which we RAID 5, 512MB RAM, and the cheapest CPU Intel currently makes. We have dual-core Opterons with 4GB of RAM each with no internal disk. They PXE boot and then mount root straight off the AoE. Then we run Xen on the Opteron boxes. This is the killer setup. We can migrate Xen domains, avoiding downtime for hardware maintenance, and if a machine dies we can instantly restart it on another machine because it all runs off the AoE SAN.

    So far I am very pleased. Just make sure you get hardware that can do jumbo frames as this will increase your performance by 50%.
  • by MagicMerlin ( 576324 ) on Monday July 31, 2006 @01:35PM (#15818503)
    we bought coraid devices, and they are great. AoE is much simpler (read: cheaper) than iSCSI. when using jumbo-frame switches/cards, we were able to get transfer rates very near the theoretical limits on gigabit links, something I have never seen on iSCSI, or FC for that matter.

    the only thing that bothers me about AoE is that there is only a single vendor supporting it at the moment. other than that, it is great stuff. while it is not routable in the sense IP is routable, you can do creative things with ethernet switches and VLANs, basically giving SAN-like functionality at a fraction of the cost. no longer do you have to keep dual FC/Cat6 infrastructure in your server farm.

    it's cheap, and if/when it supports bonded links, it'll beat FC in performance (comparing 2Gbit FC vs. bonded gigabit ethernet).

    merlin
  • Re:Will it catch on? (Score:3, Informative)

    by Harik ( 4023 ) <Harik@chaos.ao.net> on Monday July 31, 2006 @02:06PM (#15818830)
    The non-routability is a killer. Protocol-level bridging, no off-site redundancy, strict dependencies on port location. No thanks; it's a toy protocol that may get some use in the home NAS market, but it was hell to implement a reliable setup in our lab under controlled conditions. I'd hate to have to deploy it 'for real'.

    The only way to really do it is to purchase a dedicated Block Controller (spare ethernet card) and a dedicated Block Data Cable (Cat 5) and hook it up to a dedicated Block Device Multiplexer (switch). If you want a replacement for FibreChannel and are willing to live with the limits of direct local physical connections, it's useful.

    Just have fun getting those frames into a xen/vmware virtual host from an external machine...
  • I think you are probably looking at the cost to buy Coraid's gear. You do not have to buy their stuff, although I am sure that they prefer that you do. I built my own AoE SAN using regular PCs. Way cheaper. I take the Google approach: use a large amount of commodity hardware and design the system intelligently to achieve the same performance and reliability at a better price. Coraid's hardware is basically just a Linux box with disks exporting AoE volumes. The nice thing about buying it is that you get their support, but AoE is so simple that you generally don't need support beyond perhaps the mailing list.
  • Re:Reliability (Score:3, Informative)

    by afidel ( 530433 ) on Monday July 31, 2006 @07:03PM (#15821349)
    the odds of 3 drives failing at once are astronomical.

    No, they aren't. Just have an array running for a year or two and then bring it down for maintenance; your chances of multiple drive failures are VERY good. Of course that happens even with SCSI drives, but it underscores all the more the need for a premium part. BTW, I just lived through a scare this weekend. We lost one drive after powering up one of our main DB servers, then lost a second about 10 minutes later. Luckily the 16-drive array was set up as RAID 6 instead of RAID 5 -- the first good decision we have found from the previous staff =)
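Whether those odds are really "astronomical" is a quick calculation. Assuming independent failures and a constant annualized failure rate (both the AFR and the rebuild window below are illustrative assumptions, not figures from the thread):

```python
# Probability that at least one surviving drive fails during the rebuild
# window after a first failure, assuming independent failures with a
# constant annualized failure rate (AFR). Both inputs are illustrative.
AFR = 0.05            # assumed 5% annualized failure rate per drive
REBUILD_HOURS = 24    # assumed rebuild window for a large array
HOURS_PER_YEAR = 8760

drives_left = 15      # a 16-drive array after the first failure
p_one = AFR * REBUILD_HOURS / HOURS_PER_YEAR      # per-drive, per-window
p_second = 1 - (1 - p_one) ** drives_left

print(f"chance of a second failure during rebuild: {p_second:.2%}")
```

The independent-failure estimate comes out small but nowhere near astronomical, and in practice correlated failures (same manufacturing batch, shared power and vibration, the stress of the rebuild itself) push the real-world number well above it, which is exactly the scenario described above.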
  • Re:Another "Killer" (Score:2, Informative)

    by die444die ( 766464 ) on Tuesday August 01, 2006 @12:46AM (#15822948)
    My point was that something being open source does not really help it in the end. In fact, this seems to rarely boost public appreciation of any product.
