Hardware

IP Over SCSI?

morzel asks: "One of the advantages of SCSI-based systems is that a plethora of devices can exist on the same high-bandwidth bus, including multiple host adapters - at least, that's the theory. While it seems pretty obvious to me to use this as a low-latency/high-bandwidth interconnect between a small number of hosts, I've never seen an actual implementation of such a system. Do such systems, preferably IP-based, actually exist? I'm not in need of a Beowulf-style cluster just yet (I don't have an application for one), but I am interested in the possible use of SCSI as a _fast_ interconnect for small numbers of load-balancing machines in a cluster. A combination with the Linux Virtual Server Project could create a killer solution... Right? Thanks for all input/comments on this!" (Read on...)

"I would think these kinds of interconnects would be ideal for small clusters, or larger clusters where groups of eight nodes could be interconnected with each other, with one node acting as the master node. This would probably provide more bandwidth and less latency than ethernet-based solutions, and on the other hand could be a lot cheaper than special hardware."

  • Dayna made a couple, one called the Pocket SCSI/Link (I have one in front of me). At least one other Mac-ethernet company made one too. Asante?

    Here's how it works (just the basics): you plug the box into your Mac's SCSI port (desktop or PowerBook) and into the ADB port with a pass-through cable (it draws its power from ADB). The other side has a standard RJ45 jack. You load its ethernet driver (an extension) and set your TCP/IP settings just as you would with any ethernet adapter.

    Intel acquired Dayna and provides a little information about those products.
    http://support.intel.com/support/dayna/scsitrbl.htm
  • You could always hack something using multiple SCSI adapters on each node. The bandwidth would probably be more than the standard PCI bus can handle.

    Open/close on files usually has significant latency, but maybe that's not due to the SCSI interface. I guess that's why he asked about IP over SCSI...

    SCSI adapters aren't that cheap, but 100 Mbps switches aren't either. Are there any gigabit NICs available yet?

  • Latency.

    In a clustered environment, latency is one of the most important factors in getting the performance cranked up, especially when under load.
    Bandwidth is not really that important, but getting the message across the wire ASAP is something else. That's why Myrinet solutions are being used - not just bandwidth, but ultra-low latency.


    Okay... I'll do the stupid things first, then you shy people follow.

  • by jxxx ( 88447 )
    DEC/Compaq has been using SCSI for their VMS clusters for a while. I think they might have also started using it for their UNIX clusters. For some reason, however, they only use it for the data portion of the communication. The control portion has to go over another medium.

    An issue to consider is host adapter ID. Most of the SCSI hosts I come across don't make it obvious how one would change the ID. If you have 8 hosts all responding to SCSI ID 7, you are going to have problems (the sketch below illustrates why the IDs have to be unique).
    Another issue is termination. Unless you use a hub (yes, SCSI hubs do exist), you have to set up the network as a bus, with only the end hosts terminated.
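
    To make the ID-collision point concrete, here is a minimal user-space sketch of one hypothetical addressing scheme (invented for this example, not taken from any real driver): if an IP-over-SCSI driver derived each host's address from its adapter ID, two hosts left at the factory-default ID 7 would collide at the IP layer just as they do on the bus.

        /* Hypothetical scheme: derive a private IP address from the host
         * adapter's SCSI ID.  All names here are illustrative only. */
        #include <stdio.h>

        static void print_addr_for_scsi_id(int scsi_id)
        {
            /* One address per bus ID: 0-7 on narrow SCSI, 0-15 on wide. */
            printf("host at SCSI ID %d -> 10.0.0.%d\n", scsi_id, scsi_id);
        }

        int main(void)
        {
            /* Two hosts both left at the default ID 7 map to the same
             * address: a conflict on the bus and at the IP layer alike. */
            int ids[] = { 7, 7, 6 };
            for (int i = 0; i < 3; i++)
                print_addr_for_scsi_id(ids[i]);
            return 0;
        }
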
  • Another reason is that sometimes you have no option. I had the opportunity to buy a couple of SPARC 2s. I needed them to be dual-homed, but I could not find a card to provide the second NIC (PS: I was looking at what was available there, not what could be bought third-party, as the budget for this project is very low).

    I looked for information on how to do this without luck... just a couple of Deja entries of "I did it, why can't you?"
    Oh, well...
  • I have never heard of IP over SCSI for Macs, but what I have seen and done is connect a dead PowerBook (an old 520c) to my identical PowerBook with a SCSI cable, and it acted like a starter for the other one. It simply mounted the dead PowerBook's hard drive onto mine and I could repair it. The dead PowerBook showed a graphic on the screen with the transfers and such. The interesting part is that this is built into the firmware. I had to use two SCSI adapters, because PowerBooks have this special adapter; one was set to be the host and the other was set to normal. The dead one was the host.
  • I realize this is somewhat OT, but it would be an interesting parallel. FireWire should hit 800 Mbit/s by Q1 2001 and 1600 Mbit/s about a year later. This would be a much more scalable and flexible solution compared to SCSI. You don't want to set up your whole office like this, but for a small cluster setup this would be cool.
  • Aside from the bandwidth increase, SCSI has a much lower latency than ethernet. In a heavily loaded environment, low latency offers better performance than high bandwidth.
  • 100 Mbps (megabits) switched ethernet = 12.5 MB per second transfers.

    160 MBps (megabytes) Ultra160 SCSI = 160 MB per second transfers.

    Theoretical, of course; your mileage may vary. (The arithmetic is worked below.)

    Of course, your 100 Mbit ethernet can go 100 m; I believe SCSI is limited to about 12 m without repeaters.
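
    For the skeptical, here is the arithmetic above worked out, assuming the quoted theoretical peak rates:

        /* Theoretical peak rates from the comment above: bits vs. bytes. */
        #include <stdio.h>

        int main(void)
        {
            double fast_ethernet = 100.0 / 8.0; /* 100 Mbps / 8 bits per byte = 12.5 MB/s */
            double ultra160      = 160.0;       /* Ultra160 SCSI is rated directly in MB/s */

            printf("Fast Ethernet: %5.1f MB/s\n", fast_ethernet);
            printf("Ultra160 SCSI: %5.1f MB/s (%.1fx faster, in theory)\n",
                   ultra160, ultra160 / fast_ethernet);
            return 0;
        }
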
  • This mailing list summary message has links to various resources. In short, yes, it has been done.

    http://ume.med.ucalgary.ca/usenet/Solaris/0336.html
  • I can't see any reason not to use 100 Mb ethernet adapters in a switched environment.
  • I'm a little fuzzy on how this works: if one node (terminology?) was talking to another at 160 MB/s, would the whole bus be used? If so, with 8 devices, performance wouldn't differ greatly from switched ethernet... Also, could something like Fibre Channel be employed? Expensive, I guess :)
  • I remember a friend using IP over SCSI on a Mac to network them, so it should be possible. I don't know if it's possible on Linux. Rikard

    -----

  • Check out the following: IP Encapsulation in SCSI Driver [wwa.com]. It's dated Feb 1997, but they had a working Linux implementation (a conceptual sketch of such framing follows below).

    --Bob
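
    For readers curious what "IP encapsulation in a SCSI driver" might look like, here is a purely illustrative user-space sketch. The header layout, magic value, and function name are invented for this example and are not taken from the wwa.com driver:

        /* Illustrative only: wrap an IP datagram in a small header so the
         * receiving host can tell network traffic from ordinary SCSI data. */
        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        #define IPSCSI_MAGIC 0x49505343u  /* "IPSC", made up for this sketch */
        #define IPSCSI_MTU   1500u        /* ethernet-like MTU, an assumption */

        struct ipscsi_frame {
            uint32_t magic;               /* marks the buffer as a packet */
            uint32_t length;              /* length of the datagram below */
            uint8_t  payload[IPSCSI_MTU];
        };

        /* Returns the number of bytes to hand to the SCSI transport,
         * or 0 if the datagram would need fragmenting first. */
        size_t ipscsi_encapsulate(struct ipscsi_frame *f,
                                  const uint8_t *ip_packet, uint32_t len)
        {
            if (len > IPSCSI_MTU)
                return 0;
            f->magic  = IPSCSI_MAGIC;
            f->length = len;
            memcpy(f->payload, ip_packet, len);
            return offsetof(struct ipscsi_frame, payload) + len;
        }
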

  • I remember reading the Mach source code a few years ago, and they had a layer called "SCSINet" or something like it. It was a way to do IP over SCSI (which, because of its greater bandwidth, lower latency, and priority levels, etc., could lead to neat hacks like distributed shared memory).

    So, yes, it's been done. It's even been done in the open.

    It's hacks like this, and the ability to have multiple ethernet interfaces (think: switched private Gb ethernet) that make me wonder just why in the hell people buy proprietary cluster solutions (DEC's memory channel - 40MB/sec.) when open standards are quite possibly better, and certainly less expensive.

    Makes me wanna puke.

    --Corey
  • Additionally, you can get patches for Linux 2.2.9 and 2.2.14 here: http://www-internal.alphanet.ch/archives/local/alphanet/linux/drivers/scsi/IP-over-SCSI/ [alphanet.ch]
    Hmm, it is worth mentioning that they are currently running positive discrimination: if you are using most common Web browsers under Windows (with the notable exception of Lynx), you will be refused access to the site.
    --
  • A quick look through the Beowulf project's network drivers page yields this link...
    http://www-internal.alphanet.ch/archives/local/alphanet/linux/drivers/scsi/IP-over-SCSI/

    It would seem to me that someone already has this working in an experimental stage.

    -CC
  • Yeah, Asante.. (I too have one in front of me ;)
    The Asante model uses a 9V power cube, and will tear up a 3C509 any day of the week.. They also made an Ethernet-SCSI bridge, allowing you to use standalone SCSI devices over an ethernet connection to another bridge. Drive sharing in HW..

    I only wish I had Linux/x86 drivers...
  • With the Dayna, I believe you can use one of their devices on a shared chain, assign two IP addresses to the device, and have ethernet over SCSI between two Macs (the device just 'relabels'). Saw it done by a junior-grade hack at Chrysler, who simply didn't know it wasn't an intended use.

    So yes, and we were off-topic.
  • Are there any gigabit NICs available yet?

    Yeah, but they're expensive as hell - Multiwave has them for $300. And a gigabit switch is going to be a pretty hefty price too. :(

    100 Mbit switches aren't bad. Netgear makes a nice 8-port 10/100 switch (called the FS108, IIRC) for about $90. I'm planning on getting one sometime this summer, actually: I don't see much need for anything too much faster (though IP over SCSI would be a pretty cool hack).
  • One big problem with PVM over ethernet is that the majority of (at least naively coded) PVM programs want to pass relatively small messages (not nearly filling an ethernet frame) -- you know -- pass 10 ints here, 20 floats there, and ultimately the latency of TCP->IP->Ethernet gets you, even over 100 Mbit. IP over UltraSCSI on a dedicated bus would most likely reduce the latency introduced by ethernet, but wouldn't help the latency tax induced by the IP + TCP processing. (A sketch of the small-versus-batched message pattern follows this comment.)

    The coolness factor would be high, as would keeping from having a secondary Eth-switch to host the message-passing traffic to keep it off of the primary gen-purpose network.

    TCP/IP as well as Ethernet are general-purpose networking solutions, real workhorses. However, high-performance cluster-based parallel programming is not a general-purpose use -- it benefits the most from a communications path that is optimized for a constant stream of high-volume, relatively small messages from any one node to any other. Sort of a networking nightmare, eh? Sort of like how a usenet feed is a general filesystem's worst nightmare -- it uses the underlying mechanism (transport mechanism in PVM's case: TCP/IP/Ethernet; filestore mechanism in usenet's case: ufs, ext2) in a manner that goes against the grain of the optimization assumptions made by the underlying layers.

    I maintain our department's 8-node Sparc PVM cluster. We use hand-me-down machines that get displaced from other upgrades. We use it for teaching parallel programming, so performance isn't a great concern for the future of humanity, but when the students write code that doesn't use the message passing medium effectively (currently a dedicated 100mbit switch), then they get a bit discouraged when their code runs better on a single machine as opposed to the cluster. Oh well -- part of the learning process!
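
    A minimal sketch of the pattern described above, assuming a PVM 3 environment (worker_tid, the tag 42, and both function names are hypothetical): each pvm_send pays the full per-message latency, so packing a batch into one buffer amortizes it.

        #include <pvm3.h>

        /* Naive: one send per value -- n messages, n times the latency. */
        void send_naive(int worker_tid, int *data, int n)
        {
            for (int i = 0; i < n; i++) {
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&data[i], 1, 1);
                pvm_send(worker_tid, 42);
            }
        }

        /* Batched: pack everything once, send once -- one latency hit. */
        void send_batched(int worker_tid, int *data, int n)
        {
            pvm_initsend(PvmDataDefault);
            pvm_pkint(data, n, 1);
            pvm_send(worker_tid, 42);
        }
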

  • Additionally, you can get patches for Linux 2.2.9 and 2.2.14 here: http://www-internal.alphanet.ch/archives/local/alphanet/linux/drivers/scsi/IP-over-SCSI/ [alphanet.ch]
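
    As a rough idea of what such a patch plugs into: an IP-over-SCSI driver ultimately just registers a network interface and hands outgoing packets to the SCSI layer. The skeleton below is a sketch using today's kernel API rather than the 2.2-era one those patches target, and it elides all of the actual SCSI plumbing:

        /* Skeleton only: a network device whose transmit hook would, in a
         * real driver, wrap the skb in a SCSI command for the host adapter. */
        #include <linux/module.h>
        #include <linux/netdevice.h>
        #include <linux/etherdevice.h>

        static netdev_tx_t ipscsi_xmit(struct sk_buff *skb, struct net_device *dev)
        {
            /* Real driver: encapsulate skb->data and queue it to the HBA here. */
            dev_kfree_skb(skb);
            return NETDEV_TX_OK;
        }

        static const struct net_device_ops ipscsi_ops = {
            .ndo_start_xmit = ipscsi_xmit,
        };

        static struct net_device *ipscsi_dev;

        static int __init ipscsi_init(void)
        {
            int err;

            ipscsi_dev = alloc_etherdev(0);     /* ethernet-like framing */
            if (!ipscsi_dev)
                return -ENOMEM;
            ipscsi_dev->netdev_ops = &ipscsi_ops;
            err = register_netdev(ipscsi_dev);  /* shows up as eth%d */
            if (err)
                free_netdev(ipscsi_dev);
            return err;
        }

        static void __exit ipscsi_exit(void)
        {
            unregister_netdev(ipscsi_dev);
            free_netdev(ipscsi_dev);
        }

        module_init(ipscsi_init);
        module_exit(ipscsi_exit);
        MODULE_LICENSE("GPL");
        MODULE_DESCRIPTION("Illustrative IP-over-SCSI interface skeleton");
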
