The1Genius's Journal: SAN-based Disk Sharing Whitepaper

The modern technological enterprise is typically a blend of platforms and computer systems of various capabilities with a wide range of specific functions. New servers are added routinely to departmental, back-office, and enterprise-wide locations. This internal conglomeration of systems demands the sharing of data between server platforms, and at speeds never before possible.

The day of a single site being controlled by a single platform architecture has long since gone. It is now routine to have various UNIX platforms mixed into the same network as several Microsoft Windows flavours, and usually some sort of legacy mainframe system, along with an expectation that all these platforms, and others, can seamlessly share each other's data. Once upon a time, services such as FTP and occasional e-mail messages were considered "normal" interconnection technologies. Today, the expectation is for direct file system-level interaction that completely masks which server is providing the data, and even which type of server. Windows workstations should be able to connect to UNIX server shares as easily as Linux clients connect to NT servers. The back-end iron of a mainframe application server should allow a Windows-based Enterprise Information Portal to cull its data.

Applications have matured to embrace such cross-platform capabilities. For example, a Photoshop document stored in a shared repository on a UNIX system can now be opened and modified on a Windows NT system, even though the two machine architectures may have opposite byte orders. Text documents, pictures, movies, audio files, and many other media formats have matured to this level of heterogeneity. The challenge now is to provide such transparent sharing services at the high speeds users demand.
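
To make the endianness point concrete, here is a minimal Python sketch (an illustration added for this discussion, not part of the original whitepaper): a file format that fixes its on-disk byte order decodes identically on either architecture, while one that relies on the host's native order does not.

    import struct

    value = 0x00C0FFEE

    # A portable format declares its byte order explicitly; here, big-endian.
    on_disk = struct.pack(">I", value)

    # Decoding with the declared order works on any host, because the format
    # string, not the CPU, defines the layout.
    assert struct.unpack(">I", on_disk)[0] == value

    # Misreading the same bytes in the other order scrambles the value, which
    # is what happens when a format leans on native byte order instead.
    print(hex(struct.unpack("<I", on_disk)[0]))  # 0xeeffc000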

Deficiency of LAN Solutions

Perhaps the most popular approach to cross-platform, server-to-server sharing is to introduce gigabit-class networking between the servers and run a protocol-rich layer on top. Many players in the industry are adopting gigabit Ethernet running TCP/IP between systems. It offers all the value of Ethernet/IP communication: a long application history, cross-platform support, transparent mounts (via FTP, SMB/CIFS/Samba, or NFS), and a wide range of tools. There is, however, a very fundamental problem with TCP/IP: performance. While the technology is sufficient for today's 10BaseT and 100BaseT applications, the protocol-processing overhead makes realizing speeds much above 100BaseT nearly impossible. Most gigabit installations, especially on a Windows NT platform, run at a small fraction of their theoretical bandwidth. Worse still, when high throughput is achieved, the protocol processing swamps the processors at both endpoints, leaving little headroom for the servers' remaining tasks. An application transferring 1MB of data over IP incurs roughly 30% to 50% overhead for all of the protocol headers, acknowledgements, and data segmentation required to move the data from machine to machine with integrity and validity. This turns 1MB of disk-based data into 1.3MB to 1.5MB of network-borne data before it is distilled back to 1MB on the other server's local disk.
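
As a rough, back-of-the-envelope sketch of the segmentation described above (the figures here are standard Ethernet and TCP/IPv4 header sizes, not numbers taken from the whitepaper), a short Python calculation shows how a 1MB application write is carved into MTU-sized segments. The header bytes are only one component of the overhead cited; acknowledgements, retransmissions, and in-kernel buffer copies account for the rest of the protocol-processing cost.

    import math

    PAYLOAD = 1 * 1024 * 1024            # 1MB written by the application
    MTU = 1500                           # standard Ethernet MTU
    TCP_IP_HEADERS = 20 + 20             # TCP + IPv4 headers, no options
    ETHERNET_FRAMING = 14 + 4 + 8 + 12   # header, FCS, preamble, inter-frame gap

    mss = MTU - TCP_IP_HEADERS           # 1460 payload bytes per segment
    segments = math.ceil(PAYLOAD / mss)
    framing = segments * (TCP_IP_HEADERS + ETHERNET_FRAMING)

    print(f"{segments} segments carrying {framing} bytes of framing "
          f"({framing / PAYLOAD:.1%} on top of the raw payload)")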

It is often assumed that each server must talk to the other servers, especially if the goal is to retrieve data owned by another server. If the primary goal is simply to share data between heterogeneous servers, however, a simpler approach is to wire each server directly to the storage elements.

The SAN Revolution

A storage area network (SAN) typically conjures up a picture of several RAID or other storage systems knitted together with a new wiring technology. While this certainly is a SAN, this type of network becomes far more valuable once more computers are attached to it. The common view of attaching multiple computers to a SAN, unfortunately, is for the purpose of partitioning, or amortizing central storage costs over several servers, with each logical partition dedicated to a particular server. SANs also reduce overall storage costs by not requiring a hot-spare drive for every RAID subsystem installed in the enterprise. With a large number of servers splitting SAN-based disk, a small pool of hot-spare drives can service the entire SAN in case of a drive failure.
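
The hot-spare saving is simple arithmetic. The sketch below uses made-up counts (a dozen RAID subsystems and a three-drive shared pool are assumptions for illustration, not figures from the whitepaper) to show the reduction.

    RAID_SUBSYSTEMS = 12        # RAID subsystems consolidated behind the SAN
    SPARES_PER_SUBSYSTEM = 1    # dedicated hot spare each, the pre-SAN model
    SHARED_POOL = 3             # pooled spares any subsystem can claim on failure

    dedicated = RAID_SUBSYSTEMS * SPARES_PER_SUBSYSTEM
    print(f"dedicated spares: {dedicated}, pooled spares: {SHARED_POOL}, "
          f"drives saved: {dedicated - SHARED_POOL}")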

More and more, storage-purchase decisions are being made separately from server-purchase decisions. This is largely because of the enormous storage requirements and expected growth per server. While it may have been acceptable in the past to have 2 to 4 gigabytes of non-RAIDed server storage, servers now routinely control 50 to 100 gigabytes, and IT managers demand more reliability for their data. RAID systems can be expensive; thus, it is a welcome efficiency to have multiple servers amortize the cost of a higher-end RAID. This is "partitioning", and it is perhaps the first exposure to a SAN in a typical server room.

Partitioning is indeed useful, but what if the servers could actually reach out and address any part of the storage pool, regardless of which server is its primary owner or what type of platform is employed?

SAN-based Data Sharing

There is a strong desire to "share" the storage data among the connected servers. This would allow servers to scale to unprecedented levels and would offload the pedestrian duties of backup and off-site mirroring from the primary LAN. Servers aside, workstations could also be connected directly to the SAN fabric and enjoy LAN-like sharing at SAN speeds.

Physically wiring a SAN with multiple computers and a pool of storage technically allows the computers to touch any and all data. However, the hard mounting of volumes by multiple servers leads to instant trouble. Computer operating systems were never designed to allow the direct, unmanaged mounting of a volume by multiple servers, because the security and data-access management of that disk's data is held by the operating system. Allocation of new storage extents, cache coherency among systems, and a range of other issues immediately arise. Two operating systems trying to manage the security of a single piece of disk becomes a nightmare, because the two machines, their OSs, and their applications are completely unaware of each other and of each other's data needs or actions. A single bit out of place on a storage system can result in a data-mart database blending into the office NFL pool stats spreadsheet. Essentially, it will never work.
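
The failure mode is easy to see in a toy model. The Python sketch below (purely illustrative; the byte-sized "bitmap block" stands in for real on-disk metadata) shows two operating systems caching the same allocation bitmap from a shared volume, updating their private copies, and writing back. With no coordination, the last writer silently discards the other's allocation.

    disk_block = bytearray(8)        # stand-in for one on-disk bitmap block

    cache_a = bytearray(disk_block)  # server A reads the block into its cache
    cache_b = bytearray(disk_block)  # server B independently reads the same block

    cache_a[0] |= 0b01               # A marks extent 0 as allocated in its cache
    cache_b[0] |= 0b10               # B marks extent 1 as allocated in its cache

    disk_block[:] = cache_a          # A flushes its cached block to disk
    disk_block[:] = cache_b          # B flushes later and overwrites A's update

    print(bin(disk_block[0]))        # 0b10: extent 0 now looks free again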

However, the benefits of directly connecting servers to common blocks of storage are too enormous to overlook. Each server would literally have full-bandwidth access to any storage element at hundreds of megabits per second, and would not encumber any other server to deliver that data, unlike the traditional protocol-based serving of today. In the rapidly growing world of enterprise data, such value cannot be ignored.

It should be noted that NAS systems and the appliances being produced for this parallel storage market have already solved the problem of many servers sharing single disk partitions. A number of solutions provide seamless access to multi-platform hosts because they remove the traditional server entirely, providing disk with an embedded management system that allows many platforms to mount via FTP, CIFS, SMB, NFS, or all of the above simultaneously.

There have been numerous approaches and attempts to solve the "operating system challenge" introduced by SANs. Some early approaches were similar to partitioning but allowed multiple machines to mount some volumes in a read-only fashion, severely limiting the overall usability of the SAN. More typically, software vendors would attempt to create a new file system to handle the complexities of multiple machines.

While some were moderately successful, they were always homogeneous in nature, not cross-platform. Attempts at true cross-platform global file systems have been lackluster and have left managers leery about stability. A new file-system type is nothing to be taken lightly in the Information Age; it takes roughly 10 years for a file system to build the track record needed to gain acceptance. Add to that the challenges of connecting various types of operating systems and platforms, and of attaining data integrity.

To date, there are no successful cross-platform global file-system solutions in the SAN marketplace. All attempts have performed poorly and are extremely risky with regard to data stability.

The Future Will Be Sharing

SAN technology is valuable on numerous fronts. The initial desire to exploit SANs for centralized administration, amortization of external storage, and offloading of backup and mirroring from LANs is well recognized. SAN-aware equipment is being sold in ever-increasing numbers as a follow-on to existing storage interface technologies such as SCSI. Back-office and server rooms are now routinely equipped with SAN-ready servers and storage. There is a natural desire to wire together multiple servers and storage elements. The real value, however, is in the ability to share data between elements at will, at high speed, and without compromise.

Tivoli, Veritas, EMC, and others are all committed to providing SAN solutions that allow seamless data sharing among the systems connected to the SAN fabric. These solutions require a specific piece of proprietary software to be loaded on every system participating in the data sharing. Here are a couple of examples of existing solutions:

Tivoli's SANergy is software (now in version 2.2) that runs on the computers connected to a SAN and:
        greatly simplifies SAN-storage administration through the power of sharing
        extends industry-standard networking to utilize the Gigabit bandwidth of the SAN
        broadens increased-data-availability protection to include any application
        can reduce the total amount of storage required in a SAN
Tivoli SANergy eliminates the one-to-one relationship between the number of SAN-connected computers and the number of disk volumes needed by those computers. It transparently enables multiple computers to share single disk volumes on the SAN storage. In fact, it allows many combinations of computers running Windows NT, Windows 2000, MacOS, IRIX, Solaris, AIX, Tru64, Red Hat Linux, and DG/UX to share the exact same disk volumes at the same time, across platforms. And if the applications running on those computers are capable of it, Tivoli SANergy even enables transparent sharing of the exact same files at the same time, across platforms.

VERITAS SANPoint Foundation Suite(TM) and VERITAS SANPoint Foundation Suite HA(TM) provide simple and reliable data sharing in an enterprise-level SAN environment. Multiple servers have access to SAN-based data through a single, consistent data image. Sophisticated failover and clustering logic allows load balancing between servers and reduces recovery time to provide higher availability for mission-critical applications.
        Simultaneous access to storage from multiple servers enhances performance by fully utilizing enterprise RAID subsystems
        Consistent data and application images across multiple servers increase scalability and ease-of-use
        Platform-independent hardware support provides seamless management and scalability
        SANPoint Foundation Suite HA(TM) extends availability with automated application and storage failover via VERITAS Cluster Server(TM)
