The Pros and Cons of Mainframe Linux

magellan writes "There is a good article on LinuxWorld.com that goes over some of the pros and cons of Linux on the mainframe. The author, Paul Murphy, is an old mainframer and current UNIX user, as well as a frequent contributor to LinuxWorld.com, so he has some good insights."
  • by tmcmsail ( 302707 ) on Friday May 10, 2002 @01:59PM (#3498059)
    Linux, the utility OS, runs anywhere. I have it on Intel & Alpha at home. What hardware do you want to make sing today? :-)
    • Isn't that the old processor that HP used to make?
  • I work in a large datacenter with some very powerful machines, and I just don't see Linux having much of a future on mainframes, at least not without some serious kernel improvements. It is an excellent OS, and would be a good choice for a workstation or a low-end server, but would be a very poor choice for a high-end mainframe machine. The Linux kernel is highly configurable, and it would certainly be possible to get a Linux kernel running on a massively parallel machine, but this was not what Linux was designed for, and performance would not be on par with other more robust Unices. Linux' inferior TCP/IP stack, as well as its inferior handling of multi-threading on a large scale, are its major weaknesses in this area. Until these weaknesses are addressed, I would prefer Solaris, Irix, or HP/UX instead, as they were designed from the ground up with mainframe usage in mind.
    • by Anonymous Coward
      I couldn't agree with you more. My company gave the Linux-on-a-mainframe idea a trial run, and I'm sad to say, it just couldn't keep up with the load we were running through it. Both the VM and the scheduler were serious limitations compared to the Unices we have in use.
    • Your sig:

      -atrowe: Card-carrying Mensa member. I have no toleranse for stupidity.

      "tolerance"? . . . oh, the irony.

      • And I just thought he was an idiot for stating that Solaris, Irix, and HP/UX "were designed from the ground up with mainframe usage in mind".

        UNIX was designed to get away from the mainframe usage paradigm, not reinforce it. Read the article. It yields good insight into the differences between the Mainframe Way (machine resources are more important than user demands) and the UNIX Way (users' needs are more important than the machine's).

        • Mainframes - designed for the benefit of the machine.

          PCs - designed for the benefit of the user.

          Unix - designed for the benefit of hunt-and-peck administrators and obscure language designers.

          • Unix - designed for the benefit of hunt-and-peck administrators and obscure language designers.

            You're right. C is so very obscure. Hardly any software in use today is written in C.

        • Early mainframes were expensive. CPU time was metered and charged by the second. Only large corporations or governments could afford them. Second-generation mainframes topped out at essentially 64K bytes (the 7074 had 10,000 10-digit decimal words of storage).
          Solaris, Irix, and HP/UX were designed as big UNIX (which is something rather different from a small mainframe). Each was probably based on Berkeley UNIX, and each tried to distinguish itself as something special. The big UNIXes did encroach on the mainframe's turf, often doing more, better, and cheaper.
      • He's being sarcastic.
      • Pity the fool who moderated this up. The sig, while not funny, is clearly meant to be a joke. There are thousands of pathetic sigs like it throughout slashtopia. I think it's all based on the Far Side comic captioned "Midvale School for the Gifted", where the door says push and the mental giant is pulling for all he's worth and can't get in.


    • Um, Solaris, Irix, and HP/UX (shudder) are *NOT* mainframe operating systems.

      MVS and OS/390 are mainframe operating systems.

      You are talking out of your nether regions, especially when you call Linux's TCP/IP stack inferior to HP/UX's. I have adminned every operating system mentioned above except Irix, and you, sir, are grossly incorrect.

    • Of course, the use for Linux on a mainframe _is_ as a low- to medium-end server, running virtually atop VM. I have not seen anybody seriously advocate running Linux as the base OS on such a machine.

      /Janne
    • Linux will get there very soon. With the new test kernel, they have added the preemption patch, which improves the OS's ability to process transactions. Preemptibility will also improve as SMP scalability improves, since preemption is tied to the SMP locks: as the SMP locks get more and more fine-grained, the preemption points become more and more fine-grained. You now have the potential for massive scalability.
    • "Linux' inferior TCP/IP stack"

      Zero copy networking not good enough for you? You want networking that somehow zips packets along without even touching them except by telepathy?

      Back under your bridge, troll.
    • by mrm677 ( 456727 ) on Friday May 10, 2002 @03:42PM (#3498762)
      You are incorrectly aggregating mainframes, shared-memory multiprocessors (SMP/NUMAs), and clusters as massively parallel machines.

      Linux makes a great operating system for certain classes of massively parallel machines: clusters. It is low-cost and has a decent MPI (message-passing interface) implementation. It also runs on commodity hardware. Don't be surprised if you see the next ASCI supercomputer using Linux as the OS for each node.

      You are correct in that Linux is not a good operating system for larger shared-memory multiprocessors. It lacks the fine-grained locking necessary to run the same kernel instance across dozens of processors.

      I can't comment on mainframes because I am unaware of their architecture. I do know that high-end UNIX servers and mainframes are different beasts: the former focus on performance, while the latter prize uptime above all. I also believe that IBM, the king of mainframes, has not used UNIX as its traditional operating system. Thus you are comparing apples to oranges. Linux makes a perfectly decent "mainframe OS" if you are partitioning the machine into multiple virtual machines.

      Also please elaborate on "Linux' inferior TCP/IP stack". And "inferior handling of multi-threading on a large scale". Are Solaris light-weight processes any better?
    • Gee! Someone used Linux and inferior in the same sentence! Don't you know you will be banned from /. for eternity?
    • ...
      I just don't see Linux having much of a future on mainframes, at least not without some serious kernel improvements.
      Perhaps you don't see the idea. Linux is DEFINITELY NOT an operating system for the whole mainframe (that's VM's job), but merely an operating system for the application alone.
  • It's still Linux, whether you run it on a Celeron or a mainframe.

    You've got the right hardware, but the software still isn't 'mainframe' level.
  • by swagr ( 244747 ) on Friday May 10, 2002 @02:06PM (#3498109) Homepage
    I was about to spend 5 million dollars on a new zSeries setup, but after reading the article I thought "maybe my laptop is good enough for now".

  • (see this report of a test on a 733-MHz Linux system for details on mstone) run on the mainframe.

    Yes, it's a Word document. You'd think anyone writing an article for a Linux-targeted site would at least check, or convert the document to something everyone can read...
  • Besides pricing, the higher-end the server, the more I lean toward Solaris.

    I know the Linux-biased folks are going to mod me down for this... but I am speaking from experience.
  • Report roasts Linux (Score:4, Interesting)

    by Smackmaster ( 574743 ) on Friday May 10, 2002 @02:21PM (#3498239) Homepage
    Funnily enough, I just came across this article on ZDNet [zdnet.com] that talks about how Linux isn't a very good long-term server solution. It's here at http://zdnet.com.com/2100-1104-909084.html [com.com]
    • Your message is a bit misleading. The full title of the linked article is "Report roasts Linux on mainframes". It doesn't say that Linux isn't a very good long-term server solution; it says that Linux on the -mainframe- may not be a good long-term solution.

    • by Simon Brooke ( 45012 ) <stillyet@googlemail.com> on Friday May 10, 2002 @05:40PM (#3499509) Homepage Journal
      Funnily enough I just came across this article on ZDNet [zdnet.com] that talks about how Linux isn't a very good long term server solution

      Yes, but note firstly that this article is making two different points, and secondly that at least one of them is clearly wrong and deliberately misleading.

      First the article claims that Linux on mainframes isn't price efficient compared to Linux on Intel, and that Intel boxes are emerging which have similar reliability to mainframes.

      Possibly true; I don't know enough about mainframes to know, although I'm certainly not aware of these high-reliability Intel boxes.

      Second, the article launches an ill-informed FUD assault on Linux, saying

      • 'Linux vendors for requiring users to constantly update their software to fix errors'
      • 'current Linux incarnations are relatively immature, as evidenced by the interminable list of errors/patches on Linux providers' Web sites'
      • 'Linux isn't capable of running more complex, critical applications, such as e-mail notification systems'
      Are any of those things true? What does that say about the rest of the article?
      • Are any of those things true? What does that say about the rest of the article?

        I've noticed that whenever Meta Group reports on Linux, they always denigrate it. There have been articles on ZDNET and similar places where positive things have been said by Gartner, IDC, etc., but then at the end there are some words of doom from Meta Group: "it may not be ready", "there might be problems", "you can't yet run Linux on 1000-processor machines...".

        For example, look at this article about Linux in investment banks [vnunet.com]. Positive news all the way through until:

        But Meta Group programme director Ashim Pal says the cost of the platform is not the only consideration. 'The operating system is a relatively small part of the total cost of ownership. Purely focusing on the cost of the platform is deluded,' he said.

        If you go to their website and look for recent documents featuring Linux in the title, you will find:

        - Linux on the Mainframe: Nice Place to Visit, But...
        - No Advantage From Linux PDAs
        - Choose Palm or Pocket PC - Linux Only for Custom Apps
        - Linux PDAs Offer Alternative for Low-End and Specialized Markets
        - Companies Should Consider Limited Server-Based Linux Implementations
        - Microsoft Criticizes Linux as Operating System Issues Move to Web Services Level
        - ... Linux Management: More Hype than Substance
        - Linux Dreams of Management Promotion
        - Linux: Application Server Tiers or Tears?

        I guess you can make your own minds up. BTW, Meta Group have been having a few problems themselves recently [internet.com].

    • Save your time. According to this article, Linux on the mainframe will fail because:


      "The company says that, at least for the moment, Linux isn't capable of running more complex, critical applications, such as e-mail notification systems."

      Intel-based servers are emerging with mainframe-like capabilities


      Geezz...

  • by Anonymous Coward on Friday May 10, 2002 @02:22PM (#3498248)
    This advice has never failed me over the years.
    I hope you find it helpful.

    1. Compare price/performance characteristics of the competing hardware prospects.

    2. Ask others who have dealt with the vendor(s) in question for their opinion.

    3. If 1 and 2 do not break the tie, ask yourself "What would Chewbacca do?"

  • by jdbo ( 35629 ) on Friday May 10, 2002 @02:24PM (#3498260)
    Heck, those zSeries suckers are as big as refrigerators - I bet I could put my desktop and my laptop Linux machines on a mainframe, and it'd still be totally stable.

    Sure, I'd have to get a new chair to reach all the way up there...
    • by Anonymous Coward
      > Heck, those zSeries suckers are as big as
      > refrigerators - I bet I could put my desktop
      > and my laptop Linux machines on a mainframe,
      > and it'd still be totally stable.
      >
      > Sure, I'd have to get a new chair to reach
      > all the way up there...

      Reminds me of a story from "The Tao of Programming":

      The Magician of the Ivory Tower brought his latest invention for the Master
      Programmer to examine. The Magician wheeled a large black box into the
      Master's office while the Master waited in silence.

      "This is an integrated, distributed, general-purpose workstation," began the
      Magician, "ergonomically designed with a proprietary operating system, sixth
      generation languages, and multiple state of the art user interfaces. It
      took my assistants several hundred man years to construct. Is it not
      amazing?"

      The Master Programmer raised his eyebrows slightly. "It is indeed amazing,"
      he said.

      "Corporate Headquarters has commanded," continued the Magician, "that
      everyone use this workstation as a platform for new programs. Do you agree
      to this?"

      "Certainly," replied the Master. "I will have it transported to the Data
      Center immediately!" And the Magician returned to his tower, well pleased.

      Several days later, a novice wandered into the office of the Master
      Programmer and said, "I cannot find the listing for my new program. Do you
      know where it might be?"

      "Yes," replied the Master, "the listings are stacked on the platform in the
      Data Center."

  • 99% of people don't understand my IBM Linux shirt.

    "Peace, Love, Penguin? What does that mean...?"
  • Pro: It's cheaper

    Con: It's cheaper
    • Nay. The real Pro is that you get to brag to your friends, "You think that cash register running Linux was neat? I got Linux running on an IBM zSeries mainframe at work!"

      It's all about the bragging rights.

      Same as how no CEO has ever bragged about hiring the cheapest consultant on the block. Perceived value, rather than actual value determines salability. There's a place for a product like this in the marketplace just as there's a place in the marketplace for Subarus. It's fun to be on the cutting edge.

  • This review is not really about Linux on Mainframes in general, but instead about IBM's attempt to use the zSeries to run lots of virtual machines on one big machine. To quote:

    It is possible to run Linux as a single operating system controlling the entire z800 processor.
    <snip... />
    We mostly ignore this approach here both because it isn't different from running Linux on any other four-way box and because the major benefits IBM advertises for mainframe Linux generally derive from VM's ability to switch between multiple Linux ghosts on the same machine.

    So although his review is very negative, it is actually being negative about this ghosting idea, which I too do not yet understand the benifits of, except in the special case where you are a server host with a lot of customers with very small load/resource requirements (i.e., customers who don't require a whole 1U 1GHz x86-based rack unit with 128MB of RAM, or whatever the current bottom-of-the-range rack systems are these days, to themselves, but do want the entire OS to themselves).

    Furthermore, as he puts it at the end of the article:

    At list price, you could create a rack of 80 Dell 8450 servers

    • which I too do not yet understand the benifits[sic] of

      While I personally would never spend $4 million on any single machine (something about putting all your eggs in one basket), no matter the service contract or guarantee, clustering cheaper, more easily replaceable machinery seems to me a better solution in the long run. YMMV.

      I'd love to have virtual machines on an Intel box, though. Imagine being able to create a stock install of your favorite distro whenever you wanted a new server for e-mail, web, etc., when you don't have an unused machine and don't wish to risk crashing your e-mail server to set up the web server. If virtual machine software were GPL, or even cheap commercial software, I would jump on it in a heartbeat.

      • Take a look at the enterprise offerings from VMWare. They do exactly this. At around $3500/license, it's probably going to cost a bit more than your server, but the math might work out for you to cost less than a bunch of individual machines (because you can have multiple partitions running on the same piece of hardware).
  • by Anonymous Coward
    Dean Kent of realworldtech made some interesting comments [realworldtech.com] on this article.
  • I read through the article and found it a little hard to find any PRO there.

    Does Linux really suck, or does he just love his UNIX too much?

    Well, I don't care which it is. Linux is fine on my notebook, and I'm not going to buy those big machines any time soon, but it would be nice if the article's title reflected its contents.
  • I can't believe their draconian licensing terms. They sell you 40 CPUs, but you can't use them all until you pay huge extra fees. You have to keep paying on a subscription model or they drop support. And now, with Congress in their pockets, it would violate the DMCA to hax0r your mainframe to use all the CPUs in the machine in your own computer room. What lengths won't corporations go to to trample our rights??

    Then there's the expens.... hmm, what do you know... it says here they run Linux on these boxes. Never mind. This goes to show that Linux is ready for anything; it's not just for cheapskates anymore. This setup rox!

    • I read somewhere (slashbot, I think) something about a hack you could download off the IRC. You put it on a tftp server and you get access to all 40 CPUs. The only downside is it only lets you run Windows after that, and it runs as if it were a 386.
    • And now, with congress in their pockets, it would violate the DMCA to hax0r your mainframe to use all the CPUs in the machine in your own computer room
      The DMCA has to do with copyrighted works. Use of more CPUs than you paid for is not a copyright violation. But, even if it were, it still wouldn't be illegal under the DMCA. What would be illegal is if you did it and offered a tool or a means to others outside your company to do the same thing.
  • OLTP for Linux (Score:5, Informative)

    by Animats ( 122034 ) on Friday May 10, 2002 @02:34PM (#3498320) Homepage
    The classic "on-line transaction processing" (OLTP) for which IBM mainframes are optimized is badly explained. The easiest way for UNIX people to understand it is to imagine an OS whose main job is to run CGI programs.

    Much server work looks like this: request comes in from network, appropriate program gets loaded, maybe talks to a database, runs for less than a second, returns results to network, exits. Typically, a large fraction of the resources used go into the "appropriate program gets loaded" step, doing the same startup ritual over and over.

    There are two UNIX/Linux solutions to that startup overhead problem. One is to build the transaction program into the network application (as in Apache/mod_perl/php). Note that this uses an interpreter to protect the network application from bugs in transaction programs, which is a major performance hit.

    The other approach is to use the regular UNIX/Linux program launch facilities to run a separate program for each transaction (as in CGI programs.) This is safer and easier to maintain, because the CGI programs each run in their own processes, but the cost of program loading (which might include initializing a Perl or Java environment) often dwarfs the cost of doing the useful work.

    A mainframe transaction processor basically maintains process images which are ready to run a transaction, with all loading complete. When a process is needed to run a transaction, it's made by copying one of those process images (with read-only or copy-on-write sharing of pages) and launching it to do the job. The new process runs for a short period and exits. This is a facility that Linux/UNIX lack, because they were intended for interactive use, not server-side transactions.

    Because Linux has copy-on-write semantics for fork(2), it should be possible to do a high-performance transaction facility under Linux. A transaction program initializes itself by loading everything it needs, but without any per-transaction data available. It then goes into a loop waiting for work, and on each request, forks off a copy of itself to do the job. Each copy does one transaction and exits. If it crashes or gets corrupted, only one transaction is affected. Note that there are no expensive exec(2) calls involved in starting a new transaction.

    Has this been done? It's obvious enough that somebody has probably tried it.
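    The scheme described above can be sketched in a few lines of Python (the names and the toy "transaction" are hypothetical; this is a sketch of the idea, not anyone's production design): initialization is paid once, then each request is served by a fork(2)ed copy-on-write child that does one transaction and exits, with no exec(2) anywhere.

```python
import os
import socket

# Expensive one-time initialization happens here, before any request;
# forked children inherit it via copy-on-write. (This dict stands in
# for a large preloaded program image.)
PRELOADED = {"greeting": b"hello"}

def handle(conn):
    # Runs in a forked child: one transaction, then exit.
    data = conn.recv(1024)
    conn.sendall(PRELOADED["greeting"] + b" " + data)
    conn.close()
    os._exit(0)          # the child never returns to the accept loop

def serve_one(listener):
    conn, _ = listener.accept()
    pid = os.fork()
    if pid == 0:         # child: no exec(2), just a copy-on-write fork(2)
        listener.close()
        handle(conn)
    conn.close()         # parent: drop its copy of the connection
    os.waitpid(pid, 0)   # and reap the finished transaction
```

    A crash in `handle` takes out only that one child, which is exactly the isolation property claimed for the mainframe approach.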

    • Fork & copy-on-write is still quite expensive in terms of resources; if you're going for high performance, you really want a non-forking model.

      The model you describe is basically that used by NCSA httpd at the time Apache split off. Apache was redesigned to do its forks basically at startup time, and then load-balance between looping httpds. This gave much higher performance.

    • Re:OLTP for Linux (Score:1, Informative)

      by Anonymous Coward
      Does undumping a core file count? Emacs is the most obvious example.

      Emacs is really a Lisp interpreter with a substantial runtime. If you had to wait for emacs to bootstrap from the core Lisp runtime every time you called exec("emacs"), you could wait a long time. So when installing emacs, you build the core emacs (as temacs); it runs and dumps core once it has loaded everything you want in the basic install. Then you run undump, and get an executable.

      With the disk image cached, startup is basically just creating a process context, mapping the pages, and bringing up X (skip this part for transaction processing ;-).

      Obviously this is at the application level, and constrains application startup (no handles to external resources), but could probably be hacked into the kernel without too much trouble.

      Michael
    • A mainframe transaction processor basically maintains process images which are ready to run a transaction, with all loading complete. When a process is needed to run a transaction, it's made by copying one of those process images (with read-only or copy-on-write sharing of pages) and launching it to do the job. The new process runs for a short period and exits. This is a facility that Linux/UNIX lack, because they were intended for interactive use, not server-side transactions.

      Maybe I'm an idiot and am completely misunderstanding you, or maybe your analogy is breaking down here, but it seems to me that there is a Linux/Unix analog. For this kind of situation, you can do as many Unix servers do, and create a pool of processes with one controlling/listening process that uses IPC to hand off requests to them. This can drop the startup costs significantly, even if the child processes get swapped out.

      A prime example of this kind of usage can be found in Apache. And it's not overwhelmingly difficult to write a high-performance server, combining such a multithreaded controller with pool of child "transaction" processes. Heck, I've done it myself.
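      A rough Python sketch of that pattern (all names hypothetical, and the "transaction" is just upper-casing): a controlling process pre-forks a pool of workers, each holding a private socketpair back to the master, and hands requests off over those channels. The per-request cost is a couple of IPC round trips rather than a process launch.

```python
import os
import socket

def start_worker():
    # Pre-fork one worker with a private channel back to the master.
    # Its startup cost is paid once, here, not once per request.
    master_end, worker_end = socket.socketpair()
    pid = os.fork()
    if pid == 0:                             # worker process
        master_end.close()
        while True:
            req = worker_end.recv(1024)      # wait for handed-off work
            if not req:                      # channel closed: retire
                os._exit(0)
            worker_end.sendall(req.upper())  # the toy "transaction"
    worker_end.close()
    return pid, master_end                   # master keeps this end

def dispatch(pool, i, req):
    # The controlling process only relays bytes round-robin; the
    # workers, not the master, do the per-request work.
    _, sock = pool[i % len(pool)]
    sock.sendall(req)
    return sock.recv(1024)
```

      Real servers also have the master replace workers that die, which is the part of the job this sketch leaves out.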

      • A difficulty in discussing this is that there's a breakdown of terminology. Unix invented the idea of separate "fork" and "exec" primitives. All other OSes at that time only had a primitive which was very similar to the run-time "system" call.

        As I recall from the brief time that I did CICS programming, it was very similar to CGI programming, except that you weren't in a distinct process. Your program ran as something similar to a thread in a non-preemptive environment. Transactions were invoked and ran uninterrupted until they needed to perform I/O, say to the database. At that point, the thread was reassigned to another transaction's code. (I wanted to say that the thread was killed, but that's not accurate because it implies start-up/shut-down costs that just weren't there.) When the results of the database call were available, the transaction would be resurrected to process the data and transmit the results to the end user. The whole process was designed to maximize throughput, but it required a coding style that was similar to writing CGI for Win16. ;-)

    • It sounds like you are describing CICS. But the time cost for loading programs is actually very low. CICS caches transactions. Programs that get used often remain loaded and are only unloaded when they are not used or the request queue rolls them off due to high varied transaction activity. It is really quite efficient.
    • Yes, someone has tried this. It's called a fork-per-request server. However, preforking servers (the style used by Apache 1.x) tend to perform much better. You should check out Unix Network Programming [amazon.com] by W. Richard Stevens. It has an excellent breakdown of the various server concurrency strategies available under Unix operating systems.
      --JRZ
      • Oh, hey, I forgot an important point about why fork-per-request isn't as reliable as the mainframe method. Mainframes handle requests separately to guarantee consistency and reliability by giving each request the same execution environment. However, in the fork-per-request model, there's still a central process that touches each request and is vulnerable to change. Imagine, for a trivial example, a case where the central process keeps a counter of requests handled so far. Clearly, this changes with each request and it's vulnerable to standard bugs (say, overflow errors). This sounds stupid, but it's a real problem, especially as you handle large numbers of complex and varied requests, since the amount of work done by the central process quickly becomes nontrivial.

        On the mainframe, this "central process" would be replaced by something like CICS, which has a ridiculously long track record of testing in hundreds of mission-critical environments around the world. It also includes a lot of highly-reliable features to automate common OLTP tasks, to eliminate yet another source of bugs.

        Basically, what I'm saying is: mainframe software infrastructure is really, really reliable. Way better than Linux. If you need an OLTP system that runs on a mainframe and doesn't crash, use OS/390, not Linux.
        • Just a nitpick (since I agree with your general point).
          However, in the fork-per-request model, there's still a central process that touches each request and is vulnerable to change.


          In pre-forking, process A sets up its environment, which includes the service port and a shared memory segment (often just a simple memory-mapped file). Then it creates a bunch of worker forks. At this point, the master could simply do a ps once every second to see if its children have died, but usually the shared memory is used to communicate activity to the master.

          Since all the forks retain the file handles, any one of them can accept a job request. Thus they can be truly independent, and your issue of a corruptible monitor doesn't apply. (Except in the case that crashed workers need to be replaced by the monitor; but I feel that you were saying something different.)

          It's true that sharing information between the workers and the monitor has a potential for corruption, but I'm not seeing how mainframes avoid this (except if you're saying that the functionality normally handled by programmed IPC is relegated to other tasks).
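          A minimal Python sketch of the pre-forking arrangement described above (names are made up): the listening socket is created first, the workers are forked afterward and each calls accept() itself, so no per-request data ever passes through a central process.

```python
import os
import socket

def prefork(listener, n):
    # Fork n workers *after* the listening socket exists. Each worker
    # inherits the descriptor and accepts connections on its own; the
    # kernel, not a monitor process, decides who gets each request.
    pids = []
    for _ in range(n):
        pid = os.fork()
        if pid == 0:                              # worker: accept loop
            while True:
                conn, _ = listener.accept()
                conn.sendall(b"ok:" + conn.recv(1024))
                conn.close()
        pids.append(pid)
    return pids
```

          The only shared mutable state here is the kernel's accept queue, which is precisely the "corruptible monitor" work that this design pushes out of user space.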
        • mainframe software infrastructure is really, really reliable.
          That's the intention and the attempt. That's what you're paying for. YMMV.
          Things may have changed since, but CICS used to run all of its "joblets" (whatever they were called) in a single process, which could be taken out by a single bad module (like any FORTRAN program).
      • Mainframe OLTP systems can be thought of as operating systems that are optimized for fork-on-request servers. It's worth thinking about what would be needed in the kernel to make that type of server work faster. It's the cleanest server model, and offers good opportunities for higher security. Most transactions ought to run in a jail; all they can do is talk to whatever object they had open at startup. Then you don't have to worry about buffer overflows in CGI programs.

        The mainframe people have had this for decades, and it does have major security advantages.

    • Re:OLTP for Linux (Score:1, Interesting)

      by Anonymous Coward
      I think the best that's been done is called the 'tacky' bit (chmod +t) on an executable. Traditionally, that left the executable loaded "in swap", presumably for faster startup.

      Implementations vary, and some are literal in that they were little more than a 'cp' from the filesystem onto the swap file and not much else. Some support shared libs, some don't. With big cheap memory, hitting a cached filesystem was far faster than even a linear read from the swapfile. Some were so bad, the +t bit would generally slow performance.

      Virtual memory systems, such as DEC's VMS, viewed all executable code files as if they were little swapfiles and would always share image pages on a copy-on-write basis. In effect, the OS opened the image file once for all users on the system. You could also tell the OS to pre-load it by fixing it in your choice of either virtual or physical memory.

      I believe Linux has a partial implementation. It does share resident executable pages across active processes. It might also do something creative with the +t bit, but I have no idea of the implementation. I don't think it has a "complete" implementation, in that you cannot pre-load images or fix an image into physical memory, and I don't think it "learns" anything from the first activation experience to speed later ones.

      Reading machine-code blocks from disk is only part of the 'image activation' problem. Page tables have to be grown, zeroed memory has to be allocated, shared resources/libs have to be mapped, etc. In some cases, if you can pre-load, the system can learn by pre-building any number of data structures that can speed multiple re-activations.

      Then again, the fastest way to design a maximal performance facility is to activate and initialize both the image, and program logic itself, only once. You can then use IPC type tools to exchange requests with it.
      • "I think the best that's been done is called the 'tacky' bit (chmod +t) on an executable. Traditionally, that left the executable loaded 'in swap', presumably for faster startup."

        To get this effect cheaply and repeatably in Linux, copy the relevant binaries under a tmpfs mounted directory.
    • Re:OLTP for Linux (Score:2, Interesting)

      by skidrash ( 238760 )
      You're looking for 'pinning' or 'the sticky bit', the ability to tell the OS,

      "load this executable image in memory and keep it there, next time someone asks for this program use the loaded image, DO NOT LOAD A NEW IMAGE from disk"

      This was a manual optimization for early UNIXes. I don't think any modern UNIX uses the sticky bit in this way because that's an optimization that naturally falls out of the page aging algorithms.


    • There are two UNIX/Linux solutions to that startup overhead problem. One is to build the transaction program into the network application (as in Apache/mod_perl/php). Note that this uses an interpreter to protect the network application from bugs in transaction programs, which is a major performance hit.


      I'm sure others will point this out too, but this just isn't true of mod_perl, at least (I can't speak for PHP; I've never used it). Check out the benchmarks in Stas Bekman's mod_perl User Guide [apache.org]. Skinny: application code under mod_perl is compiled once, the first time it's accessed, and thereafter runs at (in effect) the speed of any Apache - modulo database-, network-, or bad-code-related bottlenecks. I can't imagine any way of having a network-accessible program be more scalable, short of starting from scratch (no generic httpd) in C. You notice all those "http://www.foo.com/dynamic/foo.c?arg=val" type URLs? No, neither did I...
      • There are two UNIX/Linux solutions to that startup overhead problem. One is to build the transaction program into the network application (as in Apache/mod_perl/php). Note that this uses an interpreter to protect the network application from bugs in transaction programs, which is a major performance hit.

        I'm sure others will point this out too, but this just isn't true of mod_perl[...]

        What part isn't true? Perl compiles to an intermediate language, which is then interpreted. Yes, in mod_perl the application code is compiled the first time it's needed, but it isn't compiled into machine code; it's just turned into B-code, something that's more efficient to interpret than the original source. This is more efficient than CGI, where the compilation gets done over and over, but the B-code still gets interpreted with each request.
  • Sun FUD Campaign (Score:5, Interesting)

    by myst564 ( 196476 ) on Friday May 10, 2002 @02:38PM (#3498345)
    First of all, way to go Slashdot, this article has been out for quite some time. It's received a lot of attention on the Linux 390 mailing list as a Sun FUD campaign as it places a fully loaded z900 on par with 80 Dell servers and the zSeries in general on par with mid-range Sun equipment (and others).

    First, I'm fairly qualified to talk about what the zServer can do. For those of you who don't remember, I'm one of the guys that helped win a z800 for Clarkson University [clarkson.edu] that will be used in our Open Source Institute [clarkson.edu]. I'm also the technical lead for COSI (whatever that means ;) so it is my job to know about what a z800 can do for us.

    Some history: Clarkson University has always had a very good relationship with IBM: they employ a large number of Clarkson students and graduates (including me, in the Extreme Blue [ibm.com] program). So if you think that biases my opinion, well, too bad, as I've talked with the guys making sure that Linux runs and is fully integrated on the zSeries platform, and with one of the original Linux S390 authors, Boas Betzler.

    All of these people have real experience with what the zSeries can do, as do I, since I've seen it in action. A zServer is unique in the sense that you can (with the right model) run Linux S390, VM, zOS and other guest operating systems in Logical Partitions. These all act independently of one another, just as if they were separate machines on a network. This is great if you have DB2 with maybe a web frontend, because both of those machines can talk at memory speed via HiperSockets and the only outside link is the network connection to the web server, which is at Gigabit speed (did I mention that you can do full-speed gigabit with one of these things on multiple interfaces?).

    This article basically says that you can take a midrange Sun server and do everything that a z800 can do but much better. I don't know of any Sun Server that can run N Linux clients in a VM at full speed.

    They aren't the solution to every problem, but a zServer certainly is a better solution than what is presented in the article. I really don't have the time to go into detail with everything, as it's a lengthy article, but suffice it to say that this is nowhere near 100% accurate.

    • "A zServer is unique in the sense that you can (with the right model) run Linux S390, VM, zOS and other guest operating systems in Logical Partitions."

      It's not totally unique. There are other virtualization systems, for example VMware for x86. VM is definitely a much better implementation than VMware, not least because guest OSes are now written expecting to be run on VM, but the concept is identical.
      • VMWare isn't even close. The zSeries by default has support for 15 (16 total, 1 reserved for the system) Logical Partitions. That's 15 different, independent Operating Systems being run simultaneously right on the hardware. Now, in each of those 15 LPARs you can run VM, which multiplexes that LPAR's resources among the VM guest operating systems.

        VM has been around since the 60's, and is incredibly optimized: so much so that it's very close to actually running on the hardware.

        In the VM you have complete access to all devices on the zServer (assuming the guest OS has drivers for them). Can VMware do this? No. Could it? Possibly... but it's nowhere near the maturity of VM.
        • Furthermore, VMWare runs on x86, which is absolutely horrible at running virtual machines (arguably, it is horrible at the concept of multitasking in general). S/390 hardware, on the other hand, was designed for this, and has been optimized over a span of decades.

          Finkployd
        • So what you are saying is that it's a better implementation. Which is exactly what I said.
    • by cnladd ( 97597 )
      "This article basically says that you can take a midrange Sun server and do everything that a z800 can do but much better. I don't know of any Sun Server that can run N Linux clients in a VM at full speed. "

      Please point out the point in the article where this is mentioned, because I don't recall this ever being said. :)

      As a matter of fact, there are several points in the article where the author mentions that Sun (and other, traditional UNIX solutions) are intended to be used for completely different purposes than mainframes. From what I've read, he doesn't seem to be saying "Linux on a z/Series system outright sucks". I see an article whose point is "Don't spend $5M+ on a z/Series with Linux when one (or several, or whatever) PC or Sun system will do."

      Looking at the sidebars from the article, it appears that this is just part one in a series of three. From some of the statements in these sidebars, it appears that the main focus of this is trying to cut through much of IBM's FUD and point out that Linux on the mainframe isn't always the right way to go. He specifically states that in the second part he'll cover the areas in which he feels Linux on the mainframe makes sense.

      Finally, as far as being a Sun FUD campaign, what the hell makes you think that? I haven't seen one shred of evidence to support that. Sure, he only has figures for PCs and Suns, but he states that it's because that's all he had access to at the time. He came right out in the clear and admitted that so that everyone reading the article can keep that in mind. Having worked on both Sun and HP systems, I'm convinced that - if the Sun statistics are accurate - the stats will be similar on a similarly configured HP box as well.

      Now, just calm down a bit, okay? This is starting to sound like the arguments that I hear over the cube walls sometimes, with the mainframe folks cutting down the UNIX and NT folks, and vice versa. This is the kind of crap that makes UNIX and NT folks think of the "mainframers" as a bunch of old bigots with blinders on. I think we all realize that different types of systems are valuable for different purposes, and most of us who read Slashdot regularly know how to keep an open mind.
    • I don't know about Linux (does anybody have any information about Linux on one of the Sun big irons?), but a Sun Starfire (E10K) can run up to 16 Solaris domains, a Star Kitty (SF12K) can run up to 9, and a Star Cat (SF15K) can run up to 18 (even the lower mid-frames have this possibility). It's not running N clients in a VM, but running x different OSes on one platform. The number of domains you run is limited by the number of system boards, i.e. hard partitioning. And I don't know if it's true, but I've heard rumors that Sun is working on soft partitioning as well, so you're not limited by the number of system boards anymore.

      These domains can communicate with each other through their "Giga-plane" via IDN (Inter Domain Networking). So I guess they can communicate at Gigabit speed with each other.

      So Unix/Sun boxes might not be able to do exactly what a z800 series can, but the 2 things you've explained are possible in some way or another.
  • At least this site will inform the user that story content has changed...

  • I don't believe student dorms have either the floor space or the electrical power to handle an IBM mainframe... and there is no plumbing available for cooling!

    I guess I'll stick with my old P-200 for the moment....

  • Skipping to the middle of page 6:
    In the Unix world, however, processes are not created. Processes spring magically into existence and start to run when their contexts are loaded. As a result, most Unix CPUs have hardware context management allowing them to completely switch processes within one instruction cycle -- or even to run more than one instruction stream concurrently.

    I'll grant the point of the paragraph - that UNIX process creation/destruction occurs more often than MVS address space creation or even task creation. And I'll buy that some "UNIX CPUs" can handle multiple instruction streams. But have any really implemented a single-cycle context switch?

    • by Anonymous Coward
      I know IBM has demonstrated this with some of their next generation PPCs. Alpha has had single-cycle context switch for years (I don't know if tru64 or alphaLinux supports it, but OpenVMS does.)
    • I scratched my head over this as well. SPARC processors *do* have the ability to switch between user and kernel contexts in a single instruction. This includes pushing and popping the registers on a special hardware "stack" (see this [google.com] for more info). This might be what he was talking about.
  • From the article: As of February 26, 2002 IBM said: [...] The Linux code to exploit the 64-bit architecture will be available from the IBM developerWorks Web site later. Linux for S/390, currently available on G5, G6, and Multiprise 3000 processors, will be able to execute on zSeries servers in 31-bit mode.

    I checked, and yes, the IBM site [ibm.com] did say "31-bit" mode. Won't this break a lot of Linux apps?

    • In short, no ;) However, with the z800 model 0LF announced, IBM has support for full 64-bit in the kernel (there are some restrictions in certain situations). But you can choose.
    • Nope, Linux apps run fine on it (obviously, since quite a number of large companies are doing it).

      Interestingly, the reason for this 31-bit addressing is rather funny. When IBM mainframes went from 24-bit to 32-bit addressing, programs that were designed to run in 24-bit mode ("below the line," as we say in the biz) needed to use 24-bit addresses and could not deal with larger addressing. The missing bit is used to denote whether an address is 24-bit or 31-bit.

      Finkployd
      • Great! So at some point they can just declare that 24 bit addressing is obsolete, and use that bit as a switch to 63 bit mode. Or maybe the new addressing scheme should reserve the top two bits, and then they could choose between 31, 62, and 126 bit addressing modes.

      • Makes a good myth, but no.
        This is working from the old stuff forward, so I may be missing something about the new stuff.
        Addressing is done by a 0-4095 byte displacement from a base register (one of GRs 1 through 15). There is an RX mode that uses both a base and an index register (from the same set).
        The standard IBM calling sequence for FORTRAN, COBOL, PL/I, etc. is typically invoked by BALR 14,15 after loading GR 15 with the entry point.
        GR 13 points to a register save area, typically in the body of the calling program. Non-recursive.
        GR 15 points to the entry point in the called program.
        GR 14 points to the return address in the calling program.
        GR 1 points to a parameter list: successive addresses, with the last parameter marked by setting the high bit.
        Probably one or two other changes. Pretty transparent, actually, unless you do things like stash tag bits in the non-address part of an address register.

        The 31/24 distinction is a PSW thing, like the ASCII/EBCDIC bit.
        In 31-bit mode, the old condition code from BALR has to be stored somewhere else.
        LA (Load Address) no longer necessarily zeroes the high byte.
        There is a format change in the PSW (Program Status Word).

  • This article [linuxjournal.com] was in linux journal about two years ago, but most of the discussion is still fairly interesting to read. He brings up a lot of very good points and has some interesting numbers to back him up.
  • There is another article on ZDNET link [com.com] that roasts Linux on the mainframe. I think people were too harsh toward Sun when they published their report, but the reality is that Linux is not ready for the mainframe YET.
  • My Experieces (Score:5, Informative)

    by FJ ( 18034 ) on Friday May 10, 2002 @03:54PM (#3498845)
    I work for a large shop and here are my experiences running Linux on the Mainframe.

    First, I'm a mainframe person. I like the mainframe. I've used Linux at home for about 6 years so I was chosen to be on a "proof of concept" with running Linux under VM. I've been doing OS/390 & z/OS support for about 4 years. I'm in the "30 & under" crowd and I've seen both the Unix & mainframe side of support.

    We've played with TurboLinux, SuSE, & the RedHat beta for the zSeries. We're running zVM 4.2.

    First, lots of things work really well. It was strange seeing the normal Linux boot messages appearing in zVM. We've been primarily using the 2.4 series kernel, but we have tested things with the 2.2 series. We've played with Oracle, WebSphere, DB2 Connect, Samba, Apache, IBM HTTPD server. The only technical problem we really had was Samba caused kernel crashes. Some patches from the IBM z/Linux site fixed it.

    The biggest problems we have had are philosophical and perception-based. Here are some of the difficulties:

    We had to force our customers to a shared outage window. Even VM needs to be IPLed every year or so. If they can't tolerate a 6 hour window every quarter or 6 months, we won't support them on the zSeries. A second box could make it a true zero downtime machine, but we are initially targeting the low usage, non critical machines.

    Lots of people have the delusion that the zSeries processor is hundreds of times faster than other processors. It isn't. It's fast, but not orders of magnitude faster than the other processors out there. It's also not designed for heavy computational applications. Don't try; you'll hate the results. It can be done on a limited basis, but don't try to compute pi. It works better on I/O-related applications, which are traditional mainframe strengths.

    A lot of the code on the zSeries for Linux is the first generation to be released there. A lot of the performance perks for that platform are not there yet. If there is enough adoption, ISVs will make the performance better, but right now a lot of them are testing the water.

    Some people have the illusion that if you take a piece of crap application on Solaris or NT and run it on Linux, it will run better. The OS typically doesn't make your piece of crap any better.

    When people buy an Intel or Solaris server, they typically get the most memory & disk space they can afford. This is the worst thing to do under VM. We had a lot of people want 2GB of RAM and 100GB of disk space. Later analysis showed they could survive with much less memory (some as little as 128M) and used almost none of the disk. The reason for this is simple. When you buy a Sun or Intel server, upgrading it is a pain, so you do the pain up front. Under VM you can change the amount of memory & allocate more disk very easily. This was a big learning curve for people, and not just the Unix people. The major difference we found in memory is that Linux uses spare memory as disk cache, while on the zSeries the hardware has lots of cache of its own.

    People needed to understand they were sharing CPU & memory. Performance tuning has a very big impact. On Intel or Sun who cares if your application is looping endlessly. On VM everyone cares. Lots of our Unix sysadmins really hated this fact and the customers couldn't fathom it. You want to put applications with LOW USAGE on this platform. The idea behind sharing is that nobody needs all of the CPU all of the time. If you run at 100% on a 4-way Pentium CPU, you won't like sharing CPU with dozens of other virtual servers and they won't like sharing with you. This was probably the most difficult thing to stress to the users.

    This isn't emulating Intel. It took a while to get people to understand that VM wasn't emulating an Intel machine and that the nice pre-compiled Intel binaries don't work. Lots of people went out looking for software from ISVs and the ISVs said "Sure, we support Linux". What they didn't say is "We support Linux/390". There is a very big difference. Linux is not just Linux on Intel, and it took some education to get this through to the users.

    Once we convinced people that it isn't running Intel, they tried to recompile their favorite programs and found out that for some applications a "simple" recompile wasn't enough. I would imagine that the power-pc folks had similar problems, but some programs take a little investigation.

    There were some really nice aspects of running on a zSeries.

    Disaster Recovery is easier. Mainframe DR has been established for decades and it isn't terribly different with Linux on the mainframe. Much simpler than having dozens of individual machines to recover.

    The hardware never fails. It may be expensive, but CPUs have a 30-year mean time to failure, the disk is all RAID, and multiple I/O channels help ensure there is no single point of failure. Hardware can typically be swapped out without taking an outage. CPUs can be dynamically added.

    If you want to copy an existing virtual server and make a test copy, that can be done in minutes. That makes it really nice for developers who want to do the "what if I do this" tests.

    VM's programmable operator facility makes for some nice system automation. You can also create Rexx scripts for your operations so they never even need to logon to Linux to do certain work.

    Creating a new server is easy. No more running through the install screens. Once you have one customized, just use it as a template for new servers.

    We were able to have certain drives shared as read-only across all images. This makes support a little easier. We made one Linux have the drive read-write. When we changed it there, we just unmounted & remounted it on the other images (a Rexx script made that painless) and it was magically everywhere. We can even take down the read-write Linux to be sure something isn't accidentally changed. We've been experimenting with sharing lots of Linux mount points this way. We estimate we can concentrate about 100GB down to 2GB, which cuts down the overall cost. The majority of code on all Linux images is the same and will tolerate being shared, so as long as your environment is stable and you do some planning, you can dramatically cut down on disk usage. The amount of disk you save is directly related to the number of images your machine can handle.

    The virtual-Linux to virtual-Linux IP traffic happens at memory-to-memory speed. It's also very nice not to worry about network issues when trying to debug a problem, because there is no physical network.

    Recovery is easier if an image won't boot. Just attach the drive to another, running image and fix the problem. No need to physically go to the machine.

    Sorry to ramble, but this is what we have found. Linux on the zSeries has its place and does work, but it's not a solution to every problem. Few things are.
  • Market research company Meta Group takes a swing at Linux, saying that Linux mainframes will soon be irrelevant and that the OS isn't mature enough to handle critical business applications.

    You're the one that's not MATURE!
  • First I must say that my story is from a long time ago (like the author of the article). I was an operator on an IBM 3083 17 years ago and left a few months after they had installed VM. We had the mainframe connected to the company's bottling plants across the whole country (running System/36 minis), and during the day, especially later in the afternoon, most of the various plants' accounting/processing data would come in. The night shift would then start the accounting and processing jobs around 7PM and run them through the night, and a few hours before the morning shift came in the storage backup system would run (HSM), and then during the day the coders would do their thing, etc.

    The whole point of VM (and the mainframe) was that it is optimised for business systems, AFAIK, and unlike heavy scientific computing loads, there is seldom a need for incredible processing power in the CPU, but there is a need for distributed processes and extremely good I/O since most business tasks are often thrashing around on the disk getting and updating customer/financial info etc.

    I don't think the zSeries would be doing as well as it is (eBay, Swedish and Japanese telcos etc) if there wasn't some advantage to this system. Probably, what sways a lot of these deals is that if your machine has any problems IBM will have a technician there pronto and their staff (at least in those days) were very professional and well trained.
  • hm. The article on lw.com states that sendmail can handle about 2 million mailboxes on an IBM mainframe. Well, according to this article [ibm.com] at ibm.com, a mainframe of the same architecture can handle 250+ million users (IMAP, POP3 and SMTP). Guess Linux has to step aside in this specific case for the not-really-well-known TPF [ibm.com] operating system (currently used at the IT farms of airlines and large banks).
  • "Thank you for your interest. We are performing system maintenance at this time. LinuxWorld will return shortly."
  • If you still have the article somewhere can you please post a link to it?

    Thank you
