The Pros and Cons of Mainframe Linux 259
magellan writes "There is a good article on LinuxWorld.com that goes over some of the pros and cons of Linux on the mainframe. The author, Paul Murphy, is an old mainframer and a current UNIX user, as well as a frequent contributor to LinuxWorld.com, so he has some good insights."
Gotta be better than M$ Windows Datacenter. (Score:3, Funny)
Alpha? (Score:1)
Re:Gotta be better than M$ Windows Datacenter. (Score:1)
and a site that is so current, too:
"Last modified Wednesday, September 19, 2001 11:53:39"
A friend once saw a blue screen in NT and said, "Ah, Windows running in its natural state."
Linux has scalibility problems (Score:1, Interesting)
Re:Linux has scalibility problems (Score:1, Informative)
Re:Linux has scalibility problems (Score:3, Funny)
-atrowe: Card-carrying Mensa member. I have no toleranse for stupidity.
"tolerance"? . . . oh, the irony.
Re:Linux has scalibility problems (Score:2, Insightful)
UNIX was designed to get away from the mainframe usage paradigm, not reinforce it. Read the article. It yields good insight into the differences between the Mainframe Way (machine resources are more important than user demands) and the UNIX Way (users needs are more important than the machine's).
User needs? no no no (Score:1)
Mainframes - designed for the benefit of the machine.
PCs - designed for the benefit of the user.
Unix - designed for the benefit of hunt-and-peck administrators and obscure language designers.
Re:User needs? no no no (Score:1)
You're right. C is so very obscure. Hardly any software in use today is written in C.
Re:Linux has scalibility problems (Score:2)
Solaris, Irix, and HP/UX were designed as big UNIX (which is something rather different from a small mainframe). Each was probably based on Berkeley UNIX, and each tried to distinguish itself as something special. Big UNIX did encroach on the mainframe's turf, often doing more, better, and cheaper.
Re:Linux has scalibility problems (Score:1)
Re:Linux has scalibility problems (Score:1)
Re:Linux has scalibility problems (Score:1)
Thanks,
dustym
Re:Linux has scalibility problems (Score:1)
That's why schools in the US are so abysmal. They have gone the route of shoveling dry facts and data into the empty heads of the young rather than teaching them how to think. It's like mental welfare.
This is about MAINFRAMES not minis (Score:2, Insightful)
Um, Solaris, Irix, and HP/UX (shudder) are *NOT* mainframe operating systems.
MVS and OS/390 are mainframe operating systems.
You are talking out of your nether regions, especially when you call linux's TCP/IP stack inferior to HP/UX. I have adminned every operating system mentioned above except Irix, and you sir are grossly incorrect.
Re:This is about MAINFRAMES not minis (Score:1)
Re:Linux has scalibility problems (Score:2)
/Janne
Re:Linux has scalibility problems (Score:1)
Re:Linux has scalibility problems (Score:2)
Zero copy networking not good enough for you? You want networking that somehow zips packets along without even touching them except by telepathy?
Back under your bridge, troll.
Re:Linux has scalibility problems (Score:4, Informative)
Linux makes a great operating system for certain classes of massively parallel machines: clusters. It is low-cost and has a decent MPI (message-passing interface) implementation. It also runs on commodity hardware. Don't be surprised if you see the next ASCI supercomputer using Linux as the OS for each node.
You are correct in that Linux is not a good operating system for larger shared-memory multiprocessors. It lacks the fine-grained locking necessary to run the same kernel instance across dozens of processors.
I can't comment on mainframes because I am unaware of their architecture. I do know that high-end UNIX servers and mainframes are different beasts: the former focus on performance, while the latter prize uptime above all. I also believe that IBM, the king of mainframes, has not used UNIX as its traditional operating system. Thus you are comparing apples to oranges. Linux makes a perfectly decent "mainframe OS" if you are partitioning the machine into multiple virtual machines.
Also please elaborate on "Linux' inferior TCP/IP stack". And "inferior handling of multi-threading on a large scale". Are Solaris light-weight processes any better?
Re:Linux has scalibility problems (Score:2, Funny)
Re:Linux has scalibility problems (Score:2)
Doesn't solve anything.... (Score:1, Troll)
You've got the right hardware, but the software still isn't 'mainframe' level.
Thanks for the article... (Score:4, Funny)
Confused author? (Score:1)
(see this report of a test on a 733-MHz Linux system for details on mstone) run on the mainframe.
Yes, it's a Word document. You'd think anyone writing an article for a Linux-targeted site would at least convert the document to something everyone can read...
Is not better than solaris (Score:2, Interesting)
I know the Linux-biased folks are going to mod me down for this... but I am speaking from experience.
Report roasts Linux (Score:4, Interesting)
Re:Report roasts Linux (Score:2)
Re:Report roasts Linux (Score:5, Informative)
Yes, but note firstly that this article is making two different points, and secondly that at least one of them is clearly wrong and deliberately misleading.
First the article claims that Linux on mainframes isn't price efficient compared to Linux on Intel, and that Intel boxes are emerging which have similar reliability to mainframes.
Possibly true; I don't know enough about mainframes to know, although I'm certainly not aware of these high-reliability Intel boxes.
Second, the article launches an ill-informed FUD assault on Linux, saying
Re:Report roasts Linux (it's only Meta Group) (Score:1)
I've noticed that whenever Meta Group reports on Linux, it always denigrates it. There have been articles on ZDNET and similar places where positive things have been said by Gartner, IDC, etc., but then at the end there are some words of doom from Meta Group: "it may not be ready", "there might be problems", "you can't yet run Linux on 1000-processor machines...".
For example, look at this article about Linux in investment banks [vnunet.com]. Positive news all the way through until:
But Meta Group programme director Ashim Pal says the cost of the platform is not the only consideration. 'The operating system is a relatively small part of the total cost of ownership. Purely focusing on the cost of the platform is deluded,' he said.
If you go to their web site and look for recent documents featuring Linux in the title, you will find:
- Linux on the Mainframe: Nice Place to Visit, But...
- Linux Management: More Hype than Substance
- No Advantage From Linux PDAs
- Choose Palm or Pocket PC - Linux Only for Custom Apps
- Linux PDAs Offer Alternative for Low-End and Specialized Markets
- Companies Should Consider Limited Server-Based Linux Implementations
- Microsoft Criticizes Linux as Operating System Issues Move to Web Services Level
- Linux Dreams of Management Promotion
- Linux: Application Server Tiers or Tears?
I guess you can make your own minds up. BTW, Meta Group have been having a few problems themselves recently [internet.com].
Re:Report roasts Linux (Score:2)
"The company says that, at least for the moment, Linux isn't capable of running more complex, critical applications, such as e-mail notification systems."
Intel-based servers are emerging with mainframe-like capabilities
Geezz...
Simple rules of thumb when buying hi end hardware (Score:4, Funny)
I hope you find it helpful.
1. Compare price/performance characteristics of the competing hardware prospects.
2. Ask others who have dealt with the vendor(s) in question for their opinion.
3. If 1 and 2 do not break the tie, ask yourself "What would Chewbacca do?"
Linux unstable on a mainframe? (Score:4, Funny)
Sure, I'd have to get a new chair to reach all the way up there...
Re:Linux unstable on a mainframe? (Score:1, Funny)
> refrigerators - I bet I could put my desktop
> and my laptop Linux machines on a mainframe,
> and it'd still be totally stable.
>
> Sure, I'd have to get a new chair to reach
> all the way up there...
Reminds me of a story from "The Tao of Programming":
The Magician of the Ivory Tower brought his latest invention for the Master
Programmer to examine. The Magician wheeled a large black box into the
Master's office while the Master waited in silence.
"This is an integrated, distributed, general-purpose workstation," began the
Magician, "ergonomically designed with a proprietary operating system, sixth
generation languages, and multiple state of the art user interfaces. It
took my assistants several hundred man years to construct. Is it not
amazing?"
The Master Programmer raised his eyebrows slightly. "It is indeed amazing,"
he said.
"Corporate Headquarters has commanded," continued the Magician, "that
everyone use this workstation as a platform for new programs. Do you agree
to this?"
"Certainly," replied the Master. "I will have it transported to the Data
Center immediately!" And the Magician returned to his tower, well pleased.
Several days later, a novice wandered into the office of the Master
Programmer and said, "I cannot find the listing for my new program. Do you
know where it might be?"
"Yes," replied the Master, "the listings are stacked on the platform in the
Data Center."
con: People look at you funny (Score:1)
"Peace, Love, Penguin? What does that mean...?"
Re:con: People look at you funny (Score:1)
http://www-1.ibm.com/servers/eserver/pll/pll_pa
Pro/Con (Score:1)
Con: It's cheaper
Re:The real Pro (Score:1)
It's all about the bragging rights.
Same as how no CEO has ever bragged about hiring the cheapest consultant on the block. Perceived value, rather than actual value determines salability. There's a place for a product like this in the marketplace just as there's a place in the marketplace for Subarus. It's fun to be on the cutting edge.
----
The logic of IBM's approach? (Score:2, Interesting)
This review is not really about Linux on Mainframes in general, but instead about IBM's attempt to use the zSeries to run lots of virtual machines on one big machine. To quote:
It is possible to run Linux as a single operating system controlling the entire z800 processor.
[snip]
We mostly ignore this approach here both because it isn't different from running Linux on any other four-way box and because the major benefits IBM advertises for mainframe Linux generally derive from VM's ability to switch between multiple Linux ghosts on the same machine.
So although his review is very negative, it is really negative about this ghosting idea, whose benefits I don't yet understand except in the special case where you are a server host with a lot of customers with very small load/resource requirements (i.e., customers who don't need a whole 1U 1GHz x86 rack server with 128MB of RAM, or whatever the current bottom-of-the-range rack systems are these days, to themselves, but do want an entire OS to themselves).
Furthermore, as he puts it at the end of the article:
At list price, you could create a rack of 80 Dell 8450 servers
Re:The logic of IBM's approach? (Score:2)
I personally would never spend $4 million on any single machine (something about putting all your eggs in one basket), no matter the service contract or guarantee. Clustering cheaper, more easily replaceable machinery seems to me a better solution in the long run; YMMV.
I'd love to have virtual machines on an Intel box, though. Imagine being able to create a stock install of your favorite distro whenever you wanted a new server for e-mail, web, etc., when you don't have an unused machine and don't wish to risk crashing your e-mail server to set up a web server. If virtual machine software were GPL, or even cheap commercial software, I would jump on it in a heartbeat.
Re:The logic of IBM's approach? (Score:2)
Possibly not as bad as it seems (Score:2, Informative)
Pro and con of IBM's mainframe Linux? (Score:1)
Does Linux really suck, or is it just that he loves his UNIX too much?
Well, I don't care which it is. Linux is fine on my notebook, and I'm not gonna buy those big machines any time soon, but it would be nice if the article's title reflected its contents.
What about our rights? (Score:1, Flamebait)
Then there's the expens.... hmm, what do you know... it says here they run Linux on these boxes. Never mind. This goes to show that Linux is ready for anything; it's not just for cheapskates anymore. This setup rox!
Re:What about our rights? (Score:1)
Re:What about our rights? (Score:1)
OLTP for Linux (Score:5, Informative)
Much server work looks like this: request comes in from network, appropriate program gets loaded, maybe talks to a database, runs for less than a second, returns results to network, exits. Typically, a large fraction of the resources used go into the "appropriate program gets loaded" step, doing the same startup ritual over and over.
There are two UNIX/Linux solutions to that startup overhead problem. One is to build the transaction program into the network application (as in Apache/mod_perl/php). Note that this uses an interpreter to protect the network application from bugs in transaction programs, which is a major performance hit.
The other approach is to use the regular UNIX/Linux program launch facilities to run a separate program for each transaction (as in CGI programs.) This is safer and easier to maintain, because the CGI programs each run in their own processes, but the cost of program loading (which might include initializing a Perl or Java environment) often dwarfs the cost of doing the useful work.
A mainframe transaction processor basically maintains process images which are ready to run a transaction, with all loading complete. When a process is needed to run a transaction, it's made by copying one of those process images (with read-only or copy-on-write sharing of pages) and launching it to do the job. The new process runs for a short period and exits. This is a facility that Linux/UNIX lack, because they were intended for interactive use, not server-side transactions.
Because Linux has copy-on-write semantics for fork(2), it should be possible to build a high-performance transaction facility under Linux. A transaction program initializes itself by loading everything it needs, but without any per-transaction data available. It then goes into a loop waiting for work and, on each request, forks off a copy of itself to do the job. Each copy does one transaction and exits. If it crashes or gets corrupted, only one transaction is affected. Note that there are no expensive exec(2) calls involved in starting a new transaction.
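The fork-per-transaction idea above can be sketched in a few lines of Python. This is only an illustration of the control flow; the names (`expensive_startup`, `handle_transaction`, `serve`) are invented for the example, not from any real framework.

```python
import os

def expensive_startup():
    # Stand-in for the costly one-time setup: loading interpreters,
    # config, database drivers, etc.
    return {"config": "loaded"}

def handle_transaction(state, request):
    # Runs in a forked child; a crash here affects only this transaction.
    return f"handled {request} with {state['config']} config"

def serve(requests):
    state = expensive_startup()       # paid once; shared via COW pages
    for req in requests:
        pid = os.fork()
        if pid == 0:                  # child: one transaction, then exit
            print(handle_transaction(state, req))
            os._exit(0)               # no exec() call to start the work
        os.waitpid(pid, 0)            # parent: reap and wait for more
    return state
```

The point of the sketch is that the child inherits the fully initialized state through copy-on-write pages, so per-transaction startup cost is essentially just the fork.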
Has this been done? It's obvious enough that somebody has probably tried it.
Re:OLTP for Linux (Score:2)
The model you describe is basically the one used by NCSA httpd at the time that Apache split off. Apache was redesigned to do its forks at startup time and then load-balance between looping httpds. This gave much higher performance.
Re:OLTP for Linux (Score:1, Informative)
Emacs is really a Lisp interpreter with a substantial runtime. If you had to wait for Emacs to bootstrap from the core Lisp runtime every time you called exec("emacs"), you could wait a long time. So when installing Emacs, you build the core Emacs (as temacs); it runs and dumps itself once it has loaded everything you want in the basic install. Then you run undump and get an executable.
With the disk image cached, startup is basically just creating a process context, mapping the pages, and bringing up X (skip this part for transaction processing).
Obviously this is at the application level and constrains application startup (no handles to external resources), but it could probably be hacked into the kernel without too much trouble.
Michael
Re:OLTP for Linux (Score:2)
Maybe I'm an idiot and am completely misunderstanding you, or maybe your analogy is breaking down here, but it seems to me that there is a Linux/Unix analog. For this kind of situation, you can do what many Unix servers do and create a pool of processes with one controlling/listening process that uses IPC to hand off requests to them. This can drop the startup costs significantly, even if the child processes get swapped out.
A prime example of this kind of usage can be found in Apache. And it's not overwhelmingly difficult to write a high-performance server, combining such a multithreaded controller with pool of child "transaction" processes. Heck, I've done it myself.
Re:OLTP for Linux (Score:2)
As I recall from the brief time that I did CICS programming, it was very similar to CGI programming, except that you weren't in a distinct process. Your program ran as something similar to a thread in a non-preemptive environment. Transactions were invoked and ran uninterrupted until they needed to perform I/O, say to the database. At that point, the thread was reassigned to another transaction's code. (I wanted to say that the thread was killed, but that's not accurate because it implies start-up/shut-down costs that just weren't there.) When the results of the database call were available, the transaction would be resurrected to process the data and transmit the results to the end user. The whole process was designed to maximize throughput, but it required a coding style similar to writing CGI for Win16. ;-)
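The run-until-I/O flow described above can be loosely mimicked with Python generators. This is only a sketch of the control flow, not actual CICS semantics; the names `transaction` and `scheduler` are invented for the example.

```python
def transaction(name):
    # Run uninterrupted until I/O is needed, then yield the request
    # (here, a pretend database query) back to the scheduler.
    result = yield ("db_query", name)
    # Resumed when the "database" answers; finish and return the reply.
    return f"{name}: sent {result} to user"

def scheduler(txns):
    # Drive each transaction to its first I/O request...
    pending = [(t, next(t)) for t in txns]
    done = []
    # ...then resume each one as its "database result" becomes available.
    for t, (op, arg) in pending:
        try:
            t.send(f"{op}({arg}) result")
        except StopIteration as fin:
            done.append(fin.value)
    return done
```

While one transaction waits for its result, the scheduler is free to run another, which is the throughput trick the comment describes.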
Re:OLTP for Linux (Score:1)
Re:OLTP for Linux (Score:2)
--JRZ
Re:OLTP for Linux (Score:2)
On the mainframe, this "central process" would be replaced by something like CICS, which has a ridiculously long track record of testing in hundreds of mission-critical environments around the world. It also includes a lot of highly-reliable features to automate common OLTP tasks, to eliminate yet another source of bugs.
Basically, what I'm saying is: mainframe software infrastructure is really, really reliable. Way better than Linux. If you need an OLTP system that runs on a mainframe and doesn't crash, use OS/390, not Linux.
Re:OLTP for Linux (Score:2)
In pre-forking, process A sets up its environment, which includes the service port and a shared memory segment (often just a simple memory-mapped file). Then it creates a bunch of worker forks. At this point, the master could simply do a ps once every second and see if its children have died, but usually the shared memory is used to communicate activity to the master.
Since all the forks keep the file handles, any one of them can accept a job request. Thus they can be truly independent, and your issue of a corruptible monitor doesn't apply. (Except in the case where crashed workers need to be replaced by the monitor; but I think you were saying something different.)
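A minimal pre-forking sketch of this, assuming a Unix-like system: the parent opens the listening socket once, forks the workers, and each worker accepts independently on the inherited descriptor. The function name and the one-shot worker behavior are illustrative only; real servers loop and replace dead workers.

```python
import os
import socket

def preforked_server(n_workers=2):
    # Parent opens the listening socket once; workers inherit it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # ephemeral port for the example
    srv.listen(n_workers * 2)
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:
            # Worker: any one of us can accept the next job request.
            conn, _addr = srv.accept()
            conn.sendall(b"ok\n")
            conn.close()
            os._exit(0)               # real workers would loop instead
        pids.append(pid)
    return srv, pids
```

Because every worker holds the same listening descriptor, there is no central dispatcher in the request path, which is the independence the comment is pointing at.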
It's true that sharing information between the workers and the monitor has a potential for corruption, but I'm not seeing how mainframes avoid this (except if you're saying that the functionality normally handled by programmed IPC is relegated to other tasks).
Re:OLTP for Linux (Score:2)
That's the intention and the attempt. That's what you're paying for. YMMV.
Things may have changed since, but CICS used to run all of its "joblets" (whatever they were called) in a single process, which could be taken out by a single bad module (like any FORTRAN program).
Re:OLTP for Linux (Score:2)
The mainframe people have had this for decades, and it does have major security advantages.
Re:OLTP for Linux (Score:1, Interesting)
Implementations vary, and some are literal in that they were little more than a 'cp' from the filesystem onto the swap file and not much else. Some support shared libs, some don't. With big cheap memory, hitting a cached filesystem was far faster than even a linear read from the swapfile. Some were so bad, the +t bit would generally slow performance.
Virtual memory systems, such as DEC's VMS, viewed all executable code files as if they were little swapfiles and would always share image pages on a copy-on-write basis. In effect, the OS opened the image file once for all users on the system. You could also tell the OS to pre-load it by fixing it in your choice of either virtual or physical memory.
I believe Linux has a partial implementation. It does share resident executable pages across active processes. It might also do something creative with the +t bit, but I have no idea of the implementation. I don't think it has a "complete" implementation, in that you cannot pre-load images or fix an image into physical memory, and I don't think it "learns" anything from the first activation to speed later ones.
Reading machine code blocks from disk is only part of the image activation problem. Page tables have to be grown, zeroed memory has to be allocated, shared resources/libs have to be mapped, etc. In some cases, if you can pre-load, the system can learn by pre-building any number of data structures that speed multiple re-activations.
Then again, the fastest way to design a maximal performance facility is to activate and initialize both the image, and program logic itself, only once. You can then use IPC type tools to exchange requests with it.
TMPFS (Score:1)
To get this effect cheaply and repeatably in Linux, copy the relevant binaries under a tmpfs mounted directory.
Re:OLTP for Linux (Score:2, Interesting)
"load this executable image in memory and keep it there, next time someone asks for this program use the loaded image, DO NOT LOAD A NEW IMAGE from disk"
This was a manual optimization for early UNIXes. I don't think any modern UNIX uses the sticky bit in this way because that's an optimization that naturally falls out of the page aging algorithms.
Re:OLTP for Linux (Score:2)
I'm sure others will point this out too, but this just isn't true of mod_perl, at least (I can't speak for PHP; I've never used it). Check out the benchmarks in Stas Bekman's mod_perl User Guide [apache.org]. The skinny: application code under mod_perl is compiled once, the first time it's accessed, and thereafter runs at (in effect) the speed of any Apache handler, modulo database-, network-, or bad-code-related bottlenecks. I can't imagine any way of making a network-accessible program more scalable, short of starting from scratch (no generic httpd) in C. You notice all those "http://www.foo.com/dynamic/foo.c?arg=val" type URLs? No, neither did I...
Re:OLTP for Linux (Score:2)
Sun FUD Campaign (Score:5, Interesting)
First, I'm fairly qualified to talk about what the zServer can do. For those of you who don't remember, I'm one of the guys that helped win a z800 for Clarkson University [clarkson.edu] that will be used in our Open Source Institute [clarkson.edu]. I'm also the technical lead for COSI (whatever that means).
Some history: Clarkson University has always had a very good relationship with IBM; they employ a large number of Clarkson students and graduates (including me, in the Extreme Blue [ibm.com] program). So if you think that biases my opinion, well, too bad: I've talked with the guys making sure that Linux runs and is fully integrated on the zSeries platform, and with one of the original Linux S390 authors, Boas Betzler.
All of these people have real experience with what the zSeries can do, as do I, since I've seen it in action. A zServer is unique in the sense that you can (with the right model) run Linux S390, VM, zOS, and other guest operating systems in Logical Partitions. These all act independently of one another, just as if they were separate machines on a network. This is great if you have DB2 with maybe a web frontend, because both of those machines can talk at memory speed via HiperSockets, and the only outside link is the network connection to the web server, which is at Gigabit speed (did I mention that you can do full-speed gigabit with one of these things on multiple interfaces?).
This article basically says that you can take a midrange Sun server and do everything that a z800 can do but much better. I don't know of any Sun Server that can run N Linux clients in a VM at full speed.
They aren't the solution to every problem, but a zServer certainly is a better solution than what is presented in the article. I really don't have time to go into detail with everything, as it's a lengthy article, but suffice it to say that this is nowhere near 100% accurate.
Re:Sun FUD Campaign (Score:1)
Re:Sun FUD Campaign (Score:1)
VM has been around since the 60's, and is incredibly optimized: so much so that it's very close to actually running on the hardware.
Inside VM you have complete access to all devices on the zServer (assuming the guest OS has drivers for them). Can VMware do this? No. Could it, possibly? Maybe, but it's nowhere near the maturity of VM.
Re: (Score:2)
Re:Sun FUD Campaign (Score:2)
Re:Sun FUD Campaign (Score:3, Insightful)
Please point out the point in the article where this is mentioned, because I don't recall this ever being said.
As a matter of fact, there are several points in the article where the author mentions that Sun (and other traditional UNIX solutions) are intended for completely different purposes than mainframes. From what I've read, he doesn't seem to be saying "Linux on a z/Series system outright sucks". I see an article whose point is "Don't spend $5M+ on a z/Series with Linux when one (or several, or whatever) PC or Sun systems will do."
Looking at the sidebars from the article, it appears that this is just part one in a series of three. From some of the statements in these sidebars, it appears that the main focus of this is trying to cut through much of IBM's FUD and point out that Linux on the mainframe isn't always the right way to go. He specifically states that in the second part he'll cover the areas in which he feels Linux on the mainframe makes sense.
Finally, as far as this being a Sun FUD campaign, what the hell makes you think that? I haven't seen one shred of evidence to support it. Sure, he only has figures for PCs and Suns, but he states that that's because it's all he had access to at the time. He came right out and admitted it so that everyone reading the article can keep it in mind. Having worked on both Sun and HP systems, I'm convinced that, if the Sun statistics are accurate, the stats will be similar on a similarly configured HP box as well.
Now, just calm down a bit, okay? This is starting to sound like the arguments that I hear over the cube walls sometimes, with the mainframe folks cutting down the UNIX and NT folks, and vice versa. This is the kind of crap that makes UNIX and NT folks think of the "mainframers" as a bunch of old bigots with blinders on. I think we all realize that different types of systems are valuable for different purposes, and most of us who read Slashdot regularly know how to keep an open mind.
Re:Sun FUD Campaign (Score:1)
These domains can communicate with each other through their "Giga-plane" via IDN (Inter Domain Networking). So I guess they can communicate at Gigabit speed with each other.
So Unix/Sun boxes might not be able to do exactly what a z800 series can, but the 2 things you've explained are possible in some way or another.
Re:Sun FUD Campaign (Score:2)
You use mainframes for reliability and concentration, where something critical doesn't work if it's distributed.
80 rackmount x86 machines have better price/performance, at least until something like Chernobyl goes off in all 80 of them at the same time.
Mainframes and their operating systems are great for certain applications, but Linux generally isn't part of those applications.
Yet.
*Revised* Version (Score:1)
It won't work here. (Score:1)
I guess I'll stick with my old P-200 for the moment....
Puzzling claim, somewhat buried (Score:2)
I'll grant the point of the paragraph: that UNIX process creation/destruction occurs more often than MVS address space creation, or even task creation. And I'll buy that some "UNIX CPUs" can handle multiple instruction streams. But have any really implemented a single-cycle context switch?
Re:Puzzling claim, somewhat buried (Score:1, Informative)
Re:Puzzling claim, somewhat buried (Score:2)
31-bit mode (Score:2)
I checked, and yes, the IBM site [ibm.com] did say "31-bit" mode. Won't this break a lot of Linux apps?
Re:31-bit mode (Score:1)
Re: (Score:2)
Re:31-bit mode (Score:2)
Re:31-bit mode (Score:2)
This is working from the old stuff forward, so I may be missing something about the new stuff.
Addressing is done by a 0-4095 byte displacement from a base register (1 through 15). There is an RX mode that uses both a base and an index register (from the same set).
The standard IBM calling sequence for FORTRAN, COBOL, PL/I, etc. is typically invoked by BALR 14,15 after loading GR 15 with the entry point.
GR 13 points to a register save area, typically in the body of calling program. Non-Recursive.
GR 15 points to the Entry Point in the Called Program.
GR 14 points to the Return Address in the Calling Program.
GR 1 points to a parameter list. Successive addresses. Last parameter is marked with high bit set.
Probably one or two other changes. Pretty transparent, actually, unless you do things like tag bits in the non-address part of an address register.
The 31/24 is a PSW thing, like the ASCII/EBCDIC bit.
In 31-bit mode, the old condition code from BALR has to be stored somewhere else.
LA (Load Address) does not necessarily zero the high byte.
Format change in the PSW (Program status word)
Linux Journal Article (Score:1)
This article [linuxjournal.com] was in linux journal about two years ago, but most of the discussion is still fairly interesting to read. He brings up a lot of very good points and has some interesting numbers to back him up.
Report roasts linux on the mainframe (Score:2, Insightful)
My Experiences (Score:5, Informative)
First, I'm a mainframe person. I like the mainframe. I've used Linux at home for about 6 years so I was chosen to be on a "proof of concept" with running Linux under VM. I've been doing OS/390 & z/OS support for about 4 years. I'm in the "30 & under" crowd and I've seen both the Unix & mainframe side of support.
We've played with TurboLinux, SuSE, & the RedHat beta for the zSeries. We're running zVM 4.2.
First, lots of things work really well. It was strange seeing the normal Linux boot messages appearing in zVM. We've been primarily using the 2.4 series kernel, but we have tested things with the 2.2 series. We've played with Oracle, WebSphere, DB2 Connect, Samba, Apache, and the IBM HTTPD server. The only technical problem we really had was that Samba caused kernel crashes. Some patches from the IBM z/Linux site fixed it.
The biggest problems we have had are philosophical and perception-based. Here are some of the difficulties:
We had to force our customers to a shared outage window. Even VM needs to be IPLed every year or so. If they can't tolerate a 6 hour window every quarter or 6 months, we won't support them on the zSeries. A second box could make it a true zero downtime machine, but we are initially targeting the low usage, non critical machines.
Lots of people have the delusion that the zSeries processor is hundreds of times faster than other processors. It isn't. It's fast, but not orders of magnitude faster than the other processors out there. It's also not designed for heavy computational applications; don't try it, you'll hate the results. It can be done on a limited basis, but don't try to compute pi. It works better on I/O-bound applications, which are traditional mainframe strengths.
A lot of the code on the zSeries for Linux is the first generation to be released there. A lot of the performance perks for that platform are not there yet. If there is enough adoption, ISVs will make the performance better, but right now a lot of them are testing the water.
Some people have the illusion that if you take a piece of crap application on Solaris or NT and run it on Linux, it will run better. The OS typically doesn't make your piece of crap any better.
When people buy an Intel or Solaris server, they typically get the most memory and disk space they can afford. This is the worst thing to do under VM. We had a lot of people want 2GB of RAM and 100GB of disk space. Later analysis showed they could survive with much less memory (some with as little as 128MB) and used almost none of the disk. The reason for this is simple: when you buy a Sun or Intel server, upgrading it is a pain, so you do the pain up front. Under VM you can change the amount of memory and allocate more disk very easily. This was a big learning curve for people, and not just the Unix people. The major difference we found in memory usage is that Linux uses memory as disk cache, while on the zSeries the hardware has lots of its cache on board.
People needed to understand they were sharing CPU & memory. Performance tuning has a very big impact. On Intel or Sun who cares if your application is looping endlessly. On VM everyone cares. Lots of our Unix sysadmins really hated this fact and the customers couldn't fathom it. You want to put applications with LOW USAGE on this platform. The idea behind sharing is that nobody needs all of the CPU all of the time. If you run at 100% on a 4-way Pentium CPU, you won't like sharing CPU with dozens of other virtual servers and they won't like sharing with you. This was probably the most difficult thing to stress to the users.
This isn't emulating Intel. It took a while to get people to understand that VM wasn't emulating an Intel machine and that the nice pre-compiled Intel binaries don't work. Lots of people went out looking for software from ISVs and the ISVs said "Sure, we support Linux." What they didn't say is "We support Linux/390." There is a very big difference. Linux is not just Linux on Intel, and it took some education to get this through to the users.
Once we convinced people that it isn't running Intel, they tried to recompile their favorite programs and found out that for some applications a "simple" recompile wasn't enough. I would imagine that the PowerPC folks had similar problems; some programs take a little investigation.
There were some really nice aspects of running on a zSeries.
Disaster Recovery is easier. Mainframe DR has been established for decades and it isn't terribly different with Linux on the mainframe. Much simpler than having dozens of individual machines to recover.
The hardware never fails. It may be expensive, but CPUs have a 30-year mean time to failure, the disk is all RAID, and multiple I/O channels help ensure there is no single point of failure. Hardware can typically be swapped out without taking an outage. CPUs can be dynamically added.
If you want to copy an existing virtual server and make a test copy, that can be done in minutes. That makes it really nice for developers who want to do the "what if I do this" tests.
VM's programmable operator facility makes for some nice system automation. You can also create Rexx scripts for your operations so they never even need to log on to Linux to do certain work.
Creating a new server is easy. No more running through the install screens. Once you have one customized, just use it as a template for new servers.
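A hedged sketch of the template idea, assuming the guest disks are visible as image files from an admin image. Every path and the hostname edit are hypothetical, and with the default DRYRUN=echo the script only prints the commands it would run:

```shell
#!/bin/sh
# Hypothetical sketch: clone a customized template image for a new
# guest, then give the clone its own identity.  Set DRYRUN= (empty)
# to actually execute instead of printing.
DRYRUN=${DRYRUN-echo}

template=/images/linux-template.img   # hypothetical template disk
clone=/images/linux-web02.img         # hypothetical new guest disk

$DRYRUN cp "$template" "$clone"
$DRYRUN mount -o loop "$clone" /mnt/clone
$DRYRUN sed -i "s/template/web02/" /mnt/clone/etc/hostname
$DRYRUN umount /mnt/clone
```

On real z/VM the copy would more likely be a minidisk-to-minidisk copy, but the shape of the procedure is the same: copy once, then patch only what must differ per guest.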
We were able to have certain drives shared as read-only across all images. This makes support a little easier. We made one Linux image have the drive read-write. When we changed it there, we just unmounted & remounted it on the other images (a Rexx script made that painless) and it was magically everywhere. We can even take down the read-write Linux to be sure something isn't accidentally changed. We've been experimenting with sharing lots of Linux mount points this way. We estimate we can concentrate about 100GB down to 2GB, which cuts down the overall cost. The majority of code on all Linux images is the same and will tolerate being shared, so as long as your environment is stable and you do some planning, you can dramatically cut down on disk usage. The amount of disk you save is directly related to the number of images your machine can handle.
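The per-image remount step is simple enough to sketch. The device and mount point below are hypothetical stand-ins for the shared minidisk, and DRYRUN=echo (the default) prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch: refresh a shared read-only mount after the read-write
# image has updated the disk.  Names are illustrative only; set
# DRYRUN= (empty) to execute for real on each read-only image.
DRYRUN=${DRYRUN-echo}

dev=/dev/dasdb1          # hypothetical shared minidisk
mnt=/usr/local/shared    # hypothetical shared mount point

$DRYRUN umount "$mnt"
$DRYRUN mount -o ro "$dev" "$mnt"
```

Driving this from a central script (the poster used Rexx) is what makes a change on one image appear "everywhere" in one pass.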
The virtual-Linux to virtual-Linux IP traffic happens at memory-to-memory speed. It's also very nice not to worry about network issues when trying to debug a problem, because there is no physical network.
Recovery is easier if an image won't boot. Just attach the drive to another, running image and fix the problem. No need to physically go to the machine.
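A sketch of that rescue procedure, run from a healthy guest that has had the broken guest's disk linked to it. The device and paths are hypothetical, and DRYRUN=echo (the default) only prints the commands:

```shell
#!/bin/sh
# Sketch: fix an image that won't boot by mounting its root disk
# from a running guest.  Names are illustrative; set DRYRUN= (empty)
# to execute for real.
DRYRUN=${DRYRUN-echo}

baddev=/dev/dasdc1    # hypothetical: the broken guest's root disk
work=/mnt/rescue      # where we mount it on the running image

$DRYRUN mkdir -p "$work"
$DRYRUN mount "$baddev" "$work"
# ...edit $work/etc/fstab, or whatever stopped the boot...
$DRYRUN umount "$work"
```

The same trick works on physical hardware with a rescue CD, but under VM the "move the disk to another machine" step is a command, not a screwdriver.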
Sorry to ramble, but this is what we have found. Linux on the zSeries has its place and does work, but it's not a solution to every problem. Few things are.
from ZDnet (Score:1)
You're the one that's not MATURE!
IBM VM (Score:2)
The whole point of VM (and the mainframe) was that it is optimised for business systems, AFAIK. Unlike heavy scientific computing loads, there is seldom a need for incredible processing power in the CPU, but there is a need for distributed processes and extremely good I/O, since most business tasks are thrashing around on the disk getting and updating customer/financial info etc.
I don't think the zSeries would be doing as well as it is (eBay, Swedish and Japanese telcos etc) if there wasn't some advantage to this system. Probably, what sways a lot of these deals is that if your machine has any problems IBM will have a technician there pronto and their staff (at least in those days) were very professional and well trained.
Linux vs TPF (Score:1)
What have you done! (Score:2)
Where is the article? (Score:2)
Thank you
Re:Linux Sucks! (Score:1, Offtopic)
Re:finally (Score:5, Informative)
And GNU has nothing to do with porting of Linux to any platform.
And your corporate mainframe doesn't run NT. Or if it does, you're using a definition of "mainframe" with which I was not previously familiar.
And preemptive multithreading and protected process management is not new to 2.4.5. That's something that has been in every Unix since, oh, about 1970. It's also something that has been in every enterprise-class system in the past twenty years. I would hardly call it a boon to admins.
Mainframes and NT... (Score:1)
Most of the time, these boards have their own memory - some even have their own hard drives (I would imagine there are some which are simply a TN3270 hack for comm with the mainframe, and only draw power from the backplane - all memory, ports, and drives mounted to the module).
In other words, NT doesn't actually run on the mainframe, or VM - but rather on a dedicated processor board. I know these solutions exist for IBM hardware - I wouldn't doubt that there are similar solutions for other mainframe manufacturers as well (either by the manufacturer or licensed third parties).
Re:finally (Score:2, Informative)
Not to be a GNU pundit, but...
> And GNU has nothing to do with porting of Linux to any platform.
... is demonstrably false. Whether or not the individual people who port the Linux kernel to a new architecture are GNU affiliates is, simply put, irrelevant. The first step to getting Linux (or BSD, or whatever) on a new system is porting GCC to its architecture. While this is sometimes done by the people responsible for the Linux porting effort, most of the time this is done by members of the GCC team -- getting a new port to work without breaking all the others requires a great deal of cooperation and support.
Not to mention a working linker. Assembler. The list goes on. Who wrote those?
Lately I've heard a lot of Linux weenies dissing GNU and RMS as outdated hippies who are prone to overestimating their importance. Unfortunately for these people, GNU is the only reason Linux exists. It's not like Linus wrote his kernel and there just happened to be a binutils chain, compiler, libc, etc. just sitting there, ripe for the taking, without someone doing a HECK of a lot of work. Probably more work than goes into developing the Linux kernel itself.
Unlike some morons, I'm not here trying to say that Linux/Linus don't deserve a lot of credit; they do. But people who disagree with RMS and his policies often decide that that makes it okay to write revisionist history and downplay his importance to the OSS movement. Without him, there is no movement. Like him or not, don't forget it.
You are NOT talking about MAINFRAMES (Score:5, Informative)
Re: (Score:2)
Kids these days (Score:4, Funny)
That ain't big iron! The only system that runs both of those is the Alpha. And the Alpha ain't a mainframe.
Re:Kids these days (Score:2)
Re:Kids these days (Score:2)
NT Workstation/Server 4.0 ran on PPC just fine (well, aside from the lack of applications).
Re:finally (Score:2)
No it wasn't. GNU is a license, and licenses are notoriously bad for writing code (or doing anything except sit on a disk or shelf). Linus wrote Linux for the 386, but there was nothing stopping you running it on the 486 even on day 1. In fact, even today, you can run 386 code on a Pentium IV. The only disadvantage is that you'll not get the best optimizations. The GNU license was applied for version 0.12 in Jan 1992, before it was really a practical OS, and in March 1992, it became 0.95, and was really a usable OS.
Re:finally (Score:1)
The GPL is the licence you're thinking of.
GNU is the name of the operating system project started by Richard Stallman. It is not a "group" that "ported Linux to the 486" or a "licence". The first functional implementation of GNU was created by Linus Torvalds when he ported GNU to run over his Linux kernel. The fact that this combination is largely GNU software yet is called "Linux" is the bone of some contention from Stallman and others who worked on GNU, but that's another thread.
Have I been trolled?
Re:William Shatner (Score:2)