
Main Linux Distros Port To IBM's S/390

SuSE has announced that they are going to release a beta of SuSE Linux for IBM's S/390, out in late June. TurboLinux has signed an agreement to port their Linux distribution to S/390 as well. The only major distributor that is missing here is Red Hat. What do you think about Linux distributions and the S/390?
This discussion has been archived. No new comments can be posted.

Main Linux distributions port their Linux to IBM's S/390

  • Having to slug away with good ol' OS/400 on a virtual terminal here at work has been quite an experience for me... RPG is fun and all, but using the massive computing power of the S/390, and hopefully, the rest of the IBM mid/mainframes in the future, with a *nix is a boon to many of us IT types.. Plus, with IBM backing the Open Source movement, I think we could see a lot more acceptance in the "real world". 'Course, we can't let that go to Big Blue's head *LOL*
  • The main points here are:

    The S/390 is a superb, multi-user, highly secure stable machine; very scalable and powerful.
    Linux is very popular and getting ever more so with Microsoft's security faux pas; it is fairly stable, and fairly secure, so people are turning to it in droves for database/app server/web server type uses.

    Put the two together and you get a powerful machine, which is no longer esoteric and scary. It becomes more 'everyday' and easily accepted. Okay, this will lower the average salaries of an S/390 tech, but it will open up the S/390 to companies who wouldn't have thought of it before.

    Oh, and sources at IBM say Red Hat is working on it.

    The Parking Lot Is Full []
  • Aside from the target market, ISPs, not many folks will have a financial or work-related interest. I certainly won't. I've only worked at one company that even had a mainframe, and I was a user, not an admin at all. I'm still not an admin, and it's just not that likely I'll ever work with an S/390 or its successors.

    However, I still care. This is really neat stuff. It tells me stuff about Linux, Linux kernel hackers, and IBM that are all good to hear. Just knowing that people are using Linux in this way, and that it works, is something I care about.

    There are plenty of things which I care about that have little or no practical use to me. I'm probably never going to go to Mars, or other planetary systems. I still care about the research being done on them. I may never visit South Africa, or buy anything made there, or know anyone who lives there. But I'm still glad that the ANC won without a bloodbath. Only someone with no empathy or imagination would think that only those directly affected can care about something.
  • Hmmm... Actually, you could set up several VMs, use the internal networking, make one the controlling node and the others child nodes, and run a Beowulf internally in one S/390. I would think that that would put you well in the running for the "And the Point of This Was?" award.
  • Obligatory Disclaimer: I do work for IBM, and I am not representing the company or any part of it ... just speaking my own mind. Sheesh. :)

    You might be surprised.

    IBM announced some time ago that Linux compatibility was going to be added to AIX. While this may seem kinda backwards - I think it would be better to just make AIX fully Linux compatible, or scrap it entirely and run with Linux, period - it's an important signal of general acceptance of what's been called 'a college project gone horribly right'.

    Don't forget, IBM's been in the computer business longer than pretty much anyone else (if someone's older, I'd like to know who), and nobody learned a lesson the way they did about trying to dictate market terms -- remember Microchannel, or the PCjr? Sometimes the best lead can be taken by following along a while and seeing what happens.

    What I think we have here is not that there's an expectation that companies will ditch MVS in favor of running Linux on their S/390s, it's that companies that had no use for Big Iron with a cryptic and user-hostile OS may have use for Big Iron with a widely-distributed, widely-supported and widely-understood OS on it. After all, it's better to sell a nude S/390 than to not sell a fully loaded one.

    ikaros, who, being a lowly field tech, does not have access to top secret marketing plans, but this sounds rational to me.

  • Will it be commercially successful? Remember NT on RISC? It is an interesting reinvigoration strategy for S/390.... just got to fire up that S/390 in my basement and get to work on it. Or maybe IBM will give me one. ;-)
  • What, you can't play 3D Monster Maze on your Cray? Why not? I know for a fact you can play Quake on a Cray, so why not 3D Monster Maze?

    With respect to the topic at hand, I personally don't play with S/390s on a daily basis, but allowing your mainframe (if you already have one for your SAP/PeopleSoft stuff) to function as a web server as well is a smart strategy for leveraging existing resources. Bringing Linux to the high end computing market is probably more important than some of the other features we Linux freaks have been clamouring for, because those computers have sufficient oomph to do mission critical apps.
    By the way I want an S/390 ;-)
  • As noted elsewhere in this article/thread: the new (June 2000) issue of Linux Journal has an excellent article that serves to answer many of the whys and wherefores of this issue. Unfortunately, the article, by Adam J. Thornton, titled "The Penguin and the Dinosaur," unlike Linux is not yet 'open,' so we'll have to wait a month to see it in all its glory (and glorious it is! Includes much of the functionality/info of the 'Unix vs. NT' site!)

    Mr Thornton addresses the compatibility issue by pointing out emulation engines (Hercules, for one) and saying that "Linux on the S/390 is just as much Linux as Linux/PPC, Linux m68k or Linux/Alpha...it's the stock kernel, rather than a subset or extension...if it comes with source, (chances are) you can build and run it with minimal effort." Then he talks about "Think Blue Linux" and the "Iron Penguin" and the fact that they have 420 RPMs available as binary installs.

    Now think a Beowulf cluster of 50,000 machines, installed in an hour, with each machine having available a minimum of 550Mbps PPP comm channel between the machines. Shared memory is no problem (think /usr, /bin, and /lib shared across the cluster at bus speed). More shared/pooled memory for PVM/MPICH/DCE or CORBA (pick 'em). 50K Apache/JSERVE/mod_PHP/mod_PERL/mod_so/mod_REBOL machines with virtually infinite (as compared to PC's) memory/resources that never need to be backed up (done by the VM) and never (NEVER!) hard fail. There are 420 RPMs (and growing) available for this platform and the resources who can understand and continue this development exist in almost every company of any size in the world. YUM YUM YUM!!!! WORLD DOMINATION!!!!

    What was your question?????
  • Many of the newer IBM mainframe machines are air cooled (no water chiller needed) and could probably be run in a normal house with a decent air conditioning system.

  • I think I covered this in point #1; we can already run multiple instances of linux in virtual machine mode. The subject of this thread is a *native* port....
  • It runs in native mode (bare metal). Done that. It runs in LPAR - done that, too. And it runs as a guest under VM - do that all day every day. It's good stuff.

    Linux on S390 is an ASCII machine - USS under OS/390 is EBCDIC - that's often a nightmare when trying to port software - ask me, I know - it's my day job!

    Linux on S390 supports ext2fs. It's reasonable to expect there will be a CMS filesystem driver some day as well.

    All in all, Linux on S390 is pretty darn good.
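    The ASCII/EBCDIC gap described above is easy to demonstrate. Here's a minimal Python sketch using the cp037 codec (one common US EBCDIC code page; real mainframe sites may use other code pages):

```python
# The same text yields entirely different byte streams in ASCII vs EBCDIC,
# which is why byte-for-byte file transfer between USS and Linux mangles text.
# cp037 is one common US EBCDIC code page; mainframe sites vary.
text = "Hello, S/390!"
ebcdic = text.encode("cp037")   # bytes as OS/390 USS tools would store them
ascii_ = text.encode("ascii")   # bytes as Linux on S/390 stores them

print(ebcdic.hex())
print(ascii_.hex())
assert ebcdic != ascii_                  # the encodings barely overlap
assert ebcdic.decode("cp037") == text    # round-trips cleanly within cp037
```

    Porting tools therefore have to convert at every file and socket boundary, which is the nightmare the poster is describing.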
  • The port, done by Alan and the kernel team with input from Princeton U., exists in every kernel after 2.2.13 and is stock in every source image since 2.2.14. Keep in mind that without Linus' say-so, it ain't in the kernel and it ain't Linux!

    Marist College and Princeton University did it first. I don't know where Dr. Vepstas (the guy who did it in a lunch hour) teaches, but I don't think he's an IBM fellow.

    IBM arrived to this, like Apache, late. However, they saw the value in how this was done, saw it worked and decided (rightly) not to screw with it.


  • While I'd agree that Linux on S390 doesn't need CICS to be useful, CICS isn't strictly a mainframe thing. IBM has versions of CICS for AIX and OS/2. There are 3rd party software developers who do products like UniKix which is a CICS emulator for many of the commercial UNIXes. There isn't any reason why one of those couldn't be ported to Linux on S390.

  • The reliability of a mainframe comes not from the software (though that is a factor) but from the hardware.

    Wrong. It comes from redundancy built into both the hardware and software. The hardware is only as reliable as the software running on it and vice versa.


  • Umm, it's MVS, not VMS. MVS is run by 390 machines, VMS is run by VAXes and Digital Alpha chips. -Steve
  • It's called ThinkBlue Linux, or Iron Penguin.
  • TurboLinux already has an Itanium port, and it was out well before RedHat's.
  • This is a very smooth move on IBM's part. I would've never imagined IBM and Linux ever being mentioned in the same breath...they seem like such polar opposites, y'know?

    I work on a fairly small mainframe (oxymoron) running OS/390, programming COBOL loan systems at a small co-op bank--I've got a PII/450 running W98 on my desk that only gets used as a 3270 emulator (and Web browser when the boss isn't looking). There's nothing new, sexy, or "hip" about S/390 mainframes, but they are dead solid reliable. They just don't break.

    So of course our CIO is hellbent on scrapping our ugly but functional COBOL-based systems for the Brave New World of client/server, VB 6, SQL Server 7, Microsoft everything, distributed processing, blah blah blah. Forget what works, it isn't New! and Improved! so we dump it and go for eye candy. This is why I like seeing stuff like this--keep that super-reliable raised-floor gear but bring it forward into the 21st century, or at least the mid-90s.

    This sure seems like the best of both worlds--mainframe reliability and *nix flexibility. Now, when the heavy iron operations people meet the Linux geeks, that is gonna be fun...

  • On the website, the S/390 looks to be no bigger than an ordinary PC - or is that just the way the pic looks? Also, how much are they? It's exciting to think that linux may be the instrument to bring greater variety to the hardware world ... it's already doing so, but slowly.
  • While the availability of our familiar distributions is nice, it's not that important in the scheme of things. Of more significance, by far, is the fact that IBM is now officially supporting the S/390 port. It had previously been available for free download, but that's not going to convince an IT manager to install it as a mission critical system. The fact that it is now supported by IGS (IBM Global Services), however, is likely to make them sit up and take notice. IBM offering the same consulting and implementation services that they offer for other S/390 OSes is a major boost. It makes Linux/S390 into a mainstream platform. The importance of that shouldn't be underestimated.
  • I think you can get S/390 cards for your PC. You won't get the mainframe reliability - which is the whole point of having a mainframe - but you will get a good chunk of the speed.

    Even an S/390 emulator might run Linux at reasonable speeds, if you run the emulator on a modern PC or workstation.
  • The average S/390 is capable of running numerous virtual Linux boxes at once (several thousand I read). For major software developers (read megabucks budget), any or all of the distros could be loaded into a VM to test the product.

    Having thousands of concurrent Linux boxes running offers another option. Web server software to date has been developed on the basis of multiple users using a single server. OK, there is load balancing, but this doesn't alter the paradigm, it merely loosens a few of the constraints. The S/390 opens up the option of each user connected to the site having one (or more) virtual Linux boxes of their very own to process their transactions (or whatever). What could your e-commerce site do with that ?
  • Actually the point of the articles and press releases is that Linux will now run natively on an S/390 and not just inside a virtual machine on OS/390. That is pretty damn cool :)

    Yeah, that is pretty damn cool, but not terribly useful. I think it would be much more useful running under a virtual machine. Linux just can't possibly take full advantage of that hardware, what with all the billions of CPUs (err... maybe not billions :-) and crazy I/O channels.

    I'd love to see some performance specs on Linux vs OS/390!

    I'm sure it would get creamed. Except for Intel chips (maybe others too, but I know not Alpha or PPC), Linux is generally not as fast on the hardware as operating systems which were designed specifically for that hardware.

    On a related off-topic topic (?), does anybody know if Compaq is still doing that free Tru64 for personal use thing? I'd love to give that a test drive on my Alpha (or, if nothing else, just steal the math libs and compiler to run under Linux with the Tru64 emulation libs :-)
  • Yes, I agree with AC. Will Volkerding put S/390 support in the Slackware dist?

    Linux rocks!!! []
  • I want to see you run a mainframe in your house.

    I've got three-phase in my house, don't you?

  • A few of you have indicated that this instance would be a guest OS, probably under VM. That fact isn't made clear in the announcement, and AFAIK it is not a requirement for running Linux on an S390 server. That is, one of the big hurdles to running Linux on S390 is NOT to run it as a VM guest but instead to run it native on the HW. The big problem is to support native IO, channels and whatnot. The other problem is that Linux does not support any 'native' S390 FS, so you have to implement some extensions to the OS in order to overlay a Linux FS on top of the basic FS functions on the S390. This, instead of attempting to create PDSs and so on within Linux. All of this work seems to be a follow-on to work directed at doing more or less the same thing for AIX on S390, which has been around for several years. In fact OS/390 pretty much IS a Unix variant running on S390 - at least it's certified that way.
  • RS/6000, now S/390... What kind of reasons did you expect? That is IBM. They actually think different:

    Wow, everybody does Linux! We should not stay out of this. We should do some Linux too. Let's close this project, this and that, then spend $x.xxE6 on this and see.

  • Heh, just call me picky today... :-)

    ``You can still buy VAX architecture systems from Compaq blah, blah, blah...

    er, so? how's that an error in my comment (which was also marked AFAIK)''

    It surprises me how most people don't remember that VMS was born on the VAX architecture. Not too different from the folks who think that PCs started with Pentiums. I'll concede that it was more of an oversight or omission than an error.

    ``a FAT-like file system and drive letters on steroids.. BARF. And I dunno who makes Compaq's hardware, but it bites.''

    Methinks you're referring to an older version of VMS. It's had a journaled filesystem (the Spiralog file system) since v7.0 or maybe the later v6.x releases (if memory serves). Don't ask me about it though; we never used it at the last VMS site I worked at, though I know some people swear by it (while others swear at it). It's not IBM's jfs, DEC's advfs, or whatever Veritas's is called, but it's not really FAT-like either, or so I understand. Now Files-11 is/was sort of FATty, but I've never actually had a corrupted file on a Files-11 disk, while I've had, and seen countless other people have too, plenty of experience losing files on FAT-based filesystems (via the infamous cross-linked cluster problem). I suspect, though, that most of those losses were due to essentially being root on the PC when using DOS/Windows and its propensity to crash. If I ran around on VMS systems with BYPASS privilege turned on all the time, I would expect more problems.

    As for the drive letters: I never found the drive naming to be a problem. It was tons better than PCs had at the time. I suppose it depends on what you used first. Personally, I'd find it somewhat annoying to go back to drive letters nowadays. In fact, I currently find it annoying as hell that Linux still uses sequentially assigned drive letters in the SCSI subsystem when other, more transparent, naming conventions exist (especially, since I was using a PC UNIX in the early '90s that didn't have this limitation). One wonders why the kernel developers seem to hate the way System V handles SCSI devices. Oh, well.

    Just what are the complaints about Compaq's hardware? I've never heard anyone complain about the VAX and Alpha hardware before other than about the price. Now Compaq's PC hardware? That's another story and I do know of techs who will say it ``bites''.

    ``our main cluster has three nodes, and it has yet to compensate for a single hardware failure (even some HDD failures have crippled the thing, and they're supposed to be RAID!!)''

    Not sure what you're driving at with this comment. What kind of ``cluster'' is this, I wonder. It sounds like you mean hardware fault tolerance. Buy a Tandem or a Stratus. They're hard to beat FT-wise. Of course, you gotta have some pretty deep pockets to consider those hardware platforms.

    rtscts? Gee, I'm still getting by with xonxoff.


  • According to most recent polls, Debian is either the most popular or second most popular Linux distro.
  • Anyone who received a Linux Journal in the last few days will notice there is a HUGE article about the S/390 and the Linux port done by both IBM and the Open Source community.

    It also goes on to mention that in the IBM port, there are a couple of core level modules (I'm not much with an S/390 so not sure what they are) that are object code only, IOW, not open source.

    I have no problem with this other than the fact that Debian, which is mentioned as having a port in the works (the writer of the article is a core Debian maintainer), prides itself on being a "Free Software Only" distribution. I'd really like to hear some comments on why that is or should be any different for this case.

    The big advantage from what I can tell is not even running linux as the core OS, but running it under the very powerful VM mechanisms in the S/390 (the article explains how the VM is actually tied into the HARDWARE, which just plain rules). That allowed the writer of the full free software kernel port (which is not finished; IBM did a private port and announced it this month) to START OVER *41,000* COPIES OF LINUX + THE APACHE WEBSERVER! Good LORD!

    So don't kid yourself, with native power like that, no one is going to even bother running linux standalone on one of these things, not to mention there is much cheaper, adequate hardware that will run linux by itself - a S/390 is a very high end machine and chances are it's going to be a cold day in hell before the suits let you throw your "Free OS" directly on the machine.

    But that doesn't matter, from the article I gathered it's going to be much more popular in a VM. :)

  • Very interesting point about the recruiting of S/390 programmers. I'm probably one of the last of the COBOL Mohicans--got out of school in 1987 with a degree and scads of COBOL training and went right to work on IBM heavy iron. Now it's 13 years later and at age 33 I'm the youngest mainframe programmer I know.

    Most 390s are probably still running old COBOL stuff, that's my guess. And the knowledge pool for that old software is drying up. So if a business can keep that monster humming in the basement and shift to a "modern" language, thus allowing them to recruit new talent...hey, why not? And maybe they can teach a couple of the new guys some COBOL to keep the old applications running until they can be ported over. :)

  • Well I've played on it at work - lynx, apache and samba all compiled fine.

    It looks mostly like linux on intel - configure can barf when it sees *-s390-* as the host to configure against, but that's an application configuration problem, easily worked round.
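    The configure hiccup described above is just host-triple pattern matching. Here's a toy Python illustration of the failure mode (the triple spelling and the pattern list are illustrative assumptions, not taken from any real config.sub):

```python
# Toy model of configure-style host-triple recognition. Old scripts had no
# pattern for s390, so they fell through to "unknown host" and bailed out.
# Patterns and the triple spelling below are illustrative assumptions.
from fnmatch import fnmatch

KNOWN = ["i?86-*-linux*", "alpha-*-linux*", "sparc-*-linux*"]
FIXED = KNOWN + ["s390-*-linux*"]   # the one-line "easily worked round" fix

def recognized(host: str, patterns: list) -> bool:
    return any(fnmatch(host, p) for p in patterns)

host = "s390-ibm-linux"
print(recognized(host, KNOWN))   # False: configure barfs
print(recognized(host, FIXED))   # True: build proceeds
```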
  • Why the hell was this moderated up? It sure as hell isn't insightful, every article posted on /. about Linux on the 390 has said the same thing as this guy, only much better. No offense to Sp0ng or anything but at least a couple moderators must be smoking crack.
  • by FascDot Killed My Pr ( 24021 ) on Thursday May 18, 2000 @02:24AM (#1064396)
    If you give me a free S/390 I'll do a review for you...
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • Caveat: I know absolutely nothing about the S/390.

    This doesn't relate to how fast the actual computers are. Most of the supercomputers I use run under 300 MHz (even the ones purchased this year). Of course, most of those supercomputers are actually glorified clusters...

    You are right, MHz isn't everything. For the 390, it is fairly telling. I don't recall any superscalar implementations. I don't believe it has a fused FP multiply and add. For FP it just isn't all that fast, nor even for 32 bit integer math. On the other hand, its BCD math can be quite fast (single instruction for most ops, something like 40 or 60ish digits supported in hardware, more possible with OS assist). Translate table instructions.

    Of course, speed all depends on what the job is. In my case I run memory/floating point intensive quantum mechanics calculations.

    If it's I/O you need fast, a 390 can do it. If it is something else you need fast, there is almost certainly another computer that will do it better. Frequently even made by IBM :-)

    For FP Fortran code, I would guess an Alpha would really run much faster (in absolute terms, or per dollar) than a 390, unless you have quite a bit of I/O going on. Even then, maybe. I'm sure Compaq has an F90 compiler.

    Of course the 390 is very very very good at making sure that the answer you get is right. It's somewhat fault tolerant, but more importantly for many applications it actually notices many kinds of breakage and will let you know about them rather than blundering on with the wrong answers.
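    The BCD math mentioned above operates on packed-decimal data. Here's a rough Python sketch of the format (sign nibble 0xC/0xD per the usual IBM convention; this is an illustration, not a faithful model of the hardware's operand limits):

```python
# Hedged sketch of S/390-style packed decimal (BCD): one digit per nibble,
# with a trailing sign nibble (0xC positive, 0xD negative, by IBM convention).
# Illustration only; hardware operand-length limits are not modeled.
def pack_decimal(n: int) -> bytes:
    digits = str(abs(n))
    if len(digits) % 2 == 0:        # pad to an odd digit count so that
        digits = "0" + digits       # digits + sign nibble fill whole bytes
    nibbles = [int(d) for d in digits] + [0xC if n >= 0 else 0xD]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

def unpack_decimal(b: bytes) -> int:
    nibbles = [x for byte in b for x in (byte >> 4, byte & 0xF)]
    sign = -1 if nibbles[-1] == 0xD else 1
    return sign * int("".join(str(d) for d in nibbles[:-1]))

print(pack_decimal(-1234).hex())   # 01234d
assert unpack_decimal(pack_decimal(-1234)) == -1234
```

    On the hardware, a single instruction adds or multiplies values in this form directly, which is why the poster calls the BCD path fast.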

  • There may be some life left in my Apple ][ yet!

  • Redhat's been pretty busy with the Itanium port. Maybe no time for this one ... yet?

  • So what's a sysadmin to do when registering one of these with The Linux Counter []?

    At this point there are 76148 machines registered; one of these could increase this number by 50%!
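    For what it's worth, the arithmetic roughly checks out; a quick back-of-the-envelope in Python (the 41,000-instance figure is borrowed from another comment in this thread, so treat it as an assumption):

```python
# Rough check of the "increase by 50%" claim, using the 41,000-Linux-images
# demo figure quoted elsewhere in this thread (an assumption; demos varied).
registered = 76148   # machines on The Linux Counter per the post above
instances = 41000    # Linux images one S/390 reportedly hosted at once
print(round(100 * instances / registered))   # a bit over 50 percent
```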

  • Any imaginable? Uhh, no. For highly parallel tasks that don't need to communicate much with each other (many science apps), the speed of the cluster scales pretty much linearly with the number of machines, and there really isn't much limit on the size of the cluster.

    I can imagine some really big clusters.

    However, I will grant that there are a great many computational problems that don't parallelize that way, and for some of those it may very well be impossible to build a cluster that's faster than a top-end mainframe (or supercomputer).

    All depends on your workload...

  • I work at a 1/2-billion-dollar bank (granted, not huge by any means) and we are running on a Unisys mainframe running MCP/Unixware. Right after we bought our mainframe, Unisys started coming out with their ClearPath mainframes, which have two operating environments, MCP/Unixware and NT, running on the same hardware. The new mainframes go up to 32 CPUs and can dynamically reallocate CPUs to either environment. This all sounds pretty cool, but it is NT. The upside is that the mainframe programs do communicate with NT servers, and if you do it internally it is much faster.

    I have had some experience with IBM AS/400's and I know that they are rock solid and I hear the same things about the S/390's. I don't know why you wouldn't run a VM Linux session to at least check it out. There are a lot scarier things out there than Linux on mainframes!!

  • I heard that they were going to phase out totally in favor of Linux.

    I wonder what the banks will think of that?
  • I work for one of the largest banks in Canada. We have 6 OS/390 servers: one production, one development/backup, and a Y2K one (which is getting hauled off in a month, I think). We also have a similar setup at an alternate site 30 miles away. The systems are all located on a raised floor along with 100 Sun 4500s, VAX, Tandem, etc. We have close to 1 billion dollars worth of hardware at each site, plus 20 Nearline STK LSMs as well. Do you really think we're not protected against power surges? Also we have these huge generators in the basement as well. (Actually pretty cool if you ever saw them - something out of a sci-fi movie.)
  • This is an interesting concept... What about redundancy for integrity? Not to play the devil's advocate here, but assume that you are running Linux on an S/390, and doing the work of 400 Sun servers (figure courtesy of []), and there is one HUGE power spike and the S/390 goes down. Goodbye enterprise, because that was your ONLY system. Sure, you can run 41,000 copies of Linux on one, but it's still only one machine. In short, there are advantages (one system provides simplicity for maintenance) and disadvantages (as stated above) associated with the return of the mainframe computer. As for the idea of IBM phasing all of their OSes out in favor of Linux: good luck. With some of the bank tellers I've dealt with, I'll be pleasantly surprised if they would even be able to handle the idea of Linux in their place of business. That's all I have to say about that.

  • I guess my stupid question is: why? What advantages are there to running Linux on a mainframe?
  • > There are several thousand supported instructions on IBM's assembler for OS/390.

    Are you talking directives provided by the assembler or S/390 instructions? You can write an S/390 disassembler in less than several thousand lines including tables. Maybe you're talking about assembly language macros which interface with operating system services (e.g., STORAGE, WTO, ATTACH, etc.)?
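    To back up the line-count claim, a table-driven decoder really is compact. Here's a tiny Python sketch (the mnemonic table is a small illustrative subset; the length rule from the top two opcode bits is the standard S/390 scheme):

```python
# Minimal table-driven S/390 decode sketch. On S/390 the top two bits of
# the first opcode byte give the instruction length: 00 -> 2 bytes,
# 01/10 -> 4 bytes, 11 -> 6 bytes. The mnemonic table below covers only a
# few common instructions; a real table would have hundreds of entries.
OPCODES = {0x05: "BALR", 0x07: "BCR", 0x18: "LR", 0x1A: "AR",
           0x41: "LA", 0x47: "BC", 0x50: "ST", 0x58: "L", 0x5A: "A"}

def insn_length(opcode: int) -> int:
    return {0b00: 2, 0b01: 4, 0b10: 4, 0b11: 6}[opcode >> 6]

def disassemble(code: bytes):
    i, out = 0, []
    while i < len(code):
        n = insn_length(code[i])
        out.append((OPCODES.get(code[i], "?"), code[i:i + n].hex()))
        i += n
    return out

# LR 1,2 ; L 3,0(0,4) ; BCR 15,14  (hand-assembled examples)
print(disassemble(bytes.fromhex("1812" "58304000" "07fe")))
```

    Operand formatting and the full opcode tables are where the remaining lines go, but the structure stays this simple.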
  • Actually, I want Debian. It's the one distribution that you can get for every single platform (that I can think of anyway).

    It's an administrator's dream.
  • Being a person trained to use JCL, I welcome the addition of Linux as an OS for S/390. Should make things a little easier to do, anyway.
  • Don't laugh. It may not be Linux but....

    Lunix for the Commodore 64 and 128.

  • Well, let's see. Ever since my 390 replaced my refrigerator in my kitchen, I've been looking for something a little less cumbersome than MVS and cheaper than OE. This is a godsend... OK, really... The only people who could afford (or at least have a use for) a 390 aren't saving that much cash by running Linux. Is it worth the risk? A high end company invests in a 390 for its mission critical enterprise solutions (..insert other buzz words here..). Are they going to use a tried and true OS/390 that's been hacked and rehacked for 30+ years and is possibly the most stable OS around, or are they going to go with the new Linux port?
  • SuSE must have taken into account the possibility of running thousands of Linux instances on every installed S/390 to decide whether there would be enough users to make a port worthwhile.

    If you allocate the equivalent of a c64 to each user, S/390 might very well surpass the i386 world Linux userbase.

    I challenge you all to download the ISO in the name of Discordia. Fnord

  • Now I have to clean out my basement. There's nowhere else in the house big enough for an S/390 and the disk farm that I'm going to want attached to it. And I'm definitely going to need something faster than a single cable modem. My heating problems are solved!
  • by Jeff Mahoney ( 11112 ) on Thursday May 18, 2000 @02:56AM (#1064415)
    Granted I don't work for IBM, but I'd hazard a guess there isn't a snowball's chance in hell of that happening.

    From the few commercial UNIX vendors that have been offering Linux support - NONE, AFAIK, have announced any plans to migrate completely.

    I've posted this before, but Linux just isn't there yet for the enterprise environment. Sure, some sites are deploying it in that environment, but major feature sets that enterprise users demand aren't implemented. Personally, I find admins that deploy Linux in a "mission critical" environment irresponsible. [*] I've seen a few clustering packages for Linux, and quite bluntly - they all suck at this point.

    I *am* a Linux user. I have been since 1995. But I'm also a realist, and an admin in the enterprise environment. While I might consider deploying Linux for a small non-critical system (like my workstation), I wouldn't dream of deploying it for "critical" applications.

    Now IBM migrating to Linux on the S/390 is just an entirely different argument. Not only are you suggesting that IBM would migrate to Linux, you're suggesting that IBM would dump its huge investment in a specifically NON-UNIX operating systems strategy.

    I know people that run S/390s with MVS, and I don't think they'd ever consider giving up the consistently proven reliability of MVS for anything UNIXish. Indeed, that's not even a problem with Linux, but UNIX in general. (Yes, UNIX is reliable - but not next to a mainframe)


    [*]: Unless they fake failover with something like the Cisco LocalDirector.
  • One thing about the OS390 port is that now you can say to management
    Linux scales from a single floppy disk right up to an OS390 mainframe, and can do useful work at both ends. A Linux-based router or simple firewall will fit on a floppy, and IBM now support Linux on OS390.
  • Linux has software reliability - and can be hardened even more. But the platforms it runs on die when the bits get dropped.

    On mainframes, no bits drop. (Actually, they do. But the mainframe fixes it and keeps on going. So as far as the software is concerned the computer is perfect.)

    Now suppose you want to do a reliable web server for an enterprise:

    - You could do a farm of PCs running Linux and Apache. But when a processor fails you lose the transactions in progress there.

    - You could port Apache to (or write a web server for) an ordinary mainframe OS.

    - You could port Apache to a mainframe Unix. (Has been done for UTS - Amdahl's mainframe SVR4. But while that will run on IBM mainframes it isn't from IBM.)

    - You could port Linux to a mainframe. Apache and EVERYTHING ELSE UNIX/LINUX comes along for free.

    Lots of other uses for Linux on a mainframe, of course. Mainframe reliability, capacity, and speed, combined with Linux reliability and functionality, is a powerful combination. But I bet enterprise-reliable web servers are the first "Killer App".
  • I think the whole point is, you'd have one machine doing nearly everything. Have one LPAR running DB2, maybe running some legacy CICS/COBOL apps; have another one running Linux as a webserver / Samba server. It can talk to the DB2 server with a ridiculously high-bandwidth connection. Need another server for something, or want a test environment? Just fire off another one.

    MVS, while unpleasant at best from a user's perspective, has power roughly equivalent to an aircraft carrier. If you want to abuse that metaphor a little (and I do), maybe linux is the fleet of planes based off that carrier?

    Anyway, we have some big iron here where I (for the next week at least) work, and there are aspects of it that rock nads. It's uber-reliable, and relatively low cost, in some respects. For companies that already use mainframes, this is simply beyond cool. We all know how well linux integrates with disparate systems. MVS is about as poor at that as linux is good, so this could be a piece which ties everything together.
  • The only problem with porting Linux to a mainframe is that Linux is probably too slow. I work with an S/390 at Northern Illinois University, and have experienced firsthand just how slow mainframes run. Contrary to popular belief, mainframes are not fast machines - the average Sun server can run circles around a mainframe in terms of instructions executed per second. However, mainframes make up for slow processor speed with massive IO capabilities - and given that most data processing tasks are IO bound, this is good design.

    However, Linux wasn't designed to be run on systems where every single processor cycle counts, and MVS was. Granted, MVS is a piece of junk from the user's perspective, and I would rather run Linux any day. But I don't think that Linux will make successful inroads into the mainframe community simply because it is a processor-cycle-intensive operating system; this isn't a problem on PCs, which have processor cycles to waste, but on a mainframe, where every clock cycle counts, Linux would probably be more of a drag on the system than anything else.

    Think about it - if you have to process 250 million records, you don't need the OS taking away any more clock cycles than absolutely necessary, and a kernel written in C with portability in mind can't possibly be as efficient as one written specifically for the hardware (and probably in assembler).
  • They're actually quite large.

    Depends on the "they" to which you're referring, and on what "quite large" means. According to the S/390 Multiprise 3000 Reference Guide [], the "Base (CPC) Frame" for said machine is 520mm wide, 1110mm deep, and 819mm high, or 20" wide, 43" deep, 31.5" high for us Yanks, and, according to this page from the S/390 Integrated Server Technical Application Brief [], that box is about the same size (533mm wide, 1038mm deep, 819mm high).

  • should two separate teams even be working on this?

    No, of course not.

    The same applies to Linux/x86, of course; let's pick the one distribution that should be the only one on x86. :-)

    As far as I know, the "port", in the sense of "kernel, glibc, compiler, binutils (and perhaps gdb)" (i.e., the part of a Linux distribution that contains the most platform-dependent code - other than perhaps the X server) has already been done, just as it's been done for x86, Alpha, etc.; I presume what TurboLinux, SuSE, etc. will be doing will be combining that with their distributions to make S/390 versions, to go along with versions for whatever other processors the distributors in question support.

  • 1. We've been able to run Linux on S/390s for a long time, in virtual machine mode. This is pretty neat but has limited practical usage.

    2. IBM has had Linux available for the S/390 since January!

    3. A mainframe is VERY expensive to purchase and maintain (or lease). Who's going to make that kind of investment to run an operating system that wastes a good percentage of those expensive MIPS when OS/390 (MVS) does a much better job?

    4. Mainframes are used for mission-critical, enterprise-level processing. Who's going to tell the 25,000 users who depend on the mainframe to do their jobs that we're going to switch operating systems and then rewrite or recompile the 18,000 jobs and associated programs that execute every night? Not me. I like my job.

    *--> "Go away or I shall taunt you a second time!" *-->
  • by Loundry ( 4143 ) on Thursday May 18, 2000 @02:58AM (#1064437) Journal

    I knew nothing about mainframes until I worked at a shop where one was used. Coming from a Windoze/UNIX background I was really, really surprised to learn that there is this whole other mainframe universe in which there are many people working, coding, and living as if Windoze and UNIX didn't even exist. (Well, of course they're all aware of Microsoft.)

    I got to learn a little bit about OS/390 (the operating system which runs on those mainframes) and it's a nightmare (in this UNIX bigot's opinion). lrecl, fb or vb, PDSes, GDGs, ftp commands like 'put BFDG.XD.DIWDOS(+1)', ISPF, forty thousand acronyms, gawd. From what I understand, IBM didn't even consider supporting TCP/IP until about ten years ago or so -- for a very Microsoft reason: they don't want to support any protocols they can't control (see also Direct3D vs. OpenGL and Kerberos). There are several thousand supported instructions in IBM's assembler for OS/390. This is because there was such a huge number of assembler programmers for OS/390 that IBM kept adding instructions to make programming easier. If I understand correctly, I think there is even a "print" instruction in OS/390 assembler.

    90% of IBM's products =~ m|\w\w?/\d{1,4}|;

    But the IBM of today is, what appears to me, a very different company. The prospect of running Linux on IBM is, in my mind, revolutionary for IBM. The prospects of Linux on IBM look really cool -- kind of like compacting hundreds of linux boxen into one big, black, airstreamed box with a big, red, candylike power switch that screams "Flip me!" So I think this is great. The more Linux, the better.

  • Agreed.
    And hey, it's not just banks. I know of insurance companies, airplane manufacturers, grocery store chains, universities, brokers, construction companies, etc. that all use S/390s.

  • Sorry, but most fortune 500 shops rely upon IBM mainframes to crunch through the data in their core business applications.

    Although Linux is an elegant OS with a bright future, at the moment it suffers from youth and the deficiencies of its original platform: the PC.

    1. Raw I/O throughput. The strength of a mainframe resides primarily in its enormous capacity to move data through I/O channels. Separate I/O controllers handle most devices (like the I2O architecture), so the main CPUs are free to focus on computing tasks. The PC is not even in the same league--yet.

    2. Advanced enterprise features, such as hierarchical storage management. Although Linux is moving towards LVM (Linux Volume Manager) to handle disk space, the mainframe data management facilities go a step further: the OS will automatically migrate unused data from "small," fast hard drives to slower, larger hard drives, and finally to removable tape storage. This means that, unlike Linux, where we manage mount points and disk partitions, the OS takes care of moving data around on all of its volumes to ensure best access. To the user (and an administrator), the sum total of all available hard disks and all cataloged tapes represents the complete collection of available data: terabytes upon terabytes of storage!

    3. Another advanced feature: machine partitioning. Although incorporation of the User Mode Kernel is a step in the right direction, OS/390 (and high-end UNIX platforms, such as Sun's, as well) allow an administrator to _partition_ a machine into completely isolated units, or partitions. This is not to be confused with the Virtual Machine capability much discussed with Linux on S/390: a partition is simply a fixed allocation of CPUs, memory, and I/O devices to an instance of a running OS/390 system.

    What that means is 1 box may be split into multiple partitions, and each partition may have completely separate disk drives, memory, CPUs, etc. Basically, each partition becomes its own machine, which can be useful for segregating activities onto different sets of resources (e.g., a test or development partition and a production partition). S/390 can do this because of hardware support, but unfortunately, efforts such as the User Mode Kernel do not achieve quite the same results: the "partitions" or "user mode kernels" still share the same underlying kernel data structures. If one UMK craps out, it could potentially bring down the whole machine.

    Of course, give the Linux/Open Source community another 6 months, and it will solve all of these in spades. ;)
  • I can't imagine IBM making that big of a commitment to remove OS/390 on such a successful product as the S/390. There's tons of S/390s out there running OS/390 and doing just fine.

    This is not to say Linux won't be good. However, reliability is one of its big selling points, and I'm not sure many of IBM's current S/390 customers will want to just hop into Linux when they already have a proven product.
  • and second, they shouldn't be doing it for business reasons, since there already is UNIX on os/390

    Meaning a native port of some flavor of UNIX, or S/390 Open Edition? If the latter, then you may already have given the reason:

    It is very, very strange as UNIX goes.

    meaning it may be easier to put Linux on an S/390 (or in a virtual machine or logical partition on an S/390) than to put some New Economy Dot Com applications on Open Edition.

    Unfortunately, none of the architecture-dependent GNU utilities will compile on this beast, since the hardware isn't even similar to anything unix boxes are used to running on.

    General-register-based architecture, 16 general-purpose registers, 4 (or is it 8 or more, now?) floating-point registers, memory-to-register and register-to-register arithmetic instructions - not all that different from VAXes, 68Ks, x86's; it's just another general-register-based CISC box. (Yeah, it has specialized instructions, but so do the other CISCs for which GCC generates code; you don't necessarily have to use them.)

    The relatively short offsets in instructions may be the biggest problem.

    If suse is going to port linux

    Linux has already been ported; presumably SuSE and TurboLinux will be integrating the kernel, glibc, GCC, binutils, GDB, etc. changes into their distributions.

    they may encounter the hardest part in porting things like gas and gcc, since AFAIK they don't know how to spit out binary for this CPU as of now.

    There's been S/370 support in GCC for a while, as I remember; the S/3x0 config directory of the EGCS source [] includes notes and checkins that suggest support (e.g., the 1.3 version of the README file [] says that it currently "supports three different styles of assembly", including MVS using the HLASM assembler, S/390 Open Edition, and "ELF/Linux for use with the binutils/gas GNU assembler").

    There's also, in the GAS CVS tree [], tc-i370.c [] and tc-i370.h [] files (which are for S/360 and S/390 as well as S/370, according to the comment).

  • Yes and no. The Linux kernel was ported to the MF. Linux does I/O pretty well, so without actually trying it I could not say. Hmm, which is faster, COBOL or C? Most of what you typically do on a mainframe is in COBOL. Yuch! I too have worked on MF and find them to be real MF. No, they are not that fast. That is because of the way the OS works.

    UNIX for years has been able to run on MF, and yes, very few people run UNIX on MF, I guess. This is more of just a show that Linux can run anywhere. Also, I think that when they ported it to the MF they would take into consideration that MF are more I/O than CPU, and that is why they have an s390 branch in the kernel.

    Think about it this way: if you can run Linux on a MF then you can start to port all its applications over to the MF. Also, with MF you can partition regions, so that in one region you have Linux and in another you have MVS. This would be good for someone who wanted to run, let's say, Oracle, but had a MF. They could keep the hardware and have a less expensive software replacement.

    Just my .02 cents though

    send flames > /dev/null

  • I suppose that building a mainframe distribution is very costly. (I suppose they need at least one expensive actual mainframe to do the testing, don't they?)

    And then, since it is mainly GPL software, you could buy just one copy (disc? tape?) of a distro and install it in all the virtual machines of all the mainframes in the company. So you have a maximum number of sales as big as the number of Data Processing departments that run S/390s. I expect this number to be small, at least, compared to the number of individual-owned PCs.

    So I think that the number of sales of these distributions has to be very low (comparing to PC distros). And media sales is the main revenue of distribution makers.

    Am I wrong?
  • This would undoubtedly be a neat hack, and nice to have, but first, I'm guessing this is going to impact roughly 0.02% of slashdot, and second, they shouldn't be doing it for business reasons, since there already is UNIX on os/390.

    At my job I spend a lot of time working with IBM's Open Edition, which is a UNIX that IBM implemented on top of os/390. It is very, very strange as UNIX goes. A lot of common things you associate with unix aren't there, like a password file and other common facilities. Things are put in very strange places on the filesystem, and the way the backend works (i.e., how it interfaces with MVS) is far weirder than weird. That said, it's a very interesting system that seems pretty stable.

    Unfortunately, none of the architecture-dependent GNU utilities will compile on this beast, since the hardware isn't even similar to anything unix boxes are used to running on. If suse is going to port linux, they may encounter the hardest part in porting things like gas and gcc, since AFAIK they don't know how to spit out binary for this CPU as of now.

    (FYI for people who aren't familiar with OS/390 - it's IBM's mainframe OS. These types of boxes generally start in the $60,000 range for one that probably isn't worth using, and range in price up to the multi-million dollar range. On the one I work on, each individual CPU costs $200K.)

    That's why I say probably most slashdot readers won't care. The vast majority of people never work with a mainframe, because the only people who can afford mainframes are large organizations. (The Federal Reserve has some bitchin huge mainframes.)

  • I always thought that the big distros that everyone knew about were Slackware, RedHat, SuSE, Debian, Mandrake and Caldera. Corel is a newcomer, and it wouldn't make sense for them to port a distro focused on usability to a mainframe. Caldera is probably losing its spot as a "Main Linux distribution", as it seems that people in the Linux community don't like them too much, and anytime they open their mouths people here seem to take the attitude "Caldera, shut the hell up!" I thought, though, that Debian was bringing a port over?

    So Linus, what are we doing tonight?

  • Fair enough: you are correct. That's the point of "user mode kernel:" if it's in user-space, it can crap out just like any other user-space app, so the rest of the system is safe.

    However, valuable as UMK is for some applications, it's not (yet) in the same league as mainframe partitioning.

    1. The OS and system software on mainframe partitions may be upgraded separately without affecting other active partitions. UMKs, too, can be upgraded, but if the real kernel has to be upgraded, the whole machine goes down.

    2. How well does the UMK protect the underlying real kernel (and thus other user-space apps) from excessive resource consumption? For example, if an errant application running in an active UMK (or just buggy code in the kernel used by the UMK) starts spawning threads like crazy, will the UMK protect the rest of the real machine from adverse effects?

    3. The UMK kernel, valuable as it is, is still limited in its ability to differ from the true underlying kernel. For example, can a kernel within a UMK provide a different thread-scheduling policy than the underlying real kernel?

    BTW: don't get me wrong, I love UMK and what it can become. It's quite an accomplishment to put such a thing together. Further, I may be incorrect about the completeness of its operation. However, my whole point is to emphasize how easy complete isolation of distinct partitions is in IBM's mainframe environment, and it's a feature that the Linux community may wish to emulate as it moves further into enterprise computing territory.
  • You'd be shocked to find that these machines are still being made, keeping up with the latest technology, and faster and more reliable than the best cluster systems.

    They aren't so much faster and more reliable than the latest cluster systems as they are actually the latest cluster systems. Oh, and far more reliable (lots of check logic; ECC on the cache and register files, not just main memory). Not so much the faster part, though. The CPUs only run a few hundred MHz (last year 300MHz to 400MHz was extremely fast for them). Of course, they have dedicated IO processors, and small tens of CPUs is a common size.

    Most of the innovation in clustering in the micro world is re-inventing what mainframes have already been doing. Then again, so were caches and out-of-order execution. That's not to say micros didn't invent anything, or that finding the right time to re-use what had gone before isn't hard in and of itself.

  • by delevant ( 133773 ) on Thursday May 18, 2000 @03:47AM (#1064474)
    This Linux port is not, generally speaking, intended for day-to-day JCL stuff. Nope. IBM wants ISPs to buy a single S/390 to run their server farms.

    Your basic S/390 can run 200-300 Linux server images under VM. Taking the usual uptime and hardware failure figures into consideration, these 200 Linux "servers" will be VASTLY more reliable than the equivalent "real" Linux servers. In any large hosting environment, you've got machines crashing hard every week -- the MTBF really comes back to bite you when you're dealing with hundreds of physical units.

    IBM doesn't think anybody in the world will go replace MVS with Linux. They're trying to grab the hosting market. Don't forget, when we talk about running 200 Linux servers, they're not talking about 200 hosting accounts -- they're talking about the equivalent of 200 actual servers, each of which would have bunches of hosting accounts on it.

    Nobody is going to switch their bank transaction stuff over to Linux. IBM's just aiming for Sun. Besides which, I'm sure they're thinking about eventual transitions, etc.

    . . . of course I could be wrong.
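    A quick back-of-the-envelope calculation makes the MTBF point above concrete. This is an illustrative sketch only: the 300-server count echoes the comment, but the 30,000-hour per-machine MTBF is an assumed number, not a figure from IBM or the poster.

```python
# Illustrative MTBF arithmetic (all numbers are assumptions): how often does
# a farm of commodity servers see a hardware failure, versus one big box?
HOURS_PER_WEEK = 24 * 7

def expected_failures_per_week(n_servers, mtbf_hours):
    # Assumes independent failures at a constant rate of 1/MTBF per machine.
    return n_servers * HOURS_PER_WEEK / mtbf_hours

farm = expected_failures_per_week(300, 30_000)    # 300 PC-class servers
one_box = expected_failures_per_week(1, 30_000)   # one consolidated machine

print(f"300-server farm: {farm:.2f} expected failures/week")
print(f"single machine:  {one_box:.4f} expected failures/week")
```

    With these assumed numbers, the farm averages nearly two hardware failures every week, while a single consolidated machine with the same per-unit MTBF sees one every few years -- exactly the scaling effect the comment describes.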

  • Unless they get sponsorship from some huge company. Not that they couldn't do it, just that mainframes are so hugely expensive, that I really doubt they could afford one, or that they own even a small one already. Possibly if some huge company "donated" an LPAR for a period of time it'd be possible, but otherwise I don't know how they'd actually get a hold of the platform to do it.

    (I hope I'm wrong about that. Debian kicks ass)

  • Why not just download the compilers and math libs that have been ported from Tru64 to Linux?

    Hmmm.... well, that's an idea. Although I'd still like to try Tru64.
  • This is the Final victory! The lowly adaptable geeks have finally conquered the mighty slow-moving Dinosaur Pen Big-Iron!!!!

    Let's celebrate!!!

    Here's my mirror []

  • Or at least that's the only thing I can think of. IBM doesn't want people to throw away MVS just because Linux has arrived on the scene.

    Instead, they want an ISP to buy a 390 and pop a couple hundred Linux "servers" on it, each with their own wad of hosting accounts.

    This might actually make sense, because if you're running a truly gigantic hosting farm then you're probably getting bit by MTBF on a regular basis. Using the 390 would significantly reduce your exposure to outages, etc.

    Now, since each Linux image would be a "real" server, even if it didn't exist in the physical world, the ISP could use their normal admins etc. -- they'd just need to hire somebody to run VM for them.

    This way they can run monster hosting farms on reliable hardware, probably save a fortune in power requirements (one 390 vs. 300 PC-level boxen), and they don't have to all start learning VM or MVS.

    It's obviously not for everyone, but I really do think it might be useful for lots of companies that wouldn't otherwise even think about mainframes.

  • by Dr. Sp0ng ( 24354 ) <mspong&gmail,com> on Thursday May 18, 2000 @02:26AM (#1064501) Homepage
    The S/390 is a big mainframe, correct? If so, then Linux on that thing is phat. Not as the main operating system, of course - but as another operating system running on the same machine. That's what's so cool about these things - you can run several operating systems at the same time on the hardware. So you could have the main OS serving up huge databases or whatever, and then have Linux with Apache serving up web pages, Samba serving up shares, etc. I want one.
  • The only major distributor that is missing here is Redhat.

    Does Debian have a proposed port? I'd love to load up Linux on my S/390 here at home... *dreaming* ;)

  • by Cee ( 22717 ) on Thursday May 18, 2000 @02:28AM (#1064503)
    The only major distributor that is missing here is Redhat.
    And Debian is not a major distribution? I think you should be a bit more impartial in your comments. "The only" is a quite strong expression.
  • On a point of pedantry, I don't see how SuSE and TurboLinux porting their distros to S/390 constitutes "Main Linux distributions port their Linux..". This is only two distributions. Similarly, I would argue with the statement "The only major distributor that is missing here is Redhat" - what about Debian, Mandrake, Caldera and Corel?

    I'm only playing Devil's advocate, but this sort of logic is only a small step away from that of people who think RedHat==Linux.

  • by Black Parrot ( 19622 ) on Thursday May 18, 2000 @02:32AM (#1064509)
    > Apart from a few academics running dinosaur equipment, who cares?

    Actually, mainframes are quite rare in academia, unless you want to count the registrar and business office. They are found most commonly in business environments: banks, corporate payroll systems, etc.

  • by finkployd ( 12902 ) on Thursday May 18, 2000 @02:33AM (#1064511) Homepage
    Apart from a few academics running dinosaur equipment, who cares?

    I'll bet you think large companies do all their computing on PC's huh?

    You'd be shocked to find that these machines are still being made, keeping up with the latest technology, and faster and more reliable than the best cluster systems.


  • by jms ( 11418 ) on Thursday May 18, 2000 @07:33AM (#1064512)
    From what I understand, IBM didn't even consider supporting TCP/IP until about ten years ago or so -- for a very Microsoft reason: they don't want to support any protocols they can't control

    I don't think that was why they ignored TCP/IP for so long. IBM wasn't playing the undocumented protocols game at the time, at least not on their mainframes. You can order manuals from IBM that exactly describe each and every detail of their communications protocols -- enough information to actually implement the protocols, and there were third-party hardware vendors who did just that.

    The issue with TCP/IP was more likely that it's a very CPU intensive protocol. It kills your performance. TCP/IP peppers the processor with a constant stream of little interrupts for each packet, and the internal design of OS/390 (and VM also) is optimized for a small number of interrupts that each do a lot of work. For instance, you can tell the hardware to scatter-read 200 blocks from disk into non-consecutive memory, then generate a single interrupt when finished. IBM terminals are designed so that the terminal buffers everything you type until you press the send key, then the terminal creates a single data stream that describes all the changes you made to the screen data, and sends it all at once, generating a single interrupt.

    It works a lot like slashdot. I'm typing away in the Comment window, making lots of changes as I go, but I'm not echoing each character off of the slashdot server. Instead, when I press preview or submit, everything I've typed is forwarded at once.

    The heavy interrupt rate of TCP/IP is a big issue. The main reason that a mainframe can support thousands of users, all sitting at 3270-like terminals, is that most people tend to spend their time doing things like moving their cursor around the screen, backspacing, using the arrow keys, and typing a lot of text, only pressing enter/send occasionally. When you're using ordinary telnet over TCP/IP, each time someone presses a key, the CPU is interrupted, has to wake up that user's editor process to handle the incoming character, and most likely echo the character back out. When you are using a 3270 editor like XEDIT, you can busily type an entire page, moving the cursor around the screen, inserting and deleting text, all the while your user process is completely idle -- maybe even swapped out -- on the mainframe, until you press return. This lets mainframes support much larger numbers of interactive users and TCP/IP would have broken that.
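    The batched-I/O idea described above (one request, many separate buffers, one completion) has a loose analogue in the POSIX readv() call, sketched here with Python's os.readv on a scratch file. This is only an illustration of the batching principle, not of S/390 channel programs themselves, and it assumes a POSIX system where os.readv is available.

```python
# Loose POSIX analogue of the batched-I/O idea above: one readv() call
# scatters data into several separate buffers and returns once, instead of
# one call (and one completion) per buffer.
import os
import tempfile

# Scratch file holding three 4-byte "blocks".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"AAAA" + b"BBBB" + b"CCCC")
    path = f.name

bufs = [bytearray(4), bytearray(4), bytearray(4)]  # non-contiguous targets
fd = os.open(path, os.O_RDONLY)
n = os.readv(fd, bufs)  # a single system call fills all three buffers
os.close(fd)
os.unlink(path)

print(n, bytes(bufs[0]), bytes(bufs[1]), bytes(bufs[2]))
```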

    Back then, IBM had their own networks, and they were all running mainframes and mainframe networking protocols. Educational sites like ours were mostly on an IBM hardware network called BITNET, with some cool features of its own (we had instant messaging in 1982!), and TCP/IP just wasn't important. The internet hadn't become important yet, and no one would have even considered degrading their mainframe performance by adding a TCP/IP stack unless there was a damn good reason, and there wasn't. IBM had devised more efficient protocols, optimized for their hardware model, and everyone was using them.

    One of the "features" of unix-like systems is that the TCP/IP stack is buried in the kernel, and the TCP/IP overhead is buried in the kernel overhead. On VM, the TCP/IP stack runs as its own process, and shows up in the process list, so you can see just how much of your CPU is being wasted on receiving and reassembling packets. It's a lot.

    When IBM finally started seriously supporting TCP/IP, they had a lot of trouble getting good performance, because it breaks their interrupt model. One of the products that came out of that was an outboard TCP/IP coprocessor -- a dedicated PC with an ethernet card and an IBM channel card. The PC would receive data from the ethernet, reassemble the packets, batch them up, and present a bunch of them to the processor at once, reducing the number of interrupts. TN3270 also helped -- TN3270 does what the 3270 hardware did -- buffers all of the user's screen changes, and keeps track of the cursor, lets you do inserts and deletes, and sends a summary of all the changes when you press return.

    IBM's spent more time and effort on their TCP/IP stacks now that they have become more important.

    There are several thousand supported instructions in IBM's assembler for OS/390. This is because there was such a huge number of assembler programmers for OS/390 that IBM kept adding instructions to make programming easier. If I understand correctly, I think there is even a "print" instruction in OS/390 assembler.

    The 370 instruction set is a fairly standard instruction set. It does have a handful of really oddball instructions, but certainly doesn't have thousands of instructions. What you are describing are the macro libraries. The traditional programming language for the IBM mainframes is and has always been 370 assembly language. The operating system provides extensive assembler macro libraries, and when you are programming, you use those macros in-line, so they look like instructions. There's an entire, fairly powerful programming language just for writing macros, because they are used so heavily by application code.

    But describing the contents of all those macro libraries as instructions is like saying the C programming language has thousands of instructions like printf() and strcpy(). Those macro libraries are the equivalent of the ".h" files in /usr/include.

    Yah, there are a lot of acronyms. If you thought you had a lot of macro names to keep track of, you should have tried a little VM internals programming. There is a two-volume, 1,500-page book of dense text describing tens of thousands of eight-character macro definitions and equates like "VMDIORBK" and subroutines with eight-character names like "HCPDSPCH".


    It took IBM until the 1990s to release an assembler that could handle symbols with more than 8 characters.

    Back in 1993, I downloaded whatever the latest Linux kernel was at the time (0.99pl10, I think), and just for grins, ran the IBM C compiler against the source. It truncated all of the function names down to 8 characters, and of the modules that did compile, the load module had over a thousand duplicate symbols. I started writing a huge macro with entries like:

    #define insert_vm_struct MMINSVMS

    to map each and every function name down to 8 characters. Eventually, I gave up in disgust. I knew that it wouldn't have worked without a huge amount of rework, but I just wanted to see how hard it would be to get a clean compile. Never got one.
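    The duplicate-symbol problem described above is easy to demonstrate: truncate kernel-style identifiers to 8 characters and see which ones collide. In this sketch, insert_vm_struct is the name from the comment; the other identifiers are made up for illustration.

```python
# Sketch of the 8-character truncation problem: map identifiers to their
# first 8 characters and report every short form claimed by more than one.
from collections import defaultdict

def truncation_collisions(names, limit=8):
    short_to_full = defaultdict(list)
    for name in names:
        short_to_full[name[:limit]].append(name)
    return {s: full for s, full in short_to_full.items() if len(full) > 1}

# insert_vm_struct is from the comment above; the rest are invented examples.
names = ["insert_vm_struct", "insert_vm_page", "do_mmap", "do_munmap"]
print(truncation_collisions(names))
# the two insert_vm_* names collide at "insert_v"; the do_m* names stay distinct
```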

  • A long time ago, in a phone company far, far away, though after the Death Star exploded, I was working at Bell Labs in a building with mainframes in the basement and Vaxen and PDP11s scattered all over. A friend of mine was in (or maybe running, by then) the Unix System V porting group, and they did a port of System V Unix to mainframes, and installed it on the Amdahl in my building.

    It wasn't perfect - getting backspace-echo to work well on that sort of I/O controller just wasn't going to happen - but it was pretty close, and you could at least use vi. I was taking a compiler course at the time, doing a lot of compilation, and the choice of timesharing a Vax with ~40 people or using the Amdahl with ~2 people was pretty obvious :-)

    Why would you port Unix to Big Iron? Well, not only could you use the blazingly fast 10+ MIPS of CPU (when Vaxen were the canonical 1 MIPS), but more importantly, the distributed I/O architecture lets you do immense quantities of disk I/O to run databases. Not only was this Entertaining Research, but it was valuable for phone company billing and equipment-configuration-management applications, allowing more flexible Unix development environments, and it was a much better development environment than Vax-sized machines for the 5ESS phone switch development folks, who needed to compile and build programs that were huge then and are large even today.

    On the other hand, fsck took a *long* time to run, since the machines had a lot of disks, and this was back when Unix file systems really did need to be checked every time you booted :-) That was one of the things that prompted the development of multi-threaded fsck programs, since checking one at a time was immensely annoying.

  • > efforts such as the User Mode Kernel do not
    > achieve quite the same results: the "partitions"
    > or "user mode kernels" still share the same
    > underlying kernel data structures. If one UMK
    > craps out, it could potentially bring down the
    > whole machine.

    Wrong. If one umk craps out, it affects nothing else. Every umk has its own data structures, completely separate from every other kernel on the system.

  • Hate to burst your bubble, but the "mighty slow-moving Dinosaur Pen Big-Iron" is faster than any cluster imaginable.


  • Mainframes are known for being very powerful (in IO speed, not necessarily CPU speed), but very unfriendly. Normally, one sets up smaller, more friendly computers that connect to the mainframe, and nobody but the High Priests actually works with the mainframe itself.

    Many companies have a huge mainframe investment. They have a lot of money tied up in the hardware, software, and data onboard. Other companies just starting up find that their bandwidth actually requires a mainframe--think of an online stock brokerage or online bank.

    Unfortunately, it is really hard to get mainframe people--admins, programmers, and the like. It's a relative cakewalk to get Unix/Linux types.

    Linux on the mainframe allows easy access to the mainframe bandwidth and the data already there, as well as better access to a techie base.

    Think of it this way: you are running a trading firm (already using a mainframe for trade databases), and you want to become an online trading firm. You need a very powerful web site that can handle heavy bandwidth. That site needs to be able to communicate with a slew of online users (so many that your network guys are installing a T-3 rather than multiple T-1s), and it needs to hit that mainframe database, fast.

    Install Linux on that mainframe, compile Apache on it, and build your website onto the mainframe. The web site is now on a machine that can take full advantage of a T-3, and will access the database within the same machine. Effectively, your database connection is TCP over loopback. Finally, you can attract really good sysadmins, programmers, and Web designers because it's easier to find Linux talent than OS/390 talent.
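    The "TCP over loopback" point is concrete: when the web site and the database share one machine, their connection never touches the wire. A toy sketch, with a trivial echo server standing in for the database (the query and the `result:` framing are invented for the example):

    ```python
    import socket
    import threading

    def toy_db_server(sock):
        """Stand-in for the database listener: accepts one
        connection and echoes the query back as a 'result'."""
        conn, _ = sock.accept()
        data = conn.recv(1024)
        conn.sendall(b"result:" + data)
        conn.close()

    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # loopback: traffic stays in-kernel
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=toy_db_server, args=(server,)).start()

    # The "web site" side: same machine, same loopback address.
    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(b"SELECT 1")
    reply = client.recv(1024)
    client.close()
    print(reply)  # b'result:SELECT 1'
    ```

    Nothing here leaves the kernel's loopback path, which is the whole appeal: the T-3 serves the users, and the database round trip costs almost nothing.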

  • "Mainframes are known for being very powerful (in IO speed, not necessarily CPU speed)"

    This used to be true, with I/O devices directly connected running at channel speeds. However, with the advent of crossbar technology and SANs (12.6 GB/s on a Starfire and 100 MB/s full-duplex disc access using fibre) it is true no longer. A big UNIX box can beat the pants off a mainframe in terms of CPU, I/O and cost.

    Where UNIX doesn't come anywhere near the mainframe is in handling a complex work mix and in availability. These are the major reasons why you find enterprises running online transaction processing on the mainframe and data warehousing on a cheaper, more powerful box.
  • As someone else has mentioned, someone has already started a port of Debian. The May 9th edition of the Debian weekly newsletter gives the following link:

    Quite a good paragraph from it says...

    "I've found many friends of Debian within IBM. Debian is seen here as a well respected, high quality distribution. A debian-s390 distribution also seems to fit well with the idea that IBM just doesn't want to be in the distribution business."

    There's a mailing list at and a site at if you're interested.

    Debian just gets better and better!
  • Clusters are catching up in speed, but have light years to go before hitting the reliability and stability of a big iron.


  • ``There was a PC370 add on card for MCA bus IBM PS2's. There was an RS370-390 add on card for older RS6000's.''

    IBM always seems to have had things like this. Anyone else remember the 370 emulator for the XT that even let you run a version of the VM/CMS operating system on your desktop? (We were a big CMS shop back in those days and I lobbied to get a couple of these cards, but they were much too pricey for us.)

  • From a purely functional viewpoint, the monopoly-busting trials seem to do heaps of good. Look at IBM: it's grown into a very clever player from the ultimate bully. I think every sufficiently big power concentration (governmental/corporate/religious/whatever) should be limited like that; what do our liberals think? The worst problem is that there always has to be a bigger power establishing the limits :P

    Running Linux on IBM mainframes in their virtual-machine "userland" is nothing new in itself (it was noticed on /. a while ago), but large production deployment is only possible with official support.

  • by Black Parrot ( 19622 ) on Thursday May 18, 2000 @02:35AM (#1064538)
    At Linux PR [] you can read a bit about the e-commerce apps coming out for that environment.

  • by Anonymous Coward on Thursday May 18, 2000 @02:37AM (#1064539)
    Following the recent announcements that SuSE and TurboLinux will be releasing Linux for the IBM S/390 and Red Hat's release for the Itanium, Slackware have announced a release for the Commodore 64.

    "It just seemed logical to go for a machine with a huge userbase." Said a spokesman with a funny last name who was probably called Rob or something. "Linux scales remarkable well to small machines. In fact much better than it does large servers."

    Critics of the company are sceptical about whether the system will be reliable since it comes on tape.

    "I just used the CD record feature on my stereo" said Rob. "It works for music so why not data?"

    When asked whether a Spectrum version would be available, Rob said "It all depends on the success of this version. We're hoping to port it to all Z80-based machines, and possibly even pre-electronic machines".

    Charles Babbage was not available for comment.
  • by mrBoB ( 63135 ) on Thursday May 18, 2000 @02:38AM (#1064540)
    dude, did you read that article [] from linuxplanet awhile back? Twas awesome! If you thought that article was awesome, check out this sweet piece [] of hardware.
  • Yes, Debian has an S/390 port. Check out the mailing list archives for details. The porters seem to be making progress, but they still seem to be in the bootstrapping stage according to this message. []

  • > However, valuable as UMK is for some
    > applications, it's (yet) not in the same league
    > as mainframe partitioning.

    True enough. I never claimed I was creating the next VM.

    > UMKs, too, can be upgraded, but if the real
    > kernel has to be upgraded, the whole
    > machine goes down.

    Yup. But if you ever have a setup where essentially everything is inside a UMK, and the hosting kernel is stripped down to the point that it's just providing processes, device drivers, and a filesystem, then you can run that forever, and just upgrade the UMKs.

    > How well does the UMK protect the underlying
    > real kernel (and thus other user-space apps)
    > for excessive resource consumption? For
    > example, what if an errant application running
    > in an active UMK (or just buggy code in the
    > kernel used by the UMK) starts spawning threads
    > like crazy, will the UMK protect the rest of
    > the real machine from adverse effects?

    The UMK, just like a native kernel, runs in a constant amount of memory. You configure it with 64M, that's all it will ever use. You configure it with 4 processors, it will never have more than four processes running at once. So, you can protect the native kernel from excessive resource consumption by sticking things inside a virtual machine.
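    For reference, User Mode Linux takes those caps on its own command line; something along these lines (the exact option names varied between UML releases of that era, so treat this as approximate):

    ```
    # Boot a user-mode kernel capped at 64 MB of RAM, with its root
    # filesystem living in an ordinary file on the host:
    ./linux mem=64M ubd0=root_fs
    ```

    The point is that the caps are fixed at boot: whatever runs inside, the guest can never grow past what the command line granted it.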

    > The UMK kernel, valuable as it is, is still
    > limited in its ability to differ from the true
    > underlying kernel.

    No it's not.

    > For example, can a kernel within a UMK provide a
    > different thread-scheduling policy than the
    > underlying real kernel?

    You seem to think that the UMK is somehow not a full kernel. It is. The underlying kernel is just a provider of resources. If a new version of the kernel provided a funky new scheduling policy, UMK would support it, regardless of what is supported by the underlying kernel.

    > However, my whole point is to emphasize how in
    > IBM's mainframe environment complete isolation
    > of distinct partitions is very easy

    Yeah. Nothing comes close. Not even Linux plus UMK. Maybe this is a small step in that direction and maybe some people will find that useful, but there is a long way to go.


  • Well that's a good question. I'm a Debian developer, although I have never touched a S/390.

    The main Debian S/390 porter at this point is not an actual official Debian developer. Debian is supporting him, though, with a mailing list, etc., because such a port is a very cool thing.

    Before such a port is actually blessed as official Debian, it would have to be 100% free, which would mean it would have to use the fully free kernel port. However, pragmatically it doesn't make sense to force someone to wait until that is ready before they begin porting Debian over to the platform.

    So in summary, Debian is, and will continue to be, 100% free software. A corollary to that is that you may port Debian to run under non-free software, like IBM's kernel modules, if you want to. Just like people who don't have access to mainframes can run it under the non-free VMware.
  • Two errors in your post:

    1. ``VMS is by Digital for Alphas''

    You can still buy VAX architecture systems from Compaq (cringe -- I still have problems associating Digital's products with Compaq). The top-of-the-line system is slower than almost all of the current Alpha line (even the workstations), so I can't imagine who'd buy them nowadays, though I suppose some organizations would have their reasons.

    2. ``VMS sucks. I can't put my finger on why, but it just does''

    You're wrong. My guess is that if you had any real experience with VMS it was on a poorly configured and managed system. I've always considered VMS and UNIX to be more alike than either of their most rabid proponents are willing to admit.

    Funny how the state of the art in clustered systems is still a VMScluster (IMNSHO). Most of what the UNIX community calls clusters is really just a failover capability. Now that Tru64 has 99.44% of the functionality of a VMS-based cluster, including a common system disk, er, I mean, root filesystem, it should be assuming that title real soon now. What would float my boat would be if Compaq were to provide the details of how they do their clustering to the world so that Alan Cox could crank out a set of patches to provide Linux with this capability. Should only take him a weekend or so, right?

  • Drool.... I want one :-)

    Good article link, BTW. Somebody moderate that up.
  • There is no way Linux will replace OS/390 on IBM's mainframe. It is the most mature OS in existence, it is written specifically for the hardware, and it can do too many things Linux cannot.


HELP!!!! I'm being held prisoner in /usr/games/lib!