


IBM Runs 41,000 Copies of Linux on Mainframe 226

An anonymous reader wrote in to send us a story from Bloomberg about IBM making mainframes act like hundreds of servers. The best part is the bit at the end where they mention testing it by running 41,000 copies of Linux.
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward
    I love it when people who know nothing about mainframes make comments like this. That's the beautiful thing about slashdot -- you can be an idiot without much effort.
  • by Anonymous Coward
    Large ISP customers prefer their own box to run their own software on, not just a directory structure on someone else's box. Generally, this problem is solved by massive numbers of boxes, which the ISP makes sure are up, etc, for the customer.

    Now picture all of those individual boxes being a single mainframe. Each customer would still have the security of having their own box, installing their custom software in a custom configuration, but all of the boxes would be virtual. From a customer's perspective, things would still be the same; remote login, no problems with other users overrunning their stuff (it's their individual machine), etc. The ISP only has to maintain a single machine; the mainframe.

    Also, need to install another box? Just create another VM. Of course this may only scale to 41,000 boxes, but after that you just buy another mainframe and a few mainframes take much less space, power, etc, than thousands of PCesque boxes.


  • by Anonymous Coward
    I think this was best described as a man bites dog story. Mainframes are great, Linux is great.

    Linux on IBM hardware is great but the real value is the contributions IBM's OS architects are making to Linux.

    IBM's AIX is the best Unix. Read for yourself.

    http://www.computerworld.com/home/print.nsf/(frames)/000327CEEA?OpenDocument&~f

    http://www.dhbrown.com/dhbrown/opsysscorecard.cfm

    More importantly, IBM is taking its expertise in scalability, reliability, and maintainability and adding major functionality to Linux, such as its Journaled File System (JFS).

    In a couple years IBM will have thousands of developers servicing Linux, developing code, and architecting critical missing pieces that Linux needs to mature.

    A couple mainframes running Linux is just a nice story compared to the real contribution IBM is making.
  • by Anonymous Coward
    >Now that I'm browsing at +2, I do avoid a lot of junk, but I sorta miss seeing my own post

    The post that was the parent's parent is rated -1:

    [This is why solaris/irix are dead. (Score:-1, Flamebait)]

    Well, how the heck did you see that post? And for that matter if you find a -1 discussion interesting/annoying enough to comment on, how can you trust the moderation at slashdot enough to surf at 2?
  • Off topic, but your sig (HELP STOP PLATE TECTONICS) reminds me of a bumper sticker I saw that I wish I could find. It said:


  • A script kiddie with one of those could DDoS himself! :)
  • Scratch any bank, insurance company, car manufacturer, etc., in the United States, and you will find a whole passel of mainframes doing all the heavy lifting.

    Mainframes are far from dead. Most big business relies on them.

    *sigh* If only we could kill COBOL...


  • Solaris is hardly dead OR dying.

    Maybe on the Intel front... But then Sun never really put much stock in x86 Solaris. I wouldn't use x86 Solaris unless it was the last choice - I'd run Linux for that. Get a cheap Sparc and run Solaris on that.

  • This would probably be a textbook example of a "well-behaved" monopoly... remember, monopolies can be good for consumers; it just usually happens that they aren't.

    One would think that IBM learned its lesson on what it means to be a "poorly behaved" monopoly.
  • I'm pretty sure they've managed to port Apache natively to OS/390 itself.
  • The issue of running applications that take advantage of 'divvying' the work between different machines would be a moot point, really.

    You see, if I wanted an application to do that, instead of running 64 copies of an OS talking via MPI or the like, I'd run one that could divvy out the work to 64 to 128 threads, running on one machine.

    Granted, more distributed applications are available for Linux and the like than for OS/390. That alone could make it worth it. It wouldn't be a 'performance' boost over running one that can thread properly in an OS/390 environment.
  • Nope, it's not so much thousands of 'processes' flooding the machine as it is thousands of places to flood the network from. 41,000 behind one T1 line won't do all that much, compared with 1,000 hooked up to 1,000 T1s.. ;-P
  • Same type of thing. Just never been done on a scale of 41,000 copies.. ;-P
  • Oh, sorry. I forgot about the natural disaster survivability of the Compaq and Dell servers. How silly of me.. ;-P

    Of *COURSE* they'd have problems with those extreme situations, but the point is, *anything* would.

    At least IBM will *LOAN* you a new machine while you rebuild. ;-P
  • Yep, but that post hardly went to the extent of 41,000 copies.. ;-P
  • Perhaps the money would be an issue, but the 4 TB of disk space? No problem there.. ;-P Easy..
  • Mainframes themselves are hardware redundant. Quadruple redundant power supplies, along with redundant memory and disks. Did I mention redundant processors that can take over what another was working on when it went bad? There is a reason why they are so darned expensive. There is also a reason why all of the larger banks and financial institutions use them, or Stratus machines..
  • I know of the issues that you raise. I assume, though, that the OS is going to be out of the way. From what I have read about the virtual machines, you can assign an amount of memory and a number of processors to each VM. If there is a good multi-threaded Linux renderer, and it can support 4+ procs well, then the OS overhead is lower than it is for a single-proc machine. The more procs per job, the faster the render. The biggest issue is where the fall-off is for multiple procs.
    Assuming that one of the virtual machines is a file server, the data can be sent to the render processes at a greater speed than over a fiber connection. Every little speedup makes a difference.
    Also, how the render jobs are divided up can make a big difference in what performance gains can be made. The way we divide shots on our render farm is good for quicker shot speed and lower frame speed. This works well for several machines with single- or dual-proc configs.
    With a mainframe render farm, and 10+ procs per VM, the frame speed can be increased as well. Memory requirements lessen because of shared memory, and turnaround time can stay about the same.
    The cost for similar horsepower from an Alpha farm or an SGI Origin farm would be much higher, I assume.
  • by Anonymous Coward
    Reading Slashdot kiddies speculate about the capabilities and limitations of mainframes like the S/390 is like listening to joe-sixpack talk about double-eww-dot-com on the subway.
  • I've got no problems with mainframes; I think they're pretty cool. The very fact that they did it proves IBM's prowess in computing systems.

    I wouldn't want to run Linux on anything less than one OS instance per processor. Such a system would make a fine Beowulf cluster, since the internal bandwidth of the mainframe is higher than any method of network wiring available today. It would be truly awesome. I know there is overhead in running multiple kernels and supporting drivers and such, but it should be faster than discrete machines.

    Anybody have any idea why it wouldn't work?
  • I hope your sig is open source... because, I've made a modification you might like.

    10 PRINT "THIS IS MY SIG"
    20 GOTO 10


    Or this variation:
    10 PRINT "THIS IS MY SIG... ";:GOTO 10

    Or this:

    Oh... to stay on topic... Linux on IBM Mainframes rock! :)
  • by jabbo ( 860 )
    Mainframes are usually tuned for block device I/O and are notorious for being crappy at TCP/IP. I'm not sure if this is a result of OS/390 being bad at it or the hardware, but my impression is that mainframes do best talking SNA to a farm of Unix boxes that act as little more than IP stacks proxying data to the mainframes.

    Isn't there one of the mainframe gurus from Schwab on here? Those guys run some big iron in the back room...

  • Been done for years. In the late '70s a company called Network Systems released the first router, for the purpose of letting mainframe devices sit in another location. CNT is their big competitor, which does a similar thing.

    Of course the original routers didn't do what you want (and didn't do IP until much later), but they were the direct ancestor of what companies use now when they want disk or tape drives located in a different state or country.

    Interesting story: in 1985 this company sent a VP to California to examine and perhaps buy a small company that was just getting started then. After doing due diligence they decided this company was going nowhere. The small company that wasn't going anywhere was Cisco. (Actually, most of those close to Network Systems today conclude that the only people who would have won from that deal going through is 3Com, but that is a different story.)

  • Is over 1 million pages every minute. 1,041,666 to be exact.

    WOW that is amazing!

    The Cure of the ills of Democracy is more Democracy.
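A quick sanity check on that figure: 1,041,666 is exactly what you get by dividing a daily total of 1.5 billion pages by the 1,440 minutes in a day (the 1.5-billion-per-day figure is an assumption here; the comment doesn't quote it):

```python
# Hypothetical derivation of the 1,041,666 pages/minute figure:
# assume (not quoted above) a daily total of 1.5 billion pages,
# divided by the 1,440 minutes in a day.
PAGES_PER_DAY = 1_500_000_000   # assumed daily total
MINUTES_PER_DAY = 24 * 60       # 1,440

pages_per_minute = PAGES_PER_DAY // MINUTES_PER_DAY
print(pages_per_minute)  # 1041666
```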

  • System/390 mainframes are what big banks and such use. If your PC crashes, you reboot; when the mainframe that American Express uses to process transactions crashes, a VP at IBM gets his ear chewed out. They are designed for 99.999% (or better) uptime. Everything is redundant.

    The only way to get a bomb box on a 390 is to use Semtex (OK, maybe not, but you get the idea)

    I'm kidding about the semtex folks!

    The Cure of the ills of Democracy is more Democracy.

  • They did this as a stress test demonstration. They didn't actually *RUN* a production system with 41,000 copies of an OS.

    And it's not just the web server. Look at how many applications you can run under Linux. Now look at how many under OS/390. Go ahead, I dare you to look for something for OS/390 without paying big moola for it..
  • Billions? Nope. 1 Million, perhaps 2. For the performance/reliability of several hundred PC based servers. Or at least a few dozen Sparc Enterprise 6500's..
  • True indeed, but I suspect this is due to the OS/390 kernel itself and its fairly deficient IP stack. I suspect that with a little tweaking, the system could be tuned to provide some sort of direct lines out for IP as well.

    I do know what you're talking about though..
  • I disagree with this. Not all companies want to throw millions around using a closed, little used system, and develop their own inhouse software to solve every little problem.

    Sure, there are many out there, but if given a choice between two things, one with applications already available, and one without, they'd choose the OS with availability..
  • Now, instead of paying the cost for 41,000 servers, along with data connections for all of them, and the physical cost of locating even 200 servers, buy 3 of these mainframes..

    Set them up in 3 corners of the globe.. Throw in high-speed lines, allowing the machines to take over when another fails..

    And best of all, I bet even in Alaska you can get an IBM specialist at your door within an hour.

    Anyway, sorry for the wiseass remark, but it was just *TOO* hard to resist.. ;-P
  • This would work well for a render farm if...
    There were terabytes of RAM. You would want a couple hundred megs of RAM for each render job.

    One thing that would be better is to assign 4 procs to a machine, thus reducing the total amount of memory to be 10,000 machines X 250-500 MB.

    I am sure that we could get that much memory for the system... ;)

    I don't think that most render engines would port well to 100 procs... but... if one was reprogrammed with that in mind... it might do really well.

    Too bad I don't have the time or the resources.
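For what it's worth, the arithmetic behind that suggestion works out to single-digit terabytes (a sketch; the 250-500 MB per image is the poster's estimate, not a measurement):

```python
# Back-of-the-envelope RAM totals for the proposed render farm:
# fold 41,000 single-proc images into four-proc images, then
# apply the estimate of 250-500 MB per image.
MB_PER_TB = 1024 * 1024

images = 41_000 // 4                  # 10,250 four-proc images
low_tb = images * 250 / MB_PER_TB     # total RAM at 250 MB/image, in TB
high_tb = images * 500 / MB_PER_TB    # total RAM at 500 MB/image, in TB

print(f"{images} images need {low_tb:.1f}-{high_tb:.1f} TB of RAM")
```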
  • For a render farm, you generally want the OS to stay the hell out of the way. There isn't much point in running multiple concurrent copies of an OS if one OS could do the job. There is also the problem that mainframes are optimized for I/O, not CPU.

    For a render farm you'd be far better off with a bunch of SGI Origin 2Ks, or a room full of Alphas. Interestingly, in these types of applications, physical support costs for floor space, air conditioning, power, etc., can begin to be quite significant, so you have to start thinking about work/watt or work/m^3.
  • Or they could one-up EDS.

    Images of a cat happily playing around in its own happy world.

    Zoom out of cat brain and see Matrix-like techno-hive with thousands of cats on feeder tubes, with electricity sparking around.

    Voice Over:
    Cat herding. It's what we do.

  • This has just /got/ to be said:
    Can you imagine a Beowulf cluster of these things?
    (-1 Redundant, and I deserve it)


  • I'm getting a story on how Citrix isn't going to meet its earnings estimates this quarter.

    Chas - The one, the only.
    THANK GOD!!!
  • As someone else had suggested this opens a neat new possibility: a hire-a-server service that gives you total root access on your own linux box; each "server" is actually a prepackaged linux memory snapshot running in VM.

    Just think - any script-kiddie who gets root is unable to see that he hasn't got the machine to himself, powerless to hide from admins who can watch his every move from the real OS, and unable to do any damage that can't be rolled back in an eyeblink. Crashes and "reboots" recover in seconds. And, when the customer is done hiring their server, it just drops into the bit bucket and the resources are reallocated to other servers.
  • Assuming you weren't just attempting humor and need to learn quite a bit more about how mainframes work, go read this [linuxplanet.com] very interesting piece at linuxplanet.com about running virtual machines on mainframes and running other OS's on those virtual machines. It's several pages long but well worth the effort.
    IBM's mainframe operating systems will stay right there on the mainframes.
  • I remember a story not too long ago, I don't remember the name.. but it was about a guy talking about running a few thousand copies of Linux on a VM mainframe, OS/390 or something like that... or am I just hallucinating??
  • I have kind of had this conspiracy theory about IBM. I think that they see Linux, and open source, as a way to have M$ levels of power without the problems. If challenged, they could just say "Well, the code is out there, why didn't YOU think it up? Neener neener," and go back to world domination. They create nifty hardware hacks which they release - after all, they aren't of much use unless you have a mainframe to begin with (which you come to them for). They supply code to a standards-based Web server (and release that, too). Monopoly? Heck no, we gave you the source code!

    Slightly off topic, but it's been on my mind.
  • Could we rewrite Apache to spawn off a new copy of the operating system for each connection?

    (this joke is kind of obvious, so forgive me if someone else has already posted it)

  • You wouldn't even need to do that. Say your failed radius server was named RADIUS. You would type, from the operator console,

    force radius
    autolog radius

    and you would get the same effect as a power cycle on a standalone machine, but without the time wasted on hardware and memory self-tests, SCSI bus initialization, etc. You go straight from the autolog command to kernel initialization instantly, so you could be back up and running in a few seconds, depending on how long it takes for linux to start up and your radius server to initialize.
  • God, I spent weeks studying that instruction, trying to figure out how to put it to work. IBM put out an entire BOOK on that instruction. It's basically a huge chunk of microcode that emulates a small piece of the VM kernel. It's pretty much useless for hacking/play purposes, unless you want to design around the instruction.

    I had better luck playing with the vector instructions. I had a very fast fractal generator :)

  • Right. VM has shared memory across virtual machines. Even though there are 41,000 images of Linux, there need be only one copy of the code in storage, shared by all of those virtual machines.

    If one of those virtual machines were to attempt to write to a memory page, CP would simply fork off a private copy of that page, and continue on.

    I believe that Linux (and most Unixes) do the same thing -- one read-only copy of each currently running executable, shared between all processes running that executable.
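Right: this is the same copy-on-write trick Unix fork() relies on. A minimal Python sketch of the observable behavior (private copies appear only when someone writes):

```python
import os

# After fork(), parent and child share pages copy-on-write.
# The child's write forces a private copy of the touched page,
# so the parent's view of the data is unchanged.
data = [0] * 1024

pid = os.fork()
if pid == 0:
    data[0] = 42   # triggers a private copy in the child only
    os._exit(0)

os.waitpid(pid, 0)
assert data[0] == 0   # parent still sees the original value
print("parent data[0] =", data[0])
```

CP does the same thing one level down: the page being lazily copied belongs to a whole guest image rather than a process.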

  • Another reason to run a cluster within a single machine is to circumvent bottlenecks within Linux. If you have an application that insists on forking off dozens and dozens of tasks, you will start to run into the Linux scheduling algorithms, which are performance-optimized for a small number of tasks.

    If you create multiple Linux images using the zero-latency networking capabilities that VM gives you, you can split up your application until there are a small enough number of tasks per Linux image to run smoothly.

    Another good reason is, as AC says, if some users need root access for some reason. You can set up a system as a virtual machine, give them root access to that image, and nothing they can do will affect any other virtual machine in the system. Great for testing new Linux kernels. In fact, VM was designed specifically for this purpose -- except that it was used to debug new OS/390 (MVS) kernels instead.
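The splitting step can be sketched in a few lines (illustrative code, not any real VM tooling): cap the number of tasks per Linux image so each image's scheduler stays in the small-task regime it is optimized for.

```python
import math

def images_needed(total_tasks, max_per_image):
    """Fewest Linux images so no image schedules more than max_per_image tasks."""
    return math.ceil(total_tasks / max_per_image)

def partition(tasks, max_per_image):
    """Deal tasks round-robin across the minimum number of images."""
    n = images_needed(len(tasks), max_per_image)
    images = [[] for _ in range(n)]
    for i, task in enumerate(tasks):
        images[i % n].append(task)
    return images

# 500 forked workers, at most 40 per image: 13 images, each with a
# run queue short enough to keep the scheduler's scan cheap.
images = partition(list(range(500)), 40)
assert len(images) == 13
assert max(len(img) for img in images) <= 40
```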
  • This is true, but they have been making pretty good strides in the TCP/IP area, especially since their new "thrust" for the S/390 is in the enterprise server area. I would imagine the new version (out as of a few weeks ago) should show some improvement in that area. When we install it, that's the first thing I'm checking for :)

    Finkployd (Systems Programmer for Penn State)

  • *ABEND*

    If you knew the kind of day I've been having, you would not have written that :)

    F*cking JCL....


  • I'm 15, why the hell should I know anything about mainframes.

    No excuses; I'm a mainframe systems programmer and I'm only 6 years older than you. You should be ashamed of yourself, not knowing everything about all aspects of computing. :)


  • Referring to your OT question, a few months ago I ran across a research project on this. In a nutshell, they are building a mainframe-like computer out of the reject commodity parts from the manufacturing plants.

    On first boot the OS goes through and checks each chip/component piece by piece for reliability. If unreliable, it tries to assess exactly how/where it is unreliable and routes around the problems. The first boot and diagnose process takes about a week!

    The end result is a very reliable large computer (64 CPUs, IIRC) with awesome failover capabilities (since dealing with broken hardware is integral to the whole system). I don't remember the figures, but it was something like the final running computer might have 30% or 40% broken parts.

    Oh and don't forget the bit about 'commodity parts' - broken cheap IBM clone hardware.

    I'm sorry I can't give you a link; I'm not at home and my net connection is too slow/unreliable to search for it. However, the project is funded at least in part by HP and it is hosted at a university (Stanford? Cornell?).

    They do have a working prototype, though which definition of "working" they are using... I'm in no position to judge. Cool pictures though. Cool project too.



  • I can't believe I'm the first to suggest this, but I'd like to see IBM run 40,000 virtual machines on one server and then link them all together in a Beowulf cluster.
  • This was posted weeks ago, no?
    The article was something to do with 'notes on running Linux on a mainframe'.
    They made a good point, though.
    One IBM mainframe, and you could present thousands of 'virtual' linux boxen to your web clients.... virtual machines in the extreme.
  • I was thinking that too. But actually, there aren't many ways to crash those things without using a shotgun.

  • Mainframes aren't dead. But, the jobs that required a mainframe 20 years ago can now usually be done by a couple PCs. So, now mainframes have larger work loads.

    Mainframe CPUs are pretty impressive, but a cluster of dual Celerons or OCed P3 FCs will blow them out of the water, if all you want is raw number crunching. If you're rendering 3D frames for a movie, which is very parallel (each frame, or portion of one, can be done by a separate CPU with very little overhead), then PCs are your best bet, by *huge* margins.

    But, if you want to do database work, with huge databases (think, the phone company, recording long distance calls as they happen, for 15 million customers), then you need a mainframe, something with internal bandwidth so high as to make the 1.6GB/s of a PC look like a serial port.

    But, even in the huge database model, you could still be using PCs to ease the load, by passing transactions through a cluster of PCs which would do the rate lookups, or something, letting the mainframe deal with just the one database.

    Mainframes aren't dead, but they aren't the ultimate solution either; when used properly, they are worth every penny.
  • You thought you could attack 1 machine from billions of places to take it down.. Now you can attack 1 machine and bring millions of servers down! (at least thousands) Worse yet, have all 41k servers attack someone else! ..well.. maybe not ;> -end of joke-

  • Note the end of joke line =P

    There seems to be an implicit assumption here about the homogeneous nature of large boxen. Specifically, that all machines with >1 processor work in fundamentally the same way.

    This turns out not to be the case. The "poor man's multiprocessing" that most young'uns are familiar with - Symmetric MultiProcessing (SMP) - has as its key feature a single system image scheduling tasks across multiple processors. Performance characteristics can be summarised with the following how-to-fix-it rules of thumb:

    1. What's the performance problem? MORE MEMORY NEEDED
    2. More Memory hasn't fixed my performance problem? GET A FASTER MEMORY BUS

    Contention for main memory is just about always the problem with SMP systems.

    However, those wise old sages in the Big Iron world were never going to be satisfied with this approach. There are any number of ways of putting >1 processor in a machine; SMP is merely the "cheapest" (and possibly the easiest too). Specifically, S/390 systems tend to use clustering techniques which effectively involve n independent machines sharing hardware resources - such as network connections, memory & disk. These are coordinated by a single Hypervisor "master" image (usually VM) which is capable of spawning any number of (potentially different) "slave" operating system images - including, of course, itself. Note also that for any given machine, there is absolutely no guarantee that (number of OS images concurrently active) = (number of processors in machine); usually the "=" is replaced by a ">>" sign (hence the 41,000 Linux tasks metric!).

    Since the key operating characteristic of this approach to multiprocessing is many heterogeneous systems performing different tasks, it's not as simple to identify the performance bottlenecks :-). However, canny readers will note that since IBM mainframe hardware development has spent the last 30 years focusing on I/O, and consequently throughput, rather than getting into arms races over CPU MHz, fundamentally the solution to performance problems remains the same. High I/O rates (and not especially superbly quick CPUs) coupled with relatively cheap OS image creation change the approach to dealing with single-task performance problems - whereas a *IX or NT OS is limited to spawning another process (and hoping it'll be able to exploit any spare SMP processors lying around without competing for precious I/O resource), S/390 systems can spawn another process (which may make sense if the system is configured to allow OS images to spread across multiple processors), or spawn an entire new OS image and *guarantee* no I/O contention (OK, OK - vast oversimplification). Once a system consists of >8ish processors, this tends to prove overwhelmingly more effective for achieving whole-system throughput improvements, compared to an SMP arrangement (which would at this point be spending a huge proportion of its time contending for I/O resource or waiting for the OS image to resolve IPC and memory contention issues).

    It's true that the most effective way of doing SMP multiprocessing on Intel hardware is to use NT (for the moment...). However, don't make the mistake of generalising that rule-of-thumb outside the problem domain: intel-based SMP multiprocessing. This does *not* equate to the wider class of computing solutions based around multiprocessing.

    Here Endeth The Lesson.

    PS: Crays, Connection Machines and Transputer systems operate in other, fundamentally different ways too...

  • IBM already has the fastest web server in the world in their mainframe. Why would anyone want to switch to Linux and give up on the VIPA Takeover and Sysplex technologies? This whole demonstration is just extra hype to get Linux people interested in what the mainframe hardware can do. (Which is not necessarily a bad thing)
  • Would you rather have a single program abend, or have the whole system crash? You get the former with the mainframe, and all too often even on "mature" flavors of UNIX (forget Windows) you get the latter instead.
  • Here is another article [opensourceit.com]. I had submitted it and it got accepted but for some reason it didn't show up on the main page.

  • Hate to say it, but wouldn't it be more effective to run NT? Although there are good reasons why Linux is not as scalable (lack of high-spec machines in the hands of Linux developers), which are being fixed - for now NT scales *far* better.

    NT, as its code currently stands, will never be able to be ported to the mainframe. Microsoft has tried other platforms but seems to have failed, numerous times. Linux on the other hand already supports more platforms than any other OS. Porting to the mainframe was just the next logical step in the evolution of Linux.

    By the way, for every user you would have to have a separate instance of NT, because NT is not multi-user! Taking that into account, NT would take more resources than Linux!

  • They have moved their article off of their "ticker" page. Those of you who wish to read it can find it here [bloomberg.com].
  • Redundant memory, redundant hard drives, redundant processors, .... But what happens when the cleaning staff unplugs it from the UPS because they needed that plug for their vacuum?

    They have redundant power feeds, too. And not just the whole machine - every box in the room has 'em, and each feed powers a separate set of power supplies in the box. (Some devices get their power from their controllers. And these are generally driven by multiple controllers...)

    If they did it right, the power cords go to opposite sides of the room, thence by different paths to different feed points where the building gets power from separate feeds that came in from different parts of the grid (which was a consideration in choosing the site for the computer center: at a grid boundary, or close enough that you can pay to string a line from a different section).

    There's at least one UPS, of course. On ONE of those two feeds. (A UPS, on the average, creates one extra power failure in its first year of operation. B-) )

    And I'd like to see the janitorial staff try to plug into the connectors that feed the mainframe. They aren't your typical duplex outlet.

    The point of concurrent maintenance, of course, is that ANYTHING in the box can fail, and be swapped out and replaced, without stopping the processes.

    They might not get as much CPU time or disk response speed as usual while the system is in "degraded" mode due to failed parts waiting for or undergoing replacement. But they run continuously for years - and are shooting for forever.

    I hear they once moved a Tandem across town by putting in a mainframe's frame and the comm lines, then gradually installing boards in the new location and unplugging them from the old, until the whole machine was at the new site. Still running.

    Try THAT with your PC. B-)
  • If IBM wants it, they can have it! Just gimme a ring when the commercial is going to be shown in my market.. And a check for one dollar, to hang on the office wall next to my autographed Elton John photo.
  • The main reason to build a Beowulf cluster inside a system like this is just because you can. Since the internal networking is light-weight, it doesn't cost you too much.

    JMS made some good points, but the basic advantages he suggests (better scheduler control, better memory allocation) really can be done by tweaking the process scheduler and memory allocation functions in the operating system, rather than massively cloning kernels. If there are limitations in the hardware (e.g. how many bits or frames of translation table space, etc.), this doesn't really get around it, but if you can already run more than 1 copy over VM, it may be easier than hacking everything that touches memory allocation. It can depend a lot on the sparseness of the virtual memory space you're trying to simulate.

    While it may not do much for user space work, it would be a fun place to test kernels. You save the room-full of 1U rack-mounted boxes, and instead have lots of virtual machines you can blow away when your kernel hack fails, and it lets you test lots of different parameter combinations in parallel.

  • Unix systems do a really good job of keeping user-space processes from trashing each other, though they occasionally hog resources in ways the system designers didn't expect or the sysadmins didn't pay attention to soon enough. But it's still possible to crash Unix systems by messing around too much in the kernel. Linux has gotten much better than the old days, but making changes still means testing. A Virtual Machine environment, as long as it looks close enough to the real thing, can give you a convenient environment for testing kernel hacks, drivers for non-hardware-dependent pseudo-devices, and other things that may stomp around underneath the OS's hard crunchy shell.
  • I fail to see why the "best" part is that these mainframes run Linux. If they had been running a *BSD, Windows 2000, or something else, would that not be as great? Why is Linux the flavor in the soup that's touted over the other ingredients? Give me a freaking break already. Want to advertise Linux? Put up more banner ads. Want to report news? Do so with some semblance of journalistic integrity.
  • There are multiple benefits from this.

    The physical space required to house 41k servers is astronomical, and once you add power consumption into the equation there really is no comparison. Heck, even if you bought two mainframes (one for redundancy) the cost benefit is still phenomenal.

    I guess the point I'm trying to get at is that even though the mainframe is expensive, it could easily pay for itself in less than a year in the right webhosting environment.

    (who hopes he was somewhat clear)
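The payback argument above can be made concrete with a back-of-envelope sketch. Every figure here is a placeholder assumption, not from the article; the point is only the shape of the comparison between fleet capital plus power costs and a single (redundant) mainframe purchase.

```python
# A rough, entirely hypothetical cost model for the payback argument.
# None of these figures come from the article; they are placeholders.
n_servers = 41_000
pc_cost_each = 2_000                    # assumed hardware cost per PC server
pc_power_watts = 200                    # assumed average draw per PC
mainframe_cost = 3_000_000              # assumed, for a redundant pair
kwh_price = 0.10                        # assumed $/kWh

pc_capital = n_servers * pc_cost_each
pc_power_cost_per_year = n_servers * pc_power_watts / 1000 * 24 * 365 * kwh_price

print(f"PC fleet capital:      ${pc_capital:,}")
print(f"PC fleet power / year: ${pc_power_cost_per_year:,.0f}")
print(f"Mainframe capital:     ${mainframe_cost:,}")
```

With these (assumed) numbers the PC fleet's power bill alone exceeds the mainframe's purchase price within a year, which is the "pays for itself" claim in miniature.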
  • Some "obsolete" but workable IBM AS/400 units are sold very cheaply. I know of at least one enterprising geek who filled a basement with AS/400s that he bought at a price negotiated by the pound. They all run; what he does with them I don't know. I'd hate to see his electric bill.

    (IBM Servers sold by the pound =anagram>
    'Short-lived!' presumed by snob.
    So dry: rubbish developments.)
    [almost never get any +1 when I include an anagram, I wonder why... :) ]
  • Weird, IBM used to be to Wang (and DEC and such) as Microsoft is to Linux and various Bsd's. So if the devil has been incarnated anew in Redmond, who'd have guessed that Armonk would end up hosting more than its share of the Open Source insurgency?
  • Didn't we just have this story?

    Last month's story [slashdot.org] had many more details.

    One thing neither of them talks about: VM/370 allows shared code between "machines". If a chunk of code is ROMable, then you can load a single read-only copy of the kernel, or EMACS, and everyone who uses it gets the same copy, executing out of the same chunk of RAM. I don't know if this is at all relevant in Linux. Is the executable code typically kept (widely) separate from the data?
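    As it happens, Linux does keep executable code separate from data, and the answer is visible in /proc/<pid>/maps: text segments are mapped read-only and executable, so the kernel shares one physical copy across all processes running the same binary. A minimal sketch (the sample map lines and addresses are hypothetical):

```python
def classify(perms: str) -> str:
    """Classify a permission field from Linux's /proc/<pid>/maps.

    "r-x" segments are program text: read-only and executable, so the
    kernel keeps one physical copy and shares it between every process
    running the same binary -- much like VM/370's shared segments.
    "rw-" segments are private, writable data, one copy per process.
    """
    if perms.startswith("r-x"):
        return "shared text"
    if perms.startswith("rw-"):
        return "private data"
    return "other"

# Two lines in the format of /proc/self/maps (hypothetical addresses):
for line in [
    "00400000-0040b000 r-xp 00000000 08:01 123 /bin/cat",
    "0060b000-0060c000 rw-p 0000b000 08:01 123 /bin/cat",
]:
    perms = line.split()[1]
    print(perms, "->", classify(perms))
```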

  • The plot thickens as they attempt to take over the world!

    And have to become the next Microsoft? We've already been through the DOJ grinder once, and both employees who remember it don't want to go back.

    As I have explained before, IBM believes in being a player in every market and a monopolizer of none. Our strategy is to get dominance in any market we play in, if possible. If we don't have dominance, develop and/or produce cutting edge components like GPS chips, hard drives, Transmeta chips, etc. and make a few cents off of every dollar our competitor makes. If you use someone else's software or hardware, our services consultants will still fly out and tell you how to make the best of it and fix it when it breaks (like Win 2K). In short, IBM stays the biggest single IT company in the world without holding a monopoly club over any given market. It enables those of us who work for Big Blue to feel like the good guys, especially when IBM is amongst the leaders in the push for standards that are non-MS-centric for such efforts as Java, etc.

    Disclaimer: I don't "represent" IBM. I just work there.

    B. Elgin

  • Don't think about disk I/O on mainframes the same way you do on PC architecture. These puppies are built to pump enormous amounts of data in and out of very fast drive arrays. It's not an IDE or even a fast SCSI RAID. Fibre Channel, and HIPPI (I think). Throughput that will blow your hair off. (What's left of it)

  • Well, if we're talking about "force of nature" failures, then a mainframe becomes even MORE attractive.

    Here's why:
    IBM actually manufactures (and sells) a mainframe system that comes with its OWN satellite uplink and guaranteed bandwidth. They're designed for use on oil platforms.

    These systems also come with some insanely fancy remote-mirroring and update functions (because, after all, oil platforms are hostile environments for most computers).

    So, if you're worried about natural disasters, you could theoretically buy two of these systems. Then you won't need to worry about anything less than a nuclear war -- even if the land-lines get killed, you've still got your friendly satellite.

    Besides which, distributing a couple o' mainframes is a hell of a lot easier than effectively distributing 82,000 PC-based systems! I mean, heck, just think of the POWER requirements for the PC equivalents . . . Good lord, you'd probably have components failing on at least one machine every five minutes or so (MTBF would kill ya on that many machines).

    Of course, I could be wrong.
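    The MTBF hand-waving above has simple arithmetic behind it: with N independent machines, the fleet as a whole sees a failure roughly every (per-machine MTBF) / N hours. The MTBF figure below is an assumption, not a measured number:

```python
# Back-of-envelope MTBF arithmetic with an assumed per-machine figure:
# a fleet of N independent machines fails somewhere about every
# (per-machine MTBF) / N hours.
per_machine_mtbf_hours = 50_000         # assumed: roughly 5.7 years per box
n_machines = 82_000                     # the fleet size from the comment above

fleet_interval_minutes = per_machine_mtbf_hours / n_machines * 60
print(f"~one component failure every {fleet_interval_minutes:.0f} minutes")
```

So "every five minutes" overstates it for these assumed numbers, but a failure somewhere in the fleet every half hour or so is the right order of magnitude.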

  • At last, all those old servers can finally run a nice OS :o) Now, who wants a second-hand mainframe?
  • Q: So how do *you* fit 41,000 penguins in a room?

    A: Use a blender!

    Sorry if it's a little, umm, inflammatory...

  • It would seem to be an excellent way to set up multi-homed web servers.
    One organization could lease space and a Linux install to anyone.
    This would help all those who have put boxes at a distant locale just for the bandwidth, etc.
  • by sjames ( 1099 ) on Thursday March 30, 2000 @11:06AM (#1160259) Homepage Journal

    Can you say "single point of failure"? Good! I knew you could!

    Modern mainframes tend to have multiply redundant everything. System failure is not likely.

  • by Thomas Charron ( 1485 ) <twaffle@NoSpAM.gmail.com> on Thursday March 30, 2000 @11:11AM (#1160260) Homepage
    Doesn't work that way. Having one copy would run much slower than running several copies of OSes in virtual machines. Multiplexing the data across all of the architecture takes more time than the horsepower boost is worth. Hence, running 10 copies is literally 10 times faster than running one.

    It's a different world in the land of the mainframe..
  • 99.999 % track record.

    For those who don't do the math, that's an average of 5 minutes of down time *PER YEAR*

    If the problems that you foresee *DO* happen, and it goes down for, oh, an hour, that would, statistically speaking, mean that 12 other sites had 0% down time for the year.

    The cumulative down times of 41,000 servers would be *MUCH* more than 5 minutes.

    Now, 41,000 is a *GROSS* exaggeration.
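    The "five nines" figure quoted above works out as follows: 99.999% availability leaves 0.001% of the year as allowable downtime.

```python
# The "five nines" arithmetic: 99.999% availability leaves 0.001% downtime.
minutes_per_year = 365.25 * 24 * 60
downtime_minutes = minutes_per_year * (1 - 0.99999)
print(f"{downtime_minutes:.2f} minutes of downtime per year")
```

That comes to roughly five and a quarter minutes per year, matching the "5 minutes *PER YEAR*" claim.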
  • by Thomas Charron ( 1485 ) <twaffle@NoSpAM.gmail.com> on Thursday March 30, 2000 @11:49AM (#1160262) Homepage
    The wonders of the mainframe. You can swap out the bad parts *while the machine is running*, and bring the replaced part back into service.

    While I haven't had the opportunity to work on something with 20 years of uptime, I *HAVE* worked on one, and have a terminal open now to a machine with 6 *YEARS* of uptime..
  • by jms ( 11418 ) on Thursday March 30, 2000 @01:52PM (#1160263)
    It would depend on what you were doing on that machine. There are at least two cases where Linux under VM will probably beat out native Linux.

    1) You have an application that wants to spin off a large number of separate tasks. Your Linux kernel will not perform well under these circumstances, but if you built a virtual Beowulf cluster of many Linux images, each running four or five active tasks per image, then each Linux image will run smoothly and efficiently -- within its design parameters.

    2) You want to run a task that requires a huge address space -- far exceeding your real memory.

    First off, things aren't so bad, because by using a shared segment, VM can use one shared-memory copy for all of those Linux kernel images, saving a lot of memory.

    Also, mainframes can page so efficiently that they can massively overcommit memory without taking a performance hit. They were designed for this. The normal configuration of a VM system is thousands of users at terminals, each with their own virtual machine, running CMS. Each virtual machine might have two or three megs of shared program code, and however much private data they happen to be using at the moment. VM was designed to support a massive overcommitment of memory. Mainframes even have special paging storage, called expanded storage, and a set of hardware instructions for performing quick paging back and forth between real and expanded storage. Think of it as a fast ramdisk, attached right to the CPU bus.

    The end effect is that you can actually get an improvement in performance by turning off your own paging, and relying on VM's native paging facilities. MVS sites discovered this years ago.

    Say you want to provide 2 gigs of storage to an application, but your mainframe has only 1 gig of memory. You would have two options:

    1) Allocate about 1 gig to the Linux image, and create paging space within Linux. The Linux kernel handles all the paging.

    2) Allocate 2 gigs to the Linux image, so that Linux never has to page, and let VM handle the paging.

    You'll get better performance using method 2.

    The biggest strengths of mainframe designs go right to the heart of your objections. It's what VM was designed to do, and it does it very well.

  • by finkployd ( 12902 ) on Thursday March 30, 2000 @11:35AM (#1160264) Homepage
    Problem with that is, something that takes down one service takes down both of them. I realize mainframes are pretty damn reliable boxes, but if it goes down, do you want it to take your webserver with it?

    Not quite. The S/390 runs multiple OSes in LPARs (Logical Partitions) and they are pretty much independent of each other. The webserver can run on Linux (or OE, the mainframe port of AIX) and not affect a production LPAR running OS/390 at all.

    Finkployd (Systems Programmer at PSU)
  • by Rombuu ( 22914 ) on Thursday March 30, 2000 @10:53AM (#1160265)
    But I bet Ultima IX still runs like crap on it.
  • by Tower ( 37395 ) on Thursday March 30, 2000 @12:22PM (#1160266)
    Remember that mainframes (i.e. the 390) have some really nice component-kill and redundancy features... a processor or memory stick dies... oh well, shut it down and keep going, then fire off an alert to the admin. Concurrent maintenance.... mmmm....
  • by haggar ( 72771 ) on Thursday March 30, 2000 @02:18PM (#1160267) Homepage Journal
    I must say, Linuxworld proves to be much more interesting than /. as for posted newslinks. (but the discussion on /. is more interesting).

    But on topic: I had the pleasure to work with VM/ESA, on top of which was running another IBM mainframe OS, VSE/ESA. Several copies of VSE were running at any time. And we of course started additional ones, for testing programs. And the uptime was incredible!!! We had an entire disk unit *destroyed* (filings of hard disk material flying around), but the system was still happily humming on. Very impressive.

    Also, did you guys know that OS/2 was developed partly by running it on top of VM? I think these mainframes + VM are the coolest technology to come out of IBM, if we don't count the bionic chip coming out in 2015 :o)

  • by technos ( 73414 ) on Thursday March 30, 2000 @12:19PM (#1160268) Homepage Journal
    You run a virtual network inside of the machine. You can run thousands of VMs, but you're still stuck with only being able to cram five to ten Ethernet features into said box. So you use the gigabit-speed virtual network inside of the box and route 'em all through the VMs that have been assigned actual Ethernet. Voila! An entire Class B in a single box!
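    The "entire Class B" claim checks out on the arithmetic: a Class B network (a /16) spans 2^16 addresses, minus the network and broadcast addresses, which leaves comfortable headroom over 41,000 virtual machines.

```python
# The "entire Class B" arithmetic: a Class B network (a /16) spans
# 2**16 addresses, minus the network and broadcast addresses.
usable_hosts = 2**16 - 2
print(usable_hosts, "usable addresses")   # headroom over 41,000 VMs
assert usable_hosts >= 41_000
```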
  • Microsoft doesn't try other platforms for WinNT.

    Microsoft works out a license arrangement, and the hardware vendor does the porting of the very few bits of native code.

    It's the hardware vendors who have given up on their own WinNT ports... MIPS and DEC.

    The only hardware Microsoft has been interested in, is the hardware which the typical end-user would put their grubby mitts on. First, Apple BASIC cards (when every end-user knew what a PCB was). Then mice. A short-lived i186 booster card. Millions of more mice. Trackball. More mice. A few gaming devices. More mice. Now the X-Box. Common thread: grubby mitts of the unwashed masses.

    I'm not flamebaiting here. Linux may be cool, Linux may have superior traits in some regards, but as a whole, Linux has a lot to learn about offering products to the winning markets. 'Cuz there's only two winning markets: business-to-business (Why should I trust you with my billion-dollar mission-critical apps? You don't even have the money to pay for software!*) and mass-market (I don't even know how to turn the dang thing on!**)

    * Suits don't care about how kewl something is. They don't want to be surprised. They don't want risk. They want to do it just like the other guy does it, except with a somehow better profit margin.

    ** If you say you've never heard someone say this exact line, you're lying.

  • by Kagato ( 116051 ) on Thursday March 30, 2000 @11:33AM (#1160270)
    "Dimension, a Herndon, Virginia-based computer consultant that Nortel Networks Corp. agreed to buy last month, tested 41,000 copies of Linux for a large telecommunications customer. The client, which Dimension wouldn't name, provides Internet access."

    Hmm, Qwest just announced they were entering into a hosting partnership with IBM. Opening many joint-venture web hosting centres in the US...

    At the same time, IBM also has a deal with a web site design firm in Minneapolis, MN to build large numbers of web sites...

    The plot thickens as they attempt to take over the world!
  • by philg ( 8939 ) on Thursday March 30, 2000 @11:26AM (#1160271)

    From the article, I got the impression that they wanted customers to use their existing mainframes (presumably data warehouses and such) as webservers. At least, that's what I got out of their claims at increasing speed by doing away with webserver-database latency.

    Problem with that is, something that takes down one service takes down both of them. I realize mainframes are pretty damn reliable boxes, but if it goes down, do you want it to take your webserver with it?

    (I'm assuming the security issues inherent in putting a webserver -- esp. a public one -- directly on one's data warehouse have been hashed out in the course of the VM development. Nonetheless, websites are flypaper for h4x0rz -- that's putting a lot of trust in software.)

    Same thing holds for anyone using it to replace 41,000 (yeah, whatever) webservers. One machine fails, 41,000 web servers (and god knows how many sites) out of business. I suppose a redundant mainframe is sufficient insurance -- but how much more appealing is that than buying a comparable number of Suns, and having just a few backup boxes?

    Seems like an interesting idea, and it certainly creates options. I don't know if it's the Sun-killer, though; and though it might convince existing users to not buy Sun, I don't know how many new buyers it would attract.

    Then again, the closest I got to working on a mainframe was touring a server room with a bunch of AS/400's in it once, so don't think I'm the Delphic Oracle or anything. :)


  • by Silver A ( 13776 ) on Thursday March 30, 2000 @11:17AM (#1160272)
    Links and articles:

    There are more given in the LinuxPlanet article (which is where I got the other links).

  • by Rupert ( 28001 ) on Thursday March 30, 2000 @11:35AM (#1160273) Homepage Journal
    Seems to me that if I had a mainframe, or could get hold of one, this is an awesome virtual hosting environment.

    Give me money, and I give you root access to your own, incredibly reliable, Linux box. If you trash it, it can be restored from backups in seconds. Incremental cost of adding another virtual host: almost nil. Until, of course, we get to 41,000. By then I should have enough money to buy a new mainframe. And so on.
  • by technos ( 73414 ) on Thursday March 30, 2000 @12:33PM (#1160274) Homepage Journal
    True, they're not going to replace 390 with Linux anytime soon. It's a market expansion thing. Buy a single IBM or 5,000 PC's.. IBM sells more iron, their customers save money. IBM keeps the mainframe market healthy.

    That would actually make a good sales slogan for Big Blue. Pan the camera over a virtual jungle of CAT5 and RS232 strewn on raised flooring, and up onto a cluttered wall of dilapidated 2U servers. One unit is smoking and sparking foreground right. Announcer: 'Would you rather have 10,000 chickens'. Screen goes black as the camera zooms out of the black background of the IBM logo on a shiny new mainframe. Announcer: 'Or one bull?' Wrap it up with the standard IBM music and blue bar quick scrolling to stop. 'IBM E-Business: A bull in your corner.'
  • by zantispam ( 78764 ) on Thursday March 30, 2000 @11:21AM (#1160275)
    ...one copy for each distro.


    Here's my [redrival.com] copy of DeCSS. Where's yours?
  • by (void*) ( 113680 ) on Thursday March 30, 2000 @11:18AM (#1160276)
    Hey they tried! But each NT server tied itself up trying to fight the others to be Domain Controllers!
  • by zcdill ( 157433 ) on Thursday March 30, 2000 @10:54AM (#1160277) Homepage
    So how do *you* fit 41,000 penguins in a room?

    (sorry for the ad slogan infringement, but it seemed like the right thing to do ;-)

  • by neo-opf ( 167085 ) on Thursday March 30, 2000 @11:04AM (#1160278)

    (to the tune of 99 bottles of beer/..... sorta)

    41000 copies of Linux on the box
    41000 copies of Linux
    if one of those copies should happen to fail
    wait in the dark, til the power comes on

  • by Thomas Charron ( 1485 ) <twaffle@NoSpAM.gmail.com> on Thursday March 30, 2000 @10:55AM (#1160279) Homepage
    Finally, someone catches on to the good that big iron can do. Mainframes *DO* have a practical use in the modern computing environment. Every time I hear someone mention the 'Old, vulnerable mainframes', I cringe.

    They have been doing what VMware is doing, i.e., running virtual machines, for nearly three decades. They know what they're doing.

    I'd be interested to know what large ISP is looking to use them in this way. To my knowledge, this would be the first published use of mainframes specifically to serve as a server-multiplexor (Is that a word?) in an ISP environment. This could be the 'next big thing' for these machines. Either that, or yet another flash in the pan with a lot of 'cool' factor..

    (Fingers crossed)
  • by Thomas Charron ( 1485 ) <twaffle@NoSpAM.gmail.com> on Thursday March 30, 2000 @10:59AM (#1160280) Homepage
    You don't. You have 64 in a room, with 40,936 in line behind the doors.. Make 'em run *REAL* fast in and out (A practical use for penguin mints?). If they do it fast enough, it *looks* like there are 41,000 in there.. ;-P
  • by jms ( 11418 ) on Thursday March 30, 2000 @01:04PM (#1160281)
    I'm not involved in this work; I wish I were. However, I'm very familiar with VM/ESA and the low-level programming facilities that are being used to pull off this 41,000 Linux virtual machine cluster. I used to write assembly language programs that used IUCV and virtual CTCAs, and what they are doing is crystal clear.

    First off, this setup is running under VM/ESA. This is NOT the same operating system as OS/390. Diehard VM'ers tend to view OS/390 about as fondly as Linux users view Windows. OS/390 is the huge, IBM-management-approved operating system with JCL, that evolved out of OS/360. VM is the back-room project that IBM management has tried to kill, over and over, but can't kill, because it's needed for OS/390 development, IBM developers demand it, and many customers demand it. OS/390 is what management wants to sell -- it's the "strategic" operating system. VM/CMS is what the IBM development teams use because it was designed, from the bottom up, by IBM's best software developers, specifically as a platform for software development. Really. I used it for 15 years. If you're developing or debugging IBM assembly code, it's just the best. VM was a skunkworks project, and a damn fine one. It's a shame that it isn't that well known.

    The two operating systems should NOT be confused. Different operating systems. Entirely. OS/390 can run as a guest under VM/ESA, but not vice versa.

    That said, VM has a HORRIBLE native TCP/IP implementation. It's a big program, written in Pascal, and it's a dog. In fact, it's about the weakest part of VM. It never got much attention, because mainframe networking has always been driven by SNA, VTAM, etc. and IBM development is traditionally done on a 3270 style terminal. All the tools, XEDIT, the mail system, etc, are all designed for 3270, block mode terminals. VM is lacking in TCP/IP support for the same reason that Unix systems are lacking in SNA support, because no one wanted it. This is changing.

    The VM TCP/IP implementation is a standalone program. The TCP/IP program runs in its own virtual machine. When someone wants to connect to TCP/IP, they use a system call to establish a connection between their virtual machine and the TCP/IP virtual machine using a facility called IUCV -- Inter User Communication Vehicle.

    IUCV is a very fast, block-oriented, secure, unspoofable point to point protocol for establishing data links between virtual machines. A programmer using IUCV starts by creating a link to the target, then sends blocked data by making a system call with the address/length of the data. The CP nucleus (their word for the kernel) copies the data into the system address space, synthesises and schedules an interrupt for the target virtual machine, and immediately reschedules the source virtual machine. The target virtual machine receives the interrupt, issues an IUCV receive system call, and CP copies the data into the target machine address space. This is all done completely asynchronously. It's extremely fast, and utterly secure. Zero-latency networking is a nice thing to have.
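    The IUCV flow described above (kernel-mediated, block-oriented, in-order, never-dropped memory copies) can be caricatured in a few lines. This is a loose analogy under stated assumptions, not real IUCV; the class name and methods are invented for illustration:

```python
from queue import Queue

class IUCVLink:
    """A loose analogy for one IUCV point-to-point link (not real IUCV).

    The Queue plays the role of the CP nucleus: send() copies a block
    into "system space", receive() copies it out on the other side.
    Transfers are just memory copies, so there is no wire latency, and
    blocks are never dropped or reordered.
    """
    def __init__(self):
        self._cp_buffer = Queue()

    def send(self, block: bytes) -> None:
        self._cp_buffer.put(bytes(block))   # CP copies into system space

    def receive(self) -> bytes:
        return self._cp_buffer.get()        # CP copies into target space

link = IUCVLink()
link.send(b"block 1 from Linux image A")
link.send(b"block 2 from Linux image A")
print(link.receive())   # blocks arrive in order
print(link.receive())
```

The real mechanism adds the interrupt synthesis and scheduling described above; the sketch only captures the "a transfer is a memory copy" property that makes it zero-latency.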

    Which leads to something very cool. IP over IUCV.

    I don't know exactly how they set their system up, but here are the basic tools that they have to work with:

    1) TCP/IP to the outside world can be handled in at least two ways:

    o Through a native Linux network device driver. In VM, physical peripheral devices are assigned to individual virtual machines. A virtual machine with a physical network interface attached to it simply uses it as an ordinary I/O device.

    o Via a connection to a native TCP/IP virtual machine, using a special device driver that knows the native IP-via-IUCV protocol.

    2) Connections between virtual Linux machines can be handled in a couple of ways:

    o Through a virtual (or real) CTCA (Channel-to-Channel Adapter). A CTCA is a high speed parallel interface used to connect mainframes together, point to point, very fast. If you use virtual CTCAs, you can move Linux images from one machine to another without ever having to reconfigure anything within the Linux images themselves, simply by replacing the virtual CTCAs with an attached real CTCA and changing the directory entry of the virtual machine.

    o Using an IUCV driver, one can interconnect all of the internal Linux images via virtual point-to-point lines. This is much faster than virtual CTCAs. The drawback is that you need to configure IUCV links within a virtual machine, so changing things around requires reconfiguration within the Linux image itself, and IUCV is designed to work efficiently within a single system, not across multiple systems. It can be done, but it's a hack, and it's inefficient.

    o Through an obsolete API called VMCF, which was superseded by IUCV.

    The big innovation going on here is the realization that by running multiple Linux images on a single machine, or multiple Linux images on multiple machines, using mostly IUCV links, one can almost eliminate the network latency, because the data transfers are simply memory copies, and one can eliminate the network collision problem, and the network traffic problems. If you have 100 machines sitting on a fast ethernet, and you start getting a lot of inter-machine traffic, you are going to have collisions, and each machine has to waste a fair amount of time evaluating which packets are his. This removes the biggest bottleneck in large clusters of small machines. Also, an IUCV connection is guaranteed to never drop a packet, and always transmits packets in order, so TCP over IUCV proceeds smoothly and efficiently.

    This gives you lots of scaling options for your virtual Linux network.

    One more point.

    There was an article [slashdot.org] that came out two days ago but, due to a Slashdot bug, never appeared on the main page and proceeded directly to the "older news" category. In it, the author wrote:

    An S/390 running a light load will not run as quickly as a fast PC server under a light load, according to Courtney. The difference between the two systems will not be apparent until the load is much larger.

    "The PC will begin to degrade and will typically reach a point where it avalanches down in performance as its load limit is exceeded. The mainframe starts out at a lower performance level, from the standpoint of an individual program task, but degrades much more slowly and much more linearly as the load increases," he says.

    Revisiting my previous comment in this thread, I remember, a while ago, reading in another article about a difference of opinion between some IBM programmers and the kernel maintainers. Supposedly, IBM was complaining that Linux performance went south when the number of running tasks became large, and proposed some scheduler changes, but the kernel developers didn't want to change it because the changes would have slowed the kernel down in the "normal" case of only a few active processes. Does anyone have a link to this or remember what I'm talking about?

    Sounds like this article is describing the same known effect. However, by running multiple images of Linux under VM, one obtains a workaround for the problem. If a Linux virtual machine is overloaded, create a new virtual machine image, and offload one or more of the biggest processes to run on the new machine.

    This is all very interesting stuff. Don't forget, the stuff we're just discovering now in the Linux world, is largely stuff that the IBMers, and especially the VMers have been working on and perfecting for about 30 years. I'd love to see a Linux kernel that can run 41,000 tasks, with a linear performance degradation curve. Until then, at least there is a way to run Linux on an operating system that has those characteristics.

    And the fact that their operating system can run 41000+ simultaneous tasks without disintegrating, but ours can't, should eventually get under someone's skin and prompt efforts to make the Linux kernel scale better under heavy multitasking loads. Why should they have all the fun?

    - John

  • by Shotgun ( 30919 ) on Thursday March 30, 2000 @11:27AM (#1160282)
    Mainframes are usually tuned for block device I/O and are notorious for being crappy at TCP/IP

    Spent some time working on IBM's TCP/IP stack. You're talking about the past. The mainframe's stack used to be single threaded and very slow. As a workaround, IBM hacked it so that you could run several stacks on one image (this caused its own 'stack' of problems, of course).

    Release 6 was a complete rewrite of the TCP/IP stack. They used it to set industry benchmark records when it reached the 1.5 billion pages per day mark about two years ago. They gave us all nice denim shirts with the embroidered slogan "1.5 Billion served". I wouldn't call that crappy, would you?
  • by Enoch Root ( 57473 ) on Thursday March 30, 2000 @10:56AM (#1160283)
    They considered running NT servers, but they didn't have the $8,000,000 for the licenses and the 4 TB of disk space. :)
