
IBM Announces First Linux-only Mainframes

A reader writes "The new zSeries mainframe for Linux, which costs $400,000 and is aimed at processing transactions at large businesses, is IBM's first mainframe computer sold without IBM's traditional z/OS mainframe operating system. More info at the IBM zSeries page." This is something that IBM and the other Big Iron *NIX vendors have been saying: as Linux grows in maturity, they want to replace their *NIX offerings with Linux. However, there's still work to be done in that area.
This discussion has been archived. No new comments can be posted.

IBM Announces First Linux-only Mainframes

  • by blackcat++ ( 168398 ) on Friday January 25, 2002 @09:06AM (#2900077)
    The link to the SourceForge Foundry is slightly broken. Correct link is here [sourceforge.net].
  • url (Score:1, Informative)

    by Anonymous Coward on Friday January 25, 2002 @09:06AM (#2900081)
    Try http://foundries.sourceforge.net/large/ [sourceforge.net]

    (Use the Preview Button! Check those URLs! Don't forget the http://!)
  • More... (Score:4, Informative)

    by Marcus Brody ( 320463 ) on Friday January 25, 2002 @09:13AM (#2900104) Homepage
    More coverage from the reg [theregister.co.uk]
  • by Tam-Lin ( 17972 ) on Friday January 25, 2002 @09:20AM (#2900135)
    I'd just like to correct something here: they aren't replacing the previous zSeries operating system, they're adding another choice. Now you can choose between z/OS, z/VM, and Linux. While there is something called Unix System Services that runs within z/OS, it's not a stand-alone operating system; it runs under z/OS, not by itself.

    And with Linux, you do lose a lot of the RAS characteristics that z/OS provides, as well as 40 years of compatibility with existing workloads. Linux is being sold as something to run new workloads on, workloads that z/OS previously wouldn't have been considered for.
  • by rabalde ( 86868 ) on Friday January 25, 2002 @09:23AM (#2900150) Homepage
    ZDNet [zdnet.com] have a recent story [zdnet.com] about a company called Boscov's Department Stores [boscovs.com] replacing a lot of NT machines with one IBM zSeries. From the article: "Boscov's, with 36 locations in six states in the mid-Atlantic region, scrapped its client/server architecture and is in the process of consolidating 70 IBM NetFinity 8500 and 500 servers running Windows NT 4.0, on a recently purchased IBM zSeries 900 mainframe running SuSE Linux Enterprise Server 7 as a virtual machine."
  • Re:Relative costs? (Score:2, Informative)

    by Snord ( 44479 ) on Friday January 25, 2002 @09:24AM (#2900154)
    From the article:

    Calling the new machines Linux-only is a bit of a stretch, of course, since the zSeries "Raptor" mainframes and the iSeries Model 820 servers will have z/VM and OS/400 installed on them (respectively) to act as partition managers.
  • Re:NO Z/OS? (Score:5, Informative)

    by Anonymous Coward on Friday January 25, 2002 @09:32AM (#2900191)
    No. z/VM is the 'meta-OS'. It's pretty much analogous to VMware in what it can do, in terms of hosting other OSes underneath it.

    z/OS is geared toward high-volume transaction, database, and batch processing. It runs either under z/VM or, more typically, natively or in an LPAR.

    An LPAR is a 'logical partition', a way of dividing a mainframe up into several virtual machines.
    For now, these are static and implemented when a partition is 'booted' - IPL'd (initial program load) in mainframe terms.

    VM, on the other hand, supports hundreds, even thousands of dynamically generated virtual machines. You can run VM inside an LPAR, providing two levels of partitioning. I expect VM and LPAR technologies will converge at some future point.

    Meanwhile, everything can talk to everything else over 'HiperSockets' - memory-to-memory pipes that look like a TCP/IP network to your software - blindingly fast.
  • Cost Justification (Score:3, Informative)

    by NeonSpirit ( 530024 ) <mjhodge@gmai[ ]om ['l.c' in gap]> on Friday January 25, 2002 @09:42AM (#2900225) Homepage
    Consulting Times has an article [consultingtimes.com] which gives a "real world" cost-justification example.
  • by Amarok.Org ( 514102 ) on Friday January 25, 2002 @09:51AM (#2900254)
    Granted, the mainframe has a good architecture. But why should my company spend $400,000 for a Linux mainframe, when we could run Linux faster on a $2,000 PC server?

    Architecture is the key. What's the difference between a 120 MIPS mainframe and 3000 MIPS desktop, and why is the 120 MIPS mainframe faster in mainframe type applications?

    Architecture. Specifically, things like I/O, process handling, etc.
    Don't get me wrong, I'm a strong believer that "desktop" type hardware can compete with the big boys, especially considering the cost differences and the extra speed, boxes, redundancy, etc. that you can buy with all the cash you save. But... there are times when the big mainframe architectures really do have a reason for being.

    Just my $.05 (inflation, you know).

  • Re:HOT SWAPPING!!! (Score:2, Informative)

    by rhost89 ( 522547 ) on Friday January 25, 2002 @10:11AM (#2900337)
    It does support hot swapping via IBM's channel paths. You can vary a channel on/offline and replace the offending piece of hardware. As far as disk drives go, they are all contained in a large hot-swappable DASD RAID controller (ours supports about 4 TB of data at the moment).
  • Is that wise? (Score:2, Informative)

    by LiquidPC ( 306414 ) on Friday January 25, 2002 @10:19AM (#2900363)
    Not to sound like flamebait, but there have been a lot of issues with 2.4 lately; it doesn't really seem stable enough that I'd put it on my mainframe, theoretically speaking. Problems range from fs corruption to sync() bugs, etc. Sure, it's a nice desktop OS, but I don't think it's ready for the mainframes.
  • by PeterMiller ( 27216 ) on Friday January 25, 2002 @10:27AM (#2900396)
    I have been working in the mainframe world for a few years now, and one thing you have to understand about mainframe operations is that since its conception the #1 priority has been UPTIME. Speed was number 8 or 9.

    Only recently (the last 7 years) has speed been a consideration, and that was thanks to the PC revolution. But again, you were always dealing with two camps: mainframe guys and PC guys.

    So all this means is that there is another choice for people who want the "five nines", the holy grail of computing, which neither Windows, Unix, nor any other platform besides the mainframe can deliver.
  • Re:HOT SWAPPING!!! (Score:3, Informative)

    by ColdGrits ( 204506 ) on Friday January 25, 2002 @10:31AM (#2900414)
    You're out of date there :-)

    All of the new SunFire range (3800, 4800, 4810, 6800, 15000) have full hotswappability on PSU, disks, system controller boards, CPUs, memory, etc etc etc.

    The SF15K is the 106-CPU top-end system, while the SF3800 only goes up to 8 CPUs.

    Oh, and you can mix'n'match different speed CPUs in the same system too - useful for expansion in the future.

    Hope this helps!
  • by Anonymous Coward on Friday January 25, 2002 @10:38AM (#2900450)
    To expand on the parent post:

    PCs crash a lot. They're made from cruddy hardware because the average consumer either doesn't know the difference, doesn't care, or can't afford anything better. Mainframes have uptimes in the years; some have been going for decades. They usually have hot-swappable everything, including the usual power supplies and disks, but also hot-swappable CPUs, memory, expansion cards (network, etc.), and sometimes even motherboards. Finally, they have a high degree of self-awareness. Today's PCs are starting to get some of these features (your BIOS might know the speed of the CPU fan, wheeee) but the mainframes are way ahead. They're set up to figure out when things are about to fail. When a potential failure is detected, the mainframe will call the vendor and order replacement parts automatically. A service tech will usually be there within hours to replace the part, and the part will be taken back to the lab to see why it failed. The knowledge gained from the failing part is used to design the next revision so it doesn't fail.

    When it comes down to it, CPU power isn't all that important in the mainframe world. They do a shitload of I/O, and they just work. An Athlon XP might run circles around a mainframe in Quake 3, but its components are slow and unreliable.

  • by Anonymous Coward on Friday January 25, 2002 @11:20AM (#2900640)
    Amazon had a mix of Unix and M$ (more M$ than Unix).
    Burlington was mostly Unix.
    Boscov had 70 aix and >500 M$.
    telia dropped mostly Solaris.
    Home Depot is apparently going to drop all M$.
    More and more are showing up, and while Linux is replacing some Unix, it is also replacing an equal or bigger percentage of M$.
    As the economy worsens and the companies that are making profits are running linux, well...
    It is exactly what happened in the late 80's early 90's when M$ was the correct way to go.
  • by rasilon ( 18267 ) on Friday January 25, 2002 @12:20PM (#2900965) Homepage

    It's not the maintenance that is the problem, things like configuration management and data integrity are more important. If you have a hundred servers, then you have a hundred places to check that everything is in sync. If you are running a small shop with a dozen or so machines and one administrator then they can keep all the state in their heads. When you get up to hundreds then the state is larger than one person can easily cope with and you start having to communicate state to others. With hundreds of boxes, it is easy to overlook things, with fewer boxes, the communication is easier, and cheaper.
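    The "hundred places to check that everything is in sync" problem is essentially config-drift detection. A minimal sketch of the idea (the hostnames and config keys below are made up for illustration, not from any real fleet): hash each host's configuration and flag the hosts that differ from the fleet majority.

```python
import hashlib
from collections import Counter

def config_hash(config: dict) -> str:
    """Hash a host's config in a canonical (sorted) form so fleets can be compared."""
    canonical = "\n".join(f"{k}={config[k]}" for k in sorted(config))
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_drift(fleet: dict) -> list:
    """Return hostnames whose config hash differs from the most common one."""
    hashes = {host: config_hash(cfg) for host, cfg in fleet.items()}
    majority, _ = Counter(hashes.values()).most_common(1)[0]
    return sorted(host for host, h in hashes.items() if h != majority)

# Hypothetical three-host fleet; web03 has drifted.
fleet = {
    "web01": {"dns": "10.0.0.2", "ntp": "10.0.0.3"},
    "web02": {"dns": "10.0.0.2", "ntp": "10.0.0.3"},
    "web03": {"dns": "10.0.0.2", "ntp": "10.9.9.9"},
}
print(find_drift(fleet))  # → ['web03']
```

    With a dozen hosts this is overkill; with hundreds, some automated version of it is the only way one administrator's head stops being the single point of failure.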

    The other thing is CPU residency. Lots of small boxes waste CPU power because they tend to be devoted to one task and are only capable of that task. The problem is, they are so small that you can't add other tasks to them, so you need a new box... Generally, CPU residency on small boxes runs about 10%; with mainframes, this can rise to 90%. Take two tasks - one runs during the day, one runs during the night. Conventional wisdom would allocate two small boxes, one per task, wasting them for most of their life. Mainframe usage would run them both on the mainframe - this gives each process more power when it runs and doesn't waste the box when it doesn't. Most traffic tends to be peaky, but only for a short period of time, so if the box is large enough to hold them both, you get a saving whilst still making all the tasks faster.
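    The day-task/night-task arithmetic can be sketched with a toy model (the load curves below are invented for illustration, not measurements): two dedicated boxes each idle most of the day, while one shared box sized for the combined peak runs at much higher utilization.

```python
# Hourly CPU demand (fraction of one small box) for two tasks:
# one busy during the day, one busy at night.
day_task   = [0.8 if 8 <= h < 18 else 0.05 for h in range(24)]
night_task = [0.8 if (h < 4 or h >= 22) else 0.05 for h in range(24)]

# Two dedicated boxes: each box's average utilization.
util_day   = sum(day_task) / 24
util_night = sum(night_task) / 24

# One shared box, sized for the combined peak hour.
combined = [d + n for d, n in zip(day_task, night_task)]
peak = max(combined)
util_shared = sum(combined) / (24 * peak)

print(f"dedicated boxes: {util_day:.0%} and {util_night:.0%}; shared box: {util_shared:.0%}")
```

    The exact numbers depend entirely on the made-up load curves, but the shape of the result is the point: non-overlapping peaks consolidate well, which is the residency argument in miniature.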

    Small boxes are good when you need maximum cycles per buck, the task is easily partitionable with minimal interprocess communication, and the tasks are continuous. When the tasks are not easily partitionable, need lots of IPC, or are peaky, then larger boxes make sense.

    The thing to remember is that where the scale is large, you need to make use of that scale to get maximum performance. You don't see chemical plants using hundreds of small vats, they use a few really big ones. With these systems they are used at a scale where communications and simply keeping track of what is going on is a major exercise and hence a major expense.

    My experience? Well - put it this way: the SunFire 6800 turned up a few weeks ago, the 4800 turns up on Wednesday as part of a plan to replace a Tandem mainframe, and they will be sitting next to quite a few racks holding Sun E3500s, E450s, E250s, T1s, HP Netservers, IBM RS/6000s, SGI Origin 2000s, and indeed a MacOS server or twenty. A lot of our comms talk to Stratus mainframes, and the machine room cooling plants are a more pressing problem than CPU speed.

  • by Scooter ( 8281 ) <owen AT annicnova DOT force9 DOT net> on Friday January 25, 2002 @12:21PM (#2900971)
    Hmm, yeah - I'm no expert on mainframe architecture, but from what I've read, it's down to pure I/O width and massive redundancy/hotswap, belt-and-braces style robustness.

    I also agree with you that "desktop" style machines running something like Linux *can* offer similar levels of reliability and performance, but in a completely different way. In a nutshell - instead of one ultra-robust machine with multiple redundant sub-systems, you go for multiple redundant machines (although you could define the cluster as the machine - in which case it's no different... hmm :-/)

    I've successfully applied this pet theory of mine over the last 3 years wherever possible. Even things like Ethernet switches - we used to buy Cisco 550X chassis, which come with 2 of everything important, like PSU, routing module, supervisor module, backbone interfaces and so on, but they cost £35K each for the config we typically buy. Sure, they hardly ever fail, and if a component fails, there's a backup. However - recently we started buying smaller, cheaper switches - but lots of them - typically 3 where 1 would do: total cost about £15K for the same scenario.

    Web servers lend themselves easily to this too (especially if you use Apache and Tomcat (or whatever it's called this week :P)) - we stopped buying huge multi-CPU boxes to handle a specific load, and re-designed our web server clusters to use many smaller (1U) rackable boxes for all tiers of the system, from front-end caches, load balancers, firewalls, and JSP processors to even the database nodes (with shared disk arrays). Need more back-end database? Clone a few more 1U DB servers and connect 'em up! This meant we could stop worrying so much about how much traffic we would be getting to the sites - if it turned out we'd underspecced, we could add some more quite easily.
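    The "clone a few more and connect 'em up" approach boils down to a growable pool behind a dispatcher. A toy round-robin sketch (node names hypothetical) of why adding capacity is cheap in this design - scaling up is just appending another clone:

```python
class Pool:
    """Round-robin dispatcher over a growable pool of identical backends."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._i = 0

    def add(self, node):
        # Scaling up is just appending another clone; no redesign needed.
        self.nodes.append(node)

    def pick(self):
        # Hand out backends in rotation.
        node = self.nodes[self._i % len(self.nodes)]
        self._i += 1
        return node

pool = Pool(["db01", "db02"])
print([pool.pick() for _ in range(4)])  # → ['db01', 'db02', 'db01', 'db02']
pool.add("db03")  # underspecced? clone another 1U box and add it
print([pool.pick() for _ in range(3)])  # → ['db02', 'db03', 'db01']
```

    Real load balancers add health checks and session stickiness on top, but the scaling story is the same: the cluster, not any one box, is the machine.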

    I always thought that IBM continued developing the mainframe to support existing OS/390 customers with large, complicated, mission-critical apps on them. I can see some use for a mainframe running Linux (and I bet there are more Linux-savvy techies out there than z/OS ones - which would help with recruiting admins for the box), but I still feel that the multiple-smaller-boxes-running-Linux solution is a better bet - as it can be any size you want within reason - start off small for dev/testing, and then pile on the hardware for production.
