Linux Software

Experiences of Running Linux on a Mainframe 325

xneilj writes, "LinuxPlanet has an interesting article in which a guy decided to install native Linux on the company mainframe during his lunch hour. It's worth a read if you're wondering why anybody would pay seven figures for a box when you can get a high-end PC for a fraction of that. Author Scott Courtney reckons that if you put Linux on it, 'the mainframe of today may in fact be the best damned Web server you ever saw.' "
This discussion has been archived. No new comments can be posted.

Experiences of Running Linux on a Mainframe

Comments Filter:
  • by Anonymous Coward
    It's quite amusing to see these comments about how mainframes are dead and that Beowulfs are now the 'true way'. I was over at CeBIT in Germany the other day, and saw all sorts of interesting things. Not only IBM's relationship with Linux on more serious hardware (RS/6000 etc.), but also their introduction of Monterey on IA-32, IA-64 and PPC machines gives food for thought... They had one of 6 Itanium machines to be seen on the whole of the CeBIT campus (which comprises 7,400 exhibiting companies), and it was running Monterey.

    What was also very interesting is Unisys' latest data center mainframe - if you can call it that. It had all the crossbar, main storage, I/O and dynamic partitioning stuff that is so typical of mainframes, but its processing nodes off the crossbar (the bit that sits between processing, I/O and memory) were each made up of two Pentium III Xeon processors sharing a bus. This was quite amazing - they had their top-of-the-range 32-node box on show, running Windows 2000 Datacenter Edition and SCO Unix simultaneously on two partitions.

    IBM seem to be going the right way with their server stuff. They're now even producing small single-unit-height rack Netfinities for ISPs and telcos (obviously these support Linux). They're being announced sometime next month... but quite a nice move, considering the fact that Sun have those 1U boxen (Netra or whatever they're called). CeBIT is a good place to go for this kind of stuff - it's usually very cutting edge.
  • No, actually, it's probably a Stratus box. Stratus [stratus.com] These guys make crazy redundant hardware/OS stuff. If you use a credit card, get a prescription from CVS, or buy anything on QVC, it goes through a Stratus box.
  • That was a truly hilarious troll... if I had any moderation points to my name, it would definitely be moderated up!

    I actually did this once, based on originality of the troll, but it got promptly moderated down all the way to -1. It was such a good troll that it actually lived up to the very definition of troll: to fish in; to seek to catch fish from. Here on /., the Usenet sense should prevail, namely: the well-constructed troll is a post that induces lots of newbies and flamers to make themselves look even more clueless than they already do, while subtly conveying to the more savvy and experienced that it is in fact a deliberate troll. If you don't fall for the joke, you get to be in on it. It would seem to me that most /. moderators can't spot the difference between flamebait, offtopic and troll, and sadly CmdrTaco promotes this by tagging troll as something inherently bad. It's not.

  • Windows isn't a toy, I've *never* had any fun with it.. it's an embarrassment. :-)
  • You can always tell the new converts by their inappropriate application of beowulfs to applications where they are highly unsuitable. The reason people still use mainframes is IO throughput. No cluster of PCs can do that.

    Someone here on Slashdot has a sig that's quite appropriate here: "the average slashdotter seems to think the entire internet could be run off a cluster of beige boxes running linux."

    Grow a clue, people. This is almost as bad as the idiots who scream Beowulf when someone asks about high availability and fail-over.
  • by Anonymous Coward
    My dad worked on a system where they had full machine specs (back in the magnetic core memory days).

    He has a great story about them not needing floating point stuff at their site but wanting to drive some peripherals without a compatible interface. To cut things short, they ripped out the floating point board and replaced it with a driver for this peripheral.

    All was fine until they got an external consultant in to help with debugging part of their system software. Apparently it took him several weeks to get his head around the fact that occasionally the application would appear to do a floating point division and then discard the result - of course this was actually flushing a buffer.

    *sigh*

    The closest I ever got was wiring Morse keys into a joystick port - hardware hacking just isn't what it used to be.
  • by Anonymous Coward
    At 262 MIPS, new Trinium-II Series 9 processors deliver considerably more power than IBM's G6 'Turbo Opera' processors, which deliver about 201 MIPS of power per engine. The new Triniums not only deliver more single engine power than the G6s, but they are considerably more scalable than IBM's S/390 servers and offer better SMP ratios to boot. A 12-way IBM G6 server has an aggregate of about 1,614 MIPS of power, whereas the 12-way Trinium-II processor offers 2,441 MIPS of aggregate processing power in a single system image. The full 16-way Trinium-II server has 2,969 MIPS of power, 85% more scalability than the biggest 9672 G6. Trinium can crunch 262 MIPS per processor and support up to 16 CPUs in a single enclosure. Performance on a full system tops 3,000 MIPS, compared with a ceiling of 1,600 MIPS on the 12-processor System/390 Generation 6 line that IBM shipped last year. Now, that's a webserver. Well, not really.
  • The E10K (starfire) isn't a patch on the IBM mainframes that the article talks about. You can only configure domains based around system boards, and you need to use an individual disk to install a separate copy of the OS for each domain. On top of this, I have found that dynamic reconfiguration isn't as reliable as I'd like it to be. OTOH, we are using quite an old starfire now.

    You can get large amounts of storage and redundancy on a starfire. But it still isn't a mainframe, although it is still a damn good high end Unix server.
  • Not being a mainframe type, I enjoyed reading about the S/390 and the VM stuff - also, it was a pretty cool thing to have, what did the one guy say, 41 thousand Linux images running at once on one of these? Gives an idea of the muscle these babies have, huh?
  • Want a new DNS, mail, web, FTP, or whatever server? Don't spend $[foo] for a shiny new PC - just fire up a partition for whatever server you want.

    To paraphrase my reply to the previous comment on my original message: to not spend $[foo] for a shiny new PC, you must first have spent $[foo] * a large number for the box which allows you to "just fire up a partition" to add a VM. :-)

  • There are better-informed people than myself out there, but as I understand it VMware will not cut the mustard in this instance.

    Whoa there! I never said VMware replaced this, but rather that the idea of Linux running under a virtualized system was first (?) demonstrated under VMware.

  • As far as I understand from reading various web sites, the network drivers were written from scratch without modifying any existing code, so IBM doesn't need to release the source for them (and Linus allows binary-only modules).

  • Look, I understand what you're saying, but this isn't a very good argument. It's based on security through obscurity, which most security professionals agree is useless. If there's a buffer overflow, it should be fixed, not ignored because nobody knows what to do with it. I'm not trying to pick a fight here; I certainly understand your argument. I just don't think it's a good one. Let the merits of Linux/390 vs OS/390 stand on their own. If one is actually more secure than the other, fine. But relative obscurity is not a security feature.
  • I think that a mainframe is largely a thing of the past, due to the fact that most of the investment in money and other factors will obsolete the small parts that make up the mainframe.

    Funny, I've been hearing this for ten years now. "The mainframe is dead." "Unix is dead." Sure. It's beyond me why people try to predict the future in this industry. I've yet to see someone who can do it without embarrassing himself horribly.

    Running linux on these things I guess would be a waste because I am sure that some crappy mainframe OS would work a little better.

    What makes you think so? IBM did the port, after all; I would assume they know what they're doing. Besides, if the mainframe OS is crappy (which it isn't), why would it be better?

  • Try again. IBM did not lose money on mainframes last year. It isn't a high-growth market, but it still makes up a large portion of IBM's revenue stream. Maybe from '83 to '93 IBM lost money on mainframes, but last I checked they were shipping mainframes as fast as they could make them (part of that was that Y2K hit mainframes the hardest). Nothing can process the volume of data a mainframe can. (Okay, a Cray can do it, but you're talking mainframe-class prices too.)
  • Next time, read the article. Trust me, you will be a better person for it.

    I assure you that I did read the article in its entirety(sp?) before responding. However, now to your points:

    It would be near perfection having a single piece of hardware, properly partitioned, become your router, your DNS server, multiple independent but linked web/FTP servers, file servers, X11 servers, print servers, and then having a couple of partitions for actual user processes; all of this on a piece of hardware that is 100% hot-swappable and can have a partition rebuilt in less than a minute. And with the massive I/O of a mainframe, to boot.

    Perhaps it would be ideal for some, but personally I prefer separate boxes for a lot of things. Catastrophic failures being one. What good is a hot-swap box if the box is flooded (water) / destroyed by fire / militant admin with an axe, etc.? Before everyone screams "Backups!" (and I agree with them), think of this: if your $million machine is toast, there's nowhere to put the tape, and I would imagine delivery on a shiny new mainframe wouldn't be a next day thing. With commodity hardware I can (at worst) steal my home machine, buy a dozen more at the local outlet, and be back up and running from backups in a matter of hours.

    I also disagree with your "near idealism" about everything in one box. I would certainly not want my router / firewall / X term / webserver / database in one box. However, the idea of the mainframe where every "sub-box" is a totally separate entity is a great idea. But for most cases, the sheer co$t of this compared to a standard cluster and frontend/backend solution must also be addressed. How many standard machines would you need to replace (taking into account the savings you made in configuring/adminning only one box) with an S/390 to make it profitable?

    Let me perhaps modify my opinion a little: the S/390 port could be useful, but is definitely of fringe usefulness when all factors are taken into account.

  • Sounds like the job for a lot of PCs with load balancing, or perhaps some form of a Beowulf cluster for the actual processing. Also, you could have multiple high-speed printers. I really can't believe that you need a mainframe to speed up something that is on the device end, like a printer.


    Beowulf? For billing applications? Do you know how complex parallel programming is?

    And how do you like the idea of maintaining 200 PCs: I'd suggest it's a lot more difficult and expensive than maintaining one mainframe. Especially when most mainframe components are 100% hot-swappable: hell, you can do a microcode update on these things without any downtime.

    Of course you'd have multiple printers, although the last mainframe printer I was attached to did 120 ppm, and had its own monitor.
    --
  • Failover, sustainability and load balancing are key when building a web infrastructure. You are better off with fifty rackmount P III's than one mainframe any day of the week. That is why no one uses or advocates using mainframes for web serving.

    A S/390 has fault tolerance features that the PC world can only dream of. Failover and sustainability are really not a problem in mainframe world, believe me.
    --
  • Nothing beats my DOS-based webserver for pure hittage. I run Caldera OpenDOS (used to be available from www.calderathin.com, but I think now it's lineo.com) with packet drivers for an NE2000, and I run the boa webserver [www.boa.org, it's really cool, a single-process webserver] ported to a DOS TCP/IP stack. For reliability, it's number 1. I pity the fool that tries to DoS my DOS box.

    nWo for life!
    ------------
    a funny comment: 1 karma
    an insightful comment: 1 karma
    a good old-fashioned flame: priceless
  • OpenEdition (one of the newer OS upgrades) has a unix shell and all of the same functionality as you would find on a unix platform.

    ...except for a native character set of ASCII or a superset thereof. :-)

    I don't know to what extent the ASCII/EBCDIC problem gets in the way of using the UNIX environment on OS/390 (e.g., requiring that the code be checked to make sure it doesn't assume that "A" through "Z" form a contiguous set of character codes - and doesn't assume that the characters you get in from a TCP connection from an HTTP client are in the native character set of the C compiler and the file system and...), but if it gets in the way of Just Recompiling, perhaps that was some of the rationale.
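
    To make the contiguity point concrete, here's a quick check in Python using its built-in cp037 codec (EBCDIC US/Canada) - just one example code page; a given OS/390 install may use a different EBCDIC variant:

    # Show why code that assumes 'A'..'Z' are consecutive breaks on EBCDIC.
    def ebcdic(ch):
        """cp037 (EBCDIC) code point for a single character."""
        return ch.encode("cp037")[0]

    for a, b in [("I", "J"), ("R", "S")]:
        print(a, hex(ebcdic(a)), b, hex(ebcdic(b)), "gap:", ebcdic(b) - ebcdic(a))

    # Output: I 0xc9, J 0xd1 (gap 8); R 0xd9, S 0xe2 (gap 9). The alphabet sits
    # in three separate runs, so an ASCII-style "between 'A' and 'Z'" range test
    # silently misclassifies characters when code is just recompiled here.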

  • Alphas are not "PC" quality or speed! There are only two (new) Alphas that I can think of that are less than $5,000 USD.

    What about Microway's [microway.com] Alpha-based systems? $1995 for a 533MHz 21164...

    Your Working Boy,
  • High availability? Sorry, even if one intel box is 90% reliable, 8 are 99.999999% reliable.


    Sorry, but you are figuring your reliability the wrong way around. If you only need 1 or 2 boxes worth of horsepower, then, yes, 8 boxes provides high availability.

    OTOH, if you are attempting to gang multiple Intel boxes together to replace mainframe capacity (ignoring I/O bandwidth in this example), then the more boxes you have, the greater the chance one of them will be down.

    Let's give PC hardware a break, and assume 99% reliability instead of your figure of 90%. If you need 50 Intel boxes to replace a mainframe, at 99% reliability each, then overall reliability (all 50 machines working at the same time) is 0.99^50, or about a 60% chance. In other words, at any one time, there is a 40% chance that one of your boxes is down. If you need at least 50 working boxes, then you'd better have more than 50 boxes linked together.

    It gets worse with more boxes. Do you need 100 boxes working all the time? The odds of keeping 100 boxes running (each with an individual chance of 99%) are 0.99^100, or about 36.6%.

    Meanwhile, that mainframe you were so keen to dump probably had an uptime availability of at least 99.95%.
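
    Here's a quick sketch of that arithmetic in Python, assuming independent failures (the at_least_k_up helper is my own addition, just to show why keeping a few spare boxes changes the picture):

    # Availability arithmetic from the post above: p = chance a single box
    # is up, failures assumed independent.
    from math import comb

    def all_up(n, p):
        """Probability that every one of n boxes is up at the same time."""
        return p ** n

    def at_least_k_up(n, k, p):
        """Probability that at least k of n boxes are up (spares help a lot)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.99
    print(all_up(50, p))            # ~0.605 -> ~40% chance something is down
    print(all_up(100, p))           # ~0.366
    print(at_least_k_up(55, 50, p)) # ~0.99998 -> five spares buy a lot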
  • There's a new reason to start reading comments at 0 and -1...when there are funny trolling posts like this one that slip through the moderators. People see something that is anti-rob anti-slashdot or anti-"linux party line" and they moderate it down.

    But this is a high-quality troll. Has Jesus Christ ever checked out segfault.org? You'd fit in over there.

  • Odd, huh? At the low end, mainframes are not that much more expensive than large Intel SMP servers. At the high end, of course, you're getting a boatload of hardware, but more importantly what you're getting is a discipline, process and operational model refined over the last 40 years. The point of mainframes is not that they cost a fortune, it's that they are built and operated to run and run and run and run.

    When I was full-time in the MVS/ESA world we planned for WEEKS for an IPL (reboot), because they happened maybe twice a year for planned upgrades and never from unplanned events. In fact it was a groundbreaking event when we got a VTAM upgrade that didn't make us regen VTAM following LU/PU changes. And 3745 FEPs and NCP? Forget it, I've never seen one crash. Ever. In 20+ years. ESCON channel I/O is 136 Mbps, or 17 MB/s, out of the box, and there are many approaches to multiply that in the aggregate. Throughput? We put CICS in its own region running as an uninterrupted communications task on a low-end 9021-200 and got 500 tps almost 8 years ago.

    The problem with putting a freenix on a mainframe, though, is the fs. A native freenix fs structure probably can't be implemented well over the native I/O or over the native mainframe file structures, PDSs etc.
  • I know it's bad form to post OT, but I figure this might be seen by someone willing to help me.

    I'm currently an admin with a few years' experience on various flavors of unix-alikes on peecee hardware. I'd love to get some experience on big iron, but I'm not sure how to go about that.

    If anyone is willing to spend a few hours a week mentoring, I'd love the opportunity to learn from someone in the Boston Metro area with experience. You have scut work to be done? I'll do it, if I get to learn. I'm busy, but I can spare a few hours a week. If you're interested and like to teach, please drop me a line (jerkbob_at_pobox_dot_com [mailto]).

    Apologies again about the OT post...

    --
    A host is a host from coast to coast...

  • I am offering mainframe web hosting for $5 per month. Yes, $5 per month.

    Service includes:
    100 Meg of disk space
    Free name registration
    50 Gigabytes transfer / month.

    One time setup fee of US $3,000,000 due at signing.
  • Yes, and there are companies that you can contract with that maintain entire machine rooms full of unused mainframe equipment. If your mainframe is put out of service -- say destroyed in a fire, you drive your backup tapes to the facility, and they configure their mainframe to do your work while IBM rushes in a team to install your new mainframe.

    These are expensive contracts for big time players.
  • Not true. If you have a mainframe running two linux images, and one of those linux images is compromised, there is no way for the person to break out of his virtual machine and get into other virtual machines.

    Why?

    The concept of a "virtual machine" extends not only to the user-mode instructions, but to the system-mode instructions as well.

    The standard end-user operating system, CMS, runs entirely in system mode with memory protection turned off! Or at least it thinks it does... it's actually running in user mode, and the control program emulates all of the privileged instructions on behalf of the virtual machine guest.

    So, even if you found a way to inject executable code into the kernel, and get the kernel to run your code in system mode, the only damage you can do is to your running kernel. You are still kept inside of a black box, and can't interfere with any other virtual machines.
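
    A toy sketch of that trap-and-emulate idea, in Python (this is nothing like VM/370's actual implementation - the "instruction set" and control program here are invented purely for illustration):

    # Toy model: guests really run in user mode; any "privileged" instruction
    # traps to the control program, which applies it only to that guest's own
    # virtual state. Opcodes and state are made up for illustration.
    class Guest:
        def __init__(self, name, program):
            self.name = name
            self.program = program      # list of (opcode, operand) pairs
            self.virtual_io = []        # this guest's private "devices"

    class ControlProgram:
        PRIVILEGED = {"START_IO", "LOAD_PSW"}

        def run(self, guest):
            for opcode, operand in guest.program:
                if opcode in self.PRIVILEGED:
                    # Trap: emulate against the guest's own state only.
                    guest.virtual_io.append((opcode, operand))
                # else: unprivileged work executes directly on the real CPU

    cp = ControlProgram()
    linux1 = Guest("linux1", [("ADD", 1), ("START_IO", "dasd-0200")])
    linux2 = Guest("linux2", [("LOAD_PSW", "exploit"), ("START_IO", "dasd-0200")])
    for g in (linux1, linux2):
        cp.run(g)
        print(g.name, g.virtual_io)
    # Even a compromised guest's privileged instructions only ever land in its
    # own virtual_io - the "black box" described above.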
  • Sniff ... you're making me cry :0)

    We just rolled out our 3090 last week. Cut it up and sold it for scrap. $5,000,000 brand new in 1989. I think we got $3000 for it ... or traded it for a PC, or something. In those 11 years, I think it crashed maybe twice due to a CPU hardware failure.

    I don't miss the 3380s though. They were a different story. Those disk drives were NOT sealed, and we have a crappy, dusty machine room. We used to have a 3380 failure about once a week. Finally, we replaced our 3380 strings with 3390s, then Hitachis, which don't seem to crash (knock on beige colored steel ...)

  • Haha. Obviously you've never seen a mainframe power cord.

    It's about the thickness of your wrist, and uses a locking connector. I believe that it requires a wrench to detach the power cord from the motor-generator.

    Yes ... the motor-generator. The electricity from the power plant is not used to directly power the mainframe. Instead, the power is used to drive a motor, which turns a flywheel, which drives a power generator, which provides power to the mainframe.

    If there is a power spike on the lines, the power spike is absorbed by the momentum of the physically rotating flywheel, thus protecting the mainframe. Beats the hell out of a "surge suppressor" any day.

    This is standard equipment on a 3090.

    Oh yes, and we have direct connections to two Commonwealth Edison power substations. If there is a blackout on our primary substation, a HUGE, frightening looking switch is automatically thrown, and the mainframe power is switched over to the other substation ... theoretically without killing the mainframe. Theory is very nice ...

  • IBM came out with a rushed product called AIX/370. We ordered a copy to demo it, and it sucked. It was a dog.

    This Linux port is actually the first credible Unix implementation for the 390. (i.e. something that systems programmers are excited about, as opposed to disgusted by.)
  • Next thing you know, they'll be porting it to those old DEC boxes that all the universities have laying around.

    Been there, done it ;-)

    Most of them run NetBSD with ease. I have actually used some of them as file servers, and despite their pathetic CPU power (around a 286-386) they stuff a 10Mb Ethernet to the point of congestion (unfortunately there are no higher-speed interfaces for them).

    They suck as web servers, DNS or whatever else where latency and execution speed are crucial, but they make damn good fileservers after you replace the hard drives with recent SCSI ones. And after such surgery they just work. Boot them once and forget them forever.

  • It's not as good as that 386 running in my basement...it gets 1000s of hits a day and never misses a beat.

    (hits with a baseball bat that is.)
  • I do not believe that mainframes will die anytime soon. I actually hope and predict that they will be around for a long time.
    I have said for 18 years now that we must use the right technology for each system. As much as that may hurt some /.-ers, the right thing is not always Linux. (Don't get me wrong, I am a registered Linux user. Linux Counter [li.org])
    Mainframes and their OSes are extremely good at processing enormous volumes of data and transactions. They are really bad at interactive processing and should not do that. Linux is one flavour of the very good Unix system that is excellent for servers and power workstations. With the introduction of Linux, Unix will be accessible to the "common user"; there is a bit left, but it will be there Real Soon Now (tm).
    I have seen too many projects become disasters because of using the wrong technologies. Let's stop the bickering and try to collectively use computer technology wisely.
  • Look at the Lucent WaveLAN IEEE PCMCIA drivers. You can get one that is GPL (in source) but lacks support for all the features of the card (gold version with 128-bit encryption running at the full 11 Mbps). Or you can get one that is binary, with a GPL'd source wrapper acting as a go-between for the proprietary product and the GPL'd OS.

    The latter is not bundled, AFAIK, with any distribution, but must be obtained directly from Lucent's website. But the latter driver does support all the features of the card. It seems that here Lucent decided to cover their arses by writing a wrapper to their proprietary bit that was source GPL, talking to pcmcia_cs and the kernel.

    So, which is right? Just including a binary, or do you need to write a wrapper?
  • I found this discussion waaay too late for anybody to see this, but I'll post it anyway for my own satisfaction. :)

    Could mainframe linux enable mainframe NT?

    Install linux into a mainframe VM,
    then install VMWare under the linux host,
    then install NT under that.

    It's sufficiently recursive to make me smile if nothing else. And we haven't even talked about squeezing Transmeta codemorphing in there somewhere. {chuckle}

  • UNIX on mainframes isn't new. IBM has had AIX/390 out for a long time, and Amdahl had their UTS (UNIX Time Sharing) for their (IBM plug compatible) mainframes way back in the mid 80's.

    As far as old DEC boxes, I was using UNIX (specifically 4.2 and 4.3 BSD) on VAXes back in the mid 80's. DEC hardware was the original host for UNIX (the PDP-7), and was the most popular hardware for UNIX in the 70's (PDP-11 family). I know for sure that NetBSD still has support for at least some of the VAX hardware, and you can get licenses from SCO (the current USL owner) for free to run older versions of UNIX (V6/V7) on PDP-11's. Once you get a V7 license you can get a copy of the 2.11 BSD distributions from I believe an organization called the PDP-11 preservation society.

  • Read the FAQ. Once a person has built up a certain level of karma, every post they make automatically has a +1 bonus on it unless they specifically check a box against it when they post. Likewise, a person who accumulates a certain level of negative karma automatically gets a -1 penalty on every post (which they can't opt out of). Moderation seems to work. Most of the annoying posts (first posters and trolls) tend to get moderated down quickly, and only occasionally does a particularly good post get moderated above 2. I very rarely see any worthwhile posts moderated to -1, so you can fairly easily read with your threshold at 0 and not miss anything good. Sometimes I see good, but not outstanding posts that are at zero (anonymous cowards do occasionally have something worthwhile to say), so I wouldn't personally read with my threshold set above 0. In general, I am pretty thick skinned, so I usually read with my threshold set at -1.
  • Finally, here's a possible application from out on the fringes. Suppose you are a Web-hosting provider and you want to give your clients as much flexibility as you can without jeopardizing your own systems' security. Instead of buying a huge farm of PCs, you buy one S/390 mainframe with lots of RAM and the VM operating system. Now each client company gets their own virtual Linux machine with full root privileges. They can start and stop their Web servers, upgrade software, test new code, or whatever, without risk to your infrastructure.

    OK, all you entrepreneurs, are you listening? I would LOVE to host in an environment like this,
    assuming it could be done for a price that's competitive with colocation/dedicated box services. From the numbers I saw in the article, it seems (at least as far as HW goes) you might be able to be very competitive indeed.

    Call me when you want to beta-test. Or even sell.

  • 1. Losing the opportunity to make millions is not the same thing as losing millions. And even if it were, you're off by about three orders of magnitude - think billions.

    2. Last time I checked, IBM's revenue was larger than Dell's, Gateway's, Compaq's, and HP's. Combined. Tell me again about IBM's economic failure?
  • One can really tell that you didn't read the article. I mean.. you can REALLY tell.... Had you read the article, you would realize what rubbish you are talking. Who said they were running it on the bare metal on the mainframe? Do you know what a mainframe really is? Read the article.. it's VERY good..
  • Hmm.
    A 100Base network, though, can't actually do 100 Mbps between hosts. 100 Mbps is simply the number of symbols that can be put on the channel.
    With Ethernet overhead and other protocol overhead, including handshaking, you'll find the maximum throughput for something using TCP is around 85 Mbps. And that's if only two hosts are talking.
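
    For a rough feel for where the per-frame overhead goes, here's a back-of-the-envelope sketch in Python (header sizes are the standard Ethernet/IPv4/TCP ones with no options; ACKs, handshaking and collisions push the real single-stream number down further, toward the figure above):

    # Best-case TCP payload rate over 100 Mbps Ethernet, counting only the
    # fixed per-frame overhead (no ACK traffic, retransmits or collisions).
    LINE_RATE_MBPS = 100
    MTU = 1500                                   # IP payload bytes per frame
    PREAMBLE, ETH_HDR, FCS, IFG = 8, 14, 4, 12   # bytes on the wire per frame
    IP_HDR, TCP_HDR = 20, 20                     # headers without options

    wire_bytes = PREAMBLE + ETH_HDR + MTU + FCS + IFG
    payload_bytes = MTU - IP_HDR - TCP_HDR

    print(LINE_RATE_MBPS * payload_bytes / wire_bytes)   # ~94.9 Mbps, best case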

  • Let's see: back in 1980 they predicted that the mainframe would be gone in 10 years, because of the mini and super-mini computers. What happened? The mainframes got more powerful and continued to sell.

    Then in 1990, they predicted that the mainframe would be gone in 10 years, this time because of PCs and servers. Result: IBM is selling more mainframes than ever before.

    Now you are predicting the same thing. What you are doing is the exact same thing that previous people did; you are ignoring the ability of the mainframe to improve just as much as PCs. The mainframe is ALWAYS going to be around, because there will always be a need for massive amounts of computational ability and I/O. The PC may improve, but the mainframe will also.

    When you buy a PC for 2-3 thousand dollars, you are getting pretty low end stuff, compared to a mainframe which costs $100,000 on up.

    For money you get honey, and what you get with a mainframe is a machine that is designed to work and keep on working, no matter what.
  • If Seymour Cray designed it, it's a supercomputer.

    There is some overlap: classic supercomputers, like mainframes, have very high-performance I/O and memory systems. Both tend to have huge amounts of RAM and disk space. The word size on a supercomputer is usually equivalent to the size of a double-precision floating point variable. The IBM 360/370/390 is a 32-bit architecture, although its main memory address space is much larger than 32 bits. The mainframe has a hierarchical memory system (L1 cache, L2 cache, main memory, bulk memory), where the classic supercomputer does not have a cache. The supercomputer streams operand vectors from memory to vector processing units in the CPU, and the results are streamed back to memory.

  • But if 99,900 of them are script kiddies who can only use known exploits, they aren't hard to fight off. The 100 knowledgeable and motivated ones are the problem either way.

    Script kiddies are a problem, but mainly to the 95% of sites that aren't maintained; the ones that have admins who read Bugtraq, etc., are fine.

    If you're on the ball as an admin, you shouldn't have many script kiddy problems.
  • It got a 4, which I guess it deserves, but posting the same thing twice is a bit weak.

    The original is here - in the EBay story [slashdot.org], BTW.

  • Damnit! I knew reading Slashdot all day every day was a sin, but I thought I'd just lose my job.. not my eternal soul!

  • IBM only lost business at the low end. Many machines in the old 360 line were basically minicomputers. PC technology overwhelmed what was originally considered a "minicomputer" decades ago...at the same time the "minicomputer" products got more powerful.

    Now the desktop/small server products are tremendously powerful (200-900 MHz is a ridiculous amount of processor power) and are often underused because they are so cheap (hello, SETI@Home!). The "minicomputer" devices have taken over many of the former "mainframe" applications, and many actually use parallel microcomputer designs (e.g., Sequent, Stratus). The "mainframe" has fairly fast processors and is surrounded with very fast I/O devices and I/O processors. When dealing with the huge amounts of data which global companies produce and manipulate, parallelizing the data handling is often more complicated than using big iron.

    Remember, IBM also sold a lot of IBM PCs. They lost a lot of PC business to competitors, particularly when they tried to require MicroChannel use. Most PCs just don't use so much data that they needed MicroChannel, and the additional licensing expenses just weren't worth the minor benefits. During the same time period there was a gap in their minicomputers with their old-tech S/36 before the AS/400 was developed. The big iron is flashy, but a lot more people needed the smaller machines. And there was a lot more competition once the microcomputer technology provided engineers with fast processor components for competing designs.

  • Wrongo! Mainframe = "Serious Business Machine". Nothing will get an MIS director's attention like saying "See, Linux can run both on your PC and on that million dollar IBM mainframe that runs your core business. Windows can't."

    Evangelists, please remember this the next time Microsoft gets its panties twisted about "scalability".

    Microsoft: "Our OS scales from a hatchback to an SUV"

    Linux: "Our OS scales from a motorcycle to a freight train"

  • The problem with a mainframe-specific Unix is that, since the demand is low, so is the R&D budget. Nobody is going to pay to make an {insert favorite mainframe} Unix as good as a Solaris; Solaris has many more development hours because Sun can sell mucho boxen. There's just no money in mainframe Unix.

    By the same token, Linux will blow a mainframe also-ran Unix out of the water.

  • Hmmm...fat pipes...sounds useful, these days of streaming media.

    Professionally doing the e-commerce thing, I am constantly running into the same problems: bandwidth and performance. It's not enough that the program does X, but that it has to do so much X in so little time on our hardware.

    This may be an interesting development. A lot of these new outfits can get capital for the asking but not developers. Thus, mainframes are easy to pick up, but mainframe developers are almost impossible to hire. An S/390 Linux port allows you to use your existing Unix staff, with some mainframe sysadmins and minimal retraining, to use high-bandwidth hardware.

    obSlashdot: What happens if you make a Beowulf cluster of mainframes? ;^>

  • I'd second you, but I'm not convinced that this guy isn't an honest-to-goodness professional troll. Literally. If I was running a site like slashdot, I'd make damned sure that there was some kind of argument/flamewar going on all the time. No offense guys.

    On a side note, I wonder about the "submitted" stories sometimes. Often you see that So-and-So writes "blah blah blah," where blah blah blah has a consistently similar style, often correlated to the individual who stuck the story up on Slashdot. But maybe it's just me.
  • Errr, what time frame are you talking about? No sane person I know would tell you that mainframes are the be-all and end-all of computing. No computer is perfectly suited for all tasks. When was the last time you saw a tape robot hanging off a PC that was acting as something other than a controller? When did you hear about PCs churning out billing statements for a million customers twice a month? (And finishing the actual print job in a day?) It's just a matter of tuning your platform to the task, and mainframes are seriously tuned for big volume.
  • To step across the line for a moment: I think the point was not whether you have a failure; it's that if you have 8-way independent redundancy, you still have availability even if your boxes individually fall over at a given rate. So it's not a matter of math, it's a matter of knowing what was meant in the original post. :)
  • Yes, I got the same impression, that they wrote it from scratch.

    I'm not sure this matters though. Section 2b of GPL V2 states:
    "You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License."

    Notice the phrase "that in whole or in part contains".

    Naturally, the driver itself is probably not derivative of other drivers, except for trivial similarities. However, the distribution, and more to the point the kernel at runtime, is a work that includes GPL'd components.

    Allowing clean-room developed code to be linked (even at runtime) to GPL'd code would be a huge loophole in the GPL. It would allow you both to create proprietary derived works and to expropriate components of GPL'd works for incorporation in closed products, simply by partitioning the code into sections by licensing.

    I believe GPL 2 envisions this:

    "If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works."

    The question is, can the network driver "be reasonably considered independent and separate"? (And to a lesser degree, is the network driver distributed as a separate work?)

    Any thoughts?
  • Failover, sustainability and load balancing are key when building a web infrastructure.

    Sure, but do you roll your own, or do you buy it nicely packaged up in a cabinet with complete system wide support from the vendor?
  • I'm surprised that nobody else mentioned this, but the article says that the network drivers for the IBM port are closed source (but apparently free as in beer).

    Isn't redistributing a kernel with a non-free module a violation of section 2b of the GPL (the viral clause)?
  • Actually, if I wanted to make an incredibly powerful and secure Web server, I'd run it on regular OS390, not Linux/390. Here's why:

    The latest OS390 versions come with their own Web server, which is a variant of Apache (the variant part comes because of a difference in the way mainframes handle processes). So that part is already there.

    Now, if you run the real Apache on Linux on your S/390, those are packages whose strengths and weaknesses thousands and thousands of people know. Only a relative handful of people in the world could hack an OS390 system.

    How many of you, if you discovered a buffer overflow situation that would let you enter a command string on such a system, would know what to do?

    Garg
  • Um.... sounds like you don't have a clue what you are talking about, and you probably didn't read the article either. Go check out information about transaction processing, reliability, and mainframe I/O structure... then come back and apologize to all of the nice people you have offended with your total ignorance... Beowulf clusters are great for lots of things, but PCs have *terrible* I/O throughput. It's a Big Deal(TM) for lots of things, and I think you should read some more before you run your mouth off.
  • Then you're doing something wrong... a 10-node PIII-550 MHz system can stomp all over any RS/6000 out there. I administer a 16-node PIII-500 Beowulf and we routinely get FLOPS competitive with an Origin 2000 (for 1/30th of the price).

    It all depends on: a) your application, b) your choice of networking hardware and software. FLOPS-intensive software will fare *very* well on a Beowulf. I/O-intensive stuff (like retrieving huge amounts of data off an RDBMS) would most likely be better off on a mainframe.

    Check out this site [beowulf-underground.org] and tune up your cluster...

    engineers never lie; we just approximate the truth.
  • by chazR ( 41002 )
    Your comment is a long way over the reality horizon and still accelerating.

    Intel hardware has a long way to go before it can touch big iron for sheer I/O throughput, which is what a mainframe customer wants. 16-way is barely the right order of magnitude.

    Win2K is so hideously unstable that it has no place in the *real* machine room. MS claim 99.95% reliability. Well, that sucks. Given three minutes for a reboot, that's roughly one crash every four days. If I have 5,000 users logged on, I can't afford that sort of downtime. I need genuine 24*7 availability. That means a system that can cope with 5,000 users, a number of whom are developers, hacking away processing tens of millions of records a day, and never crash. Not once. Given an operational life of maybe ten years, I won't tolerate a single unplanned outage. Sorry, guys, but the only boxes that do that are mainframes.
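
    The back-of-the-envelope for that claim, assuming every unplanned outage costs a fixed three-minute reboot:

    # How often can a box crash and still claim 99.95% availability,
    # if each crash costs a three-minute reboot?
    availability = 0.9995
    reboot_minutes = 3

    downtime_fraction = 1 - availability              # 0.0005
    minutes_between_crashes = reboot_minutes / downtime_fraction
    print(minutes_between_crashes / (60 * 24))        # ~4.2 days per crash
    print(downtime_fraction * 365 * 24)               # ~4.4 hours of downtime/year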

    Having said that, I'd quite like to run Linux in a VM.
  • You're trolling right?

    Name me one bank running their systems on a cluster of PC's!

    I'm an MVS contractor, making a nice living from working on these "neolithic dinosaurs". Last year I worked for HSBC - the Hong Kong Shanghai Banking Corp. They are known as Midland over here in the UK. I know what banks run their stuff on, and it sure as hell isn't a Beowulf cluster!

    No flames intended but you're talking a whole crock of shit here!

  • ... Even if it is now owned by the Japanese.

    Talk about IBM clones... B-) It's hard to get more accurate than one done in the company started by the guy that used to design 'em for IBM.
  • I read the other day that Charles Schwab has installed 6 mainframes in the past year and that they are taking new ones as fast as the vendors can deliver them.

    There are still some places where the dirt cheap PC style boxes have not infiltrated. As many people have pointed out, mainframes can handle much higher transaction volumes and have much better uptime figures than any other kind of system.

    Also, remember that many of the outfits that use mainframes have huge amounts of legacy code in production that might not port so easily to a Beowulf cluster, or anything else.

    The costs are huge, but you need to put them in perspective. I was talking with one of the local Sun techs in my area, and he had recently been out to work on a Starfire that a client kept as a hot spare. This is a $2 million machine and it's just the backup! The tech thought it was overkill, but the client, a major brokerage house, does $4 million an hour in transactions. The spare machine would pay for itself by preventing only 30 minutes of downtime.

    For the organizations that really need them, the big iron is still worth every penny.
  • I can't believe how this post has been moderated up to 4 (at time of writing) when the poster obviously hasn't even read the article. Read it. It's very informative and interesting, especially if you don't know what modern mainframes are capable of.

    HH
  • Don't know why you can't read the article (slashdotted, maybe?), but your general impressions are completely wrong. With the S/390 mainframe, multiple operating systems can be run simultaneously. You can even debug unstable, crashing OSes without taking the machine down. Very cool.

    And are IBM 'brain dead' for porting Linux to their own mainframes? I don't think so.

    HH

  • Perhaps it's the cool factor of walking into a room of machines instead of a room of machine

    That's so odd, I would think the cool factor of seeing the room of mainframe would be exponentially greater than a room of PC-based machines (even 1GHz Athlons w/SCSI...). A "room of machine", that's funny, like my neighbour trying to borrow a "cup of sofa" from me. :)

    something inside me says not to trust a single box

    Hear, hear! With any other configuration, having everything on one box would send a spike of fear into my spine. But with this configuration, something says to me: "Have no worries." Must be those thrice-damned subliminal messages again. Still, mainframes are built well, and if you have ever used hot-swappable components, you will understand the utter joy of them.
  • I would imagine delivery on a shiny new mainframe wouldn't be a next day thing

    Well, actually, it (probably still) is. Some years ago, while working in a mainframe computing center, I was told that IBM had actually chartered a Boeing to fly a whole mainframe over the Atlantic, because they had no adequate hardware in Europe and they still needed to honor their 24-hour service contract.

    At that time (more than 10 years ago) the company I worked for estimated that they would go bankrupt if they lost computing service for 72 hours. I would guess that time has decreased by now. If you are that dependent on computers, you really care about the quality and speed of your service contract.

  • Errr, what time frame are you talking about? No sane person I know would tell you that mainframes are the be-all and end-all of computing. No computer is perfectly suited for all tasks. When was the last time you saw a tape robot hanging off a PC that was acting as something other than a controller? When did you hear about PCs churning out billing statements for a million customers twice a month? (And finishing the actual print job in a day?) It's just a matter of tuning your platform to the task, and mainframes are seriously tuned for big volume.


    Sounds like the job for a lot of PCs with load balancing, or perhaps some form of a Beowulf cluster for the actual processing. Also, you could have multiple high-speed printers. I really can't believe that you need a mainframe to speed up something that is on the device end, like a printer.
  • There are a lot of advantages of large systems over a group of PCs. Chief among them is I/O bandwidth. A web server, unless it's running some really complex Perl scripts or is also running the database back end (not the best idea due to security, BTW...), is not usually CPU bound. Basically, all the CPU is doing is saying "send this file to this IP address." Not a massive amount of processing power needed there. So your limiting factor becomes your I/O bus. The I/O on a PC is pathetic compared to what you can get on a mainframe-class system.

    Granted, you can tie a bunch of systems together and get that much I/O, but then you have a bunch of systems, which leads me to my next point - sysadmining one system is a lot easier than sysadmining 8 systems working together. Then, on top of that, you have the load-balancing software and firewall you use to make them all look like one machine. Now, instead of your single point of failure being the machine, it's the firewall. By the time you buy all that hardware to get up to the reliability of a mainframe, you may end up spending almost as much as buying a mainframe.

    One last point in favor of mainframes is one very few people think about until it's too late - ease of service. Hardware, especially hardware that's being run at full capacity 24/7, breaks. When it breaks, it's nice if you can replace it easily. This is where PCs really suck and mainframes are quite nice - they are designed to be serviced, often while running. Your Ethernet board died? No problem - swap in a new one without rebooting. (I assume IBM can do that? I'm sure someone will flame me with a correction if necessary. :) Redundant power supplies are also quite nice.

    This is not to say that a mainframe is right for *every* webserver, but when you lose millions a day by going down, even a small increase in reliability or ease of service may well be worth the extra cash up front!

  • Actually very few web sites have traffic greater than 100 Mbps.

    Well, the one I work for does. As a hint, it was "attacked" two weeks ago. The traffic this site gets is what I base my comments on.

  • A S/390 has fault tolerance features that the PC world can only dream of.

    It's a lot harder to yank out two hundred power cords than one.

  • Check out Schwab's web site. It is served off of a *cluster* of mainframes.

    That's certainly for transactions, not serving HTML.

  • Failover, sustainability and load balancing are key when building a web infrastructure. You are better off with fifty rackmount P III's than one mainframe any day of the week. That is why no one uses or advocates using mainframes for web serving.
  • The reason web serving is typically spread over many machines is because no affordable hardware exists to handle all of the network I/O for any moderately busy website through one connection.

    While a mainframe may be able to handle all of the CPU needs for eBay, no single network card can handle all of the traffic in an affordable fashion. And even if one did, eBay would be silly to put all of their eggs in one basket, regardless of its power.

  • One of the problems that server farms can face is the issue of support. If you install 8 intel boxes you must support 8 intel boxes.

    A trained chimp can support an Intel box. If it's too much hassle - go with a colocation service that does it for you. Still cheaper than a mainframe.

    The point with webserving is granularity. Think of webservers as grains of sand - you add or subtract them at will to increase performance or to fix problems. In this context a mainframe is a large rock, which doesn't work well in webserving environments.

    Web serving is fairly light on processors - it's heavy on network connectivity and disk I/O, with an emphasis on 100% uptime. With these requirements, it doesn't make sense to use mainframes, and I don't think anyone would reasonably advocate using them in this capacity.

    Take a tour of a colocation facility like GlobalCenter - it will put things in perspective.

  • What happens if you make a Beowulf cluster of mainframes? ;^>

    Actually, you can virtualize a whole bunch of Linux systems on one mainframe and Beowulf them.

    Talk about ridiculous extremes.
    Anomalous: inconsistent with or deviating from what is usual, normal, or expected
  • I'm currently on a Sun Microsystems E10000 (E10K) course... studying Sun's answer to the mainframe. It does basically all that the author likes, as well as being Unix, so you can use shell tools, etc., and be in the Unix environment we all like so much.

    Okay, so it costs UKP 1,000,000+, but so what?! Solaris is basically no cost, Linux is free.
    The power's there, plus the flexibility. Also, if you know Linux, you know Solaris.
  • Uh, yeah. I have. But here it is from the horse's mouth: Robert W. Edwards, president of Risks Ltd., a risk-management consulting firm in Keedysville, Maryland, calls computer hardware theft "endemic." "There's a constant hemorrhaging from big businesses," he notes. "And it's not just PCs that [thieves are] stealing. They'll steal disk drives, even mainframes." That quote is from: This CFOnet.com article [google.com]
  • by Anonymous Coward on Monday February 28, 2000 @09:13AM (#1240623)
    Anyone ever hear of a 390 getting jacked? I didn't think so.
  • by davie ( 191 ) on Monday February 28, 2000 @08:08AM (#1240624) Journal

    Read the whole article, or read it again. On S/390, Linux is a tool that runs in a partition under the native OS. The author isn't advocating ditching VM; he's talking about adding Linux to the S/390's toolset. Need a webserver, DNS, etc.? Set up a partition, allocate DASD, copy a new Linux image over, set it up.

    As the article states, $400 a seat for mainframe power and reliability is pretty cheap, and you're soon going to see IBM offering end-to-end Linux server solutions, featuring S/390, RS/6000s, PC Servers, and probably AS/400 in the near future (I know there's a third-party port in the works, but IBM will probably beat them to market). IBM shops will be able to come close to Write Once, Run Everywhere, and in the case of S/390, on a machine that almost never falls down and can handle an insane volume of I/O. Don't think that won't appeal to the bean-counters.

  • by IGnatius T Foobar ( 4328 ) on Monday February 28, 2000 @08:27AM (#1240625) Homepage Journal

    Where the mainframe excels is in its I/O channel. You can, for example, move gigabytes of data from one disk to another (or from disk to tape, or whatever) generating only one CPU interrupt. The channel is intelligent enough to do this kind of thing. That's why people tend to be surprised when they find out that the mainframe that just moved a few hundred million records around without breaking a sweat only has 64 MB of memory and a not-so-hot CPU.

    Getting Linux onto the mainframe is a very important step, but it's only part of the picture. Facilities then need to be introduced to take advantage of the advanced I/O facilities that are available.

    The Intel-PC world's attempt to imitate a mainframe's channel is Intelligent Input/Output (I2O). It does something similar, with intelligent peripherals designed to take the processing load off the main CPU. Mainframes have had this for decades. The Commodore 64 did, as well (I believe the article touched on this, actually). Now the PC world is finally catching up.

    Alan Cox is working on getting I2O support worked into the Linux kernel. If the kernel interfaces for I2O are done with a sufficient level of abstraction, it is entirely possible that IBM could adapt them to use on an S/390 box as well.
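
    As a toy illustration of why the channel approach matters, here's a sketch in Python (the "channel program" below is invented for illustration; real S/390 channel programs are chains of CCWs, and I2O differs in detail):

    # Toy comparison of host-CPU interrupt load: block-at-a-time I/O driven by
    # the main CPU vs. handing the whole transfer to a channel/I2O processor.
    BLOCK = 4096                       # bytes moved per interrupt, illustrative

    def cpu_driven_interrupts(total_bytes):
        """Main CPU takes an interrupt for every block it shuffles."""
        return (total_bytes + BLOCK - 1) // BLOCK

    def channel_interrupts(total_bytes):
        """Channel runs the whole transfer; CPU hears about it once, at the end."""
        return 1

    gigabyte = 1 << 30
    print(cpu_driven_interrupts(gigabyte))   # 262144 interrupts
    print(channel_interrupts(gigabyte))      # 1 interrupt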



    --
  • KwsNI dun said:

    Q: A mainframe is NOT a thing of the past!!! What do you think manages your bank account, possibly your salary, certainly your IRS account and probably your pension fund??? A: It's called a Beowulf Cluster. 10 PIII-550's. Total cost $30,000, as opposed to a 6-figure mainframe. The IRS though, well, they're just a wasteful bureaucracy.

    Actually, a Beowulf cluster probably wouldn't be the right tool for the job there (it'd be rather akin to using a butter-knife to tighten a flathead slotted screw--it'd work, but there are better tools for the job).

    Beowulfs are very good if you need to do processing that can be done very well in parallel such as some formulas. They do not do so well if you a) have to do a lot of parallel work quickly (for that, a supercomputer like a Cray is probably better suited) or b) especially if you have to move and/or work with a massive amount of data (which is what mainframes are used for mostly and which is a job they do very well).

    Besides, the entire purpose of Linux on mainframe boxen isn't to link a mess of S/390s into a Beowulf cluster (which, while it would make a very large box, would be slower than a comparable Cray or even an SGI Challenge series). The major purpose of porting Linux to the S/390 is twofold. For "bare metal" runs it's basically an alternative to AIX, which, while reliable, is a horrid bastard child of *nix and the entire IBM REXX mess--for a fair amount of stuff that compiles "out of the box" on most *nixes, you have to add REXX scripts in AIX. The port for "virtual machines" on S/390 (the one the article largely concentrates on, by the way) is meant mostly as a more user-friendly alternative--at least to us folks used to *nixes and OSes invented in the last 25 years or so--to the traditional IBM VM OSes, most notably MVS, VM/CMS, and VM/ESA. (I've had experience with VM/CMS, by the way, and trust me when I say that *nix is far friendlier. ;) It can also be used to add capabilities that have not existed for IBM/Amdahl "Big Iron" so far. X-terms, for example: other than VAXen and a few abortive NT ports, there isn't such an animal as a GUI for mainframes--it's all CLI terminals. Stuff like PPP accounts can be set up too, which is good for unis that may still have some old 3090 VM/CMS boxen about; IBM isn't officially supporting most OSes for the 3090s anymore, VM/CMS has a real dearth of Internet apps, and the daemons that DO exist have some serious security flaws (most notably IBM VM SMTP, which can be anonymously relay-raped in its default install, and the default mail client, which had serious problems in the early '80s with worms being transmitted).

    FWIW--there are a lot of places still using Big Iron, and not necessarily because they have a ton of legacy Fortran or COBOL (yuck) code that has been around since 1965. :) Insurance companies and banks, for example, commonly still use Big Iron because, well, Big Iron is about the only thing that won't choke on the massive amounts of data it must deal with reliably on a regular basis. (Supercomputers might be able to deal with it, but storage is iffy, supercomputers tend to be more temperamental than most Big Iron, and the amount of supercomputer needed tends to be quite a bit more than the average amount of Big Iron costs. Beowulfs are good for small- to medium-sized applications, but would barf on the amount of info needed and/or the nodes required (there is a limit to how many boxen may be linked in a Beowulf cluster; part of this is due to transmission speeds, but part of it is due to limitations of the Beowulf code itself); also, Beowulf clusters can fall down go boom if one node falls down goes boom.)

    To give an example of, say, the average group that might use and NEED Big Iron--how about the US Census Bureau. They have to put in something around 250-270 million records in their databases every ten years; in addition to that, they have to keep databases (for comparison and updates, as well as to tabulate trends across decades) of anywhere from 100 million-250 million records--one for nearly every person in the country.

    Yes, this is actually stored by computer--I happen to live nearish one of the four big processing centres for the Census Bureau in the US. Thousands of people basically key everything in from the census forms on terminals... and it all ultimately gets stored on Big Iron, on media totalling anywhere from terabytes to possibly even exabytes of data.

    Damn near everything SHORT of a Really Huge Big Iron system is going to choke, hard, on this amount of info. (Especially so when you consider that a fair amount of older data is probably stored on tape or removable big-platter hard disks--usually using legacy storage systems which may not even be commercially sold anymore--which are being converted to more modern storage media capable of handling terabytes of data at a time.) I like Linux as much as anyone (and freely admit to being a Slackware and SuSE bigot ;) but a Beowulf cluster just isn't going to handle that. Not even if you built it from Alphas. Not even if you built it from bloody Playstation 2s. :)

    You mentioned the IRS--well, they've got storage and data-handling requirements comparable to the Census Bureau's, only worse. :) They have to maintain upwards of 100 million records which must be entered and updated yearly (just from people who fill out tax returns)...plus records from W2 forms filled out by the employers of the folks associated with those 100 million records...plus trends must be run on ALL these records, and on previously archived records which may stretch back as far as 30-35 years or so (again, often on legacy systems and data--good old removable disk packs and 9-track tapes) in order to flag folks (who might be remiss on paying their taxes) for audits...we're talking literally hundreds of terabytes or MORE of data that the IRS must go through on a yearly basis! It's actually kind of impressive that their systems don't barf more than they do when one thinks of the sheer amounts of data they go through...
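
    (Purely as a back-of-envelope illustration of how record counts turn into raw storage--the per-record sizes below are my own guesses, not figures from this post or from either agency:)

        # Rough sizing sketch only -- per-record sizes are assumptions for illustration.
        def dataset_size_gb(records, bytes_per_record):
            return records * bytes_per_record / 1e9

        census_gb = dataset_size_gb(270_000_000, 4 * 1024)   # one decennial census at ~4 KB/record (guess)
        irs_gb = dataset_size_gb(100_000_000, 8 * 1024)      # one tax year of returns at ~8 KB/record (guess)

        print(f"one census at 4 KB/record:  ~{census_gb:,.0f} GB")
        print(f"one IRS year at 8 KB/record: ~{irs_gb:,.0f} GB")
        # Multiply by decades of archives, W-2 cross-references, and images of the
        # paper forms themselves, and the working set grows by orders of magnitude.

    The point isn't the exact figure--it's that the volume is driven by record count times history, not by how fast any single CPU happens to be.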

  • by tonton ( 27741 ) on Monday February 28, 2000 @07:55AM (#1240627)
    A mainframe is NOT a thing of the past!!! What do you think manages your bank account, possibly your salary, certainly your IRS account, and probably your pension fund??? Mainframes, that's what.

    Most of the concepts that we enjoy in Linux--high reliability, scalability, etc.--have been present on mainframes for **ages**. I am not flaming, but you have to give each one its due...

    What really sets these machines apart (in my personal experience of more than 25 years) are two things: reliability and high throughput. These machines were designed from the ground up to serve **hundreds** (sometimes thousands) of simultaneous **active** users. You have the hardware facilities to serve literally thousands of concurrent I/Os (which explains the price, of course).

    Now about Linux: I think that's a good move on IBM's part (who sponsored the port). As I see it, their long-range view would be one OS (and one set of applications tailored to this OS) on every possible hardware platform. Then application ports would truly be re-compilations only, which profits IBM too (as a software developer, and also in the support department)...

    If D. Miller could port Linux to high-end Sun machines with 15 processors, why not on an IBM mainframe?
  • I assure you that I did read the article in its entirety(sp?) before responding

    Then I apologize for accusing you of ignorance.

    What good is a hot swap box if the box is flooded (water) / destroyed by fire / millitant admin with an axe, etc

    A point was brought up (either by the author OR a /.er) that you should take proper security (rooms/locks/guards?/fire control/waterproofing/etc.) into account when spending the money to purchase a six-figure piece of equipment. I mean, that's just common sense (that I learned from the Gospel according to O'Reilly). I can't see the difference between losing one machine in a building to fire/flood and losing 20 machines in the same building for the same reason.

    I would imagine delivery on a shiny new mainframe wouldn't be a next day thing

    Depends. I don't have enough information about purchasing mainframes to know if IBM would send out another expensive mainframe while the insurance/warranty/guarantee claims get tossed around. But I would hope that IBM would send one as soon as economically feasible.

    With commodity hardware I can (at worst) steal my home machine, buy a dozen more at your local outlet and be back up and running from backups in a matter of hours.

    You're kidding, right? Boy, I must be configuring something wrong; when I rebuild a system, it generally takes me a very long time to rebuild one system from backups, much less multiple systems. Of course, I am no system administrator. But truly, I think your estimate about rebuilding a network is too optimistic. Maybe you mean "kludging together something with baling twine, duct tape, and spit, until I have enough time to rebuild it properly".

    I would certainly not want my router / firewall / X term / webserver / database in one box

    Oops, I didn't mean it *that* way. Of course you want all your servers and routers on separate machines, for redundancy, the KISS principle, and security reasons. But multiple OSes running on multiple distinct and unconnected partitions, completely independent of one another yet on the same hardware--that's just ideal. One instance of Linux/FreeBSD for a router, one for Apache, one for Samba, etc., with the same security risks as having them on separate boxen.

    But for most cases, the sheer co$t of this compared to a standard cluster and frontend/backend solution must also be addressed

    Agreed, you would need to replace a lot of machines to make this a money saver, but:
    1) A lot of companies have a lot of machines. Enough to make this profitable? Would the break-even point hit at 20 servers with 200 low-end workstations? 50 servers with 1000 LEWSs? I know a few mid-range Canadian businesses that have the 20/200 setup.
    2) Pure bragging rights. Lousy reason, sure. Something that should be taken into account. You betcha.
    3) Politics. Multiply reason 2) by a factor of ten.

    Financially, I would very much like to see the math on where the break-even line is. I smell an "Ask Slashdot" question--can you?

    If anything, Tzanger, you made me think, which is more than I can say for most of the people I have talked to today. Thanks!
  • I disagree with most of your points. Some of the ideas of the author of the Linux/Mainframe article (if you had read the article) are very valid. It would be near perfection to have a single piece of hardware, properly partitioned, become your router, your DNS server, multiple independent but linked web/ftp servers, file servers, X11 servers, print servers, and then have a couple of partitions for actual user processes; all of it on a piece of hardware that is 100% hot-swappable and can have a partition rebuilt in less than a minute. And with the massive I/O of a mainframe, to boot.

    Next time, read the article. Trust me, you will be a better person for it.
  • by Kagato ( 116051 ) on Monday February 28, 2000 @08:21AM (#1240630)
    Although I've read articles in the past that have noted IBM running linux on the 370, I liked this more detailed article.

    It should be noted that a mainframe has huge amounts of I/O bandwidth. I've seen five-year-old mainframes put new AS/400s to shame. By all rights the 400 has more processing power, but the mainframe has such huge amounts of bandwidth that it can move the information around quicker. When you think about a web server you realize that the actual program isn't all that complicated. Most of the CPU time is dedicated to feeding I/O out to the TCP stack. A mainframe may not be as powerful as the AS/400 or a new Xeon, but it has a lot of smaller processors, and they aren't wasting cycles waiting for I/O to give them something to crunch.
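
    (To make that concrete, here is a minimal sketch--my own illustration, not anything from the article--of what a static web server actually spends its time on: once the request is accepted, the work is handing bytes from disk to the network, which os.sendfile() pushes down into the kernel. The filename and port are placeholders; Linux-only.)

        # Minimal static-file server sketch: almost all the "work" is I/O that the
        # kernel does on our behalf; the CPU mostly waits on disk and the network.
        import os
        import socket

        def serve_file(path="index.html", port=8080):   # placeholder file and port
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen(16)
            while True:
                conn, _ = srv.accept()
                with conn, open(path, "rb") as f:
                    size = os.fstat(f.fileno()).st_size
                    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % size)
                    # Zero-copy handoff: disk -> socket without touching user space,
                    # so per-request CPU cost is tiny compared to the bytes moved.
                    os.sendfile(conn.fileno(), f.fileno(), 0, size)

        if __name__ == "__main__":
            serve_file()

    Which is exactly why a box whose strength is channel I/O rather than raw CPU can still make a very effective web server.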
  • Also you could have multiple high speed printers. I really can't believe that you need a mainframe to increase something that is on the device end like a printer.

    And here is where you betray your ignorance of things mainframe. Mainframe devices are much faster than similar devices in the PC world. A mainframe printer connects like a network device does and utilizes huge transfer rates. Mainframes are built for I/O. I/O from tapes, to tapes, to printers. Mainframes exist for I/O.

    Beowulf clusters are built for parallel processes -- think solving intractable differential equations through simulation like weather models where the state of point A influences its neighbors.

    Billing and accounting systems don't have the same kind of dependence between data points. It's just that there is a massive number of them.
    And they need to be processed quickly. Some of the files I work with every day have 7 million records with 2K of data per record. That's 14 GB just for that dataset, and we get a new dataset every month. And that's just one file. I use about 10 different files. My company has literally tens of thousands of data files on tape cartridges in the mainframe data center. I can process *any* of them on request in a matter of minutes. I could process those 7 million records of data in a half hour if I had the machine all to myself. That's about 8 MBytes/sec of sustained throughput *with* calculations on every one of those 7 million records in that time.
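
    (Checking that arithmetic--the record count, record size, and half-hour window are the figures quoted above, nothing new:)

        # Re-running the numbers above: 7 million records at 2 KB each,
        # processed in a half hour of dedicated machine time.
        records = 7_000_000
        bytes_per_record = 2 * 1024
        seconds = 30 * 60

        total_gb = records * bytes_per_record / 1e9            # ~14.3 GB for the dataset
        mb_per_sec = records * bytes_per_record / seconds / 1e6

        print(f"{total_gb:.1f} GB in {seconds} s -> about {mb_per_sec:.0f} MB/s sustained")
        # ~14.3 GB in 1800 s -> roughly 8 MB/s, with calculations on every record.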

    Anomalous: inconsistent with or deviating from what is usual, normal, or expected
  • by myshka ( 143797 ) on Monday February 28, 2000 @07:40AM (#1240632)
    Unable to connect to the database. Please email.

    Looks like a mainframe should be standard equipment for any site mentioned on slashdot.
  • by Steve Burnap ( 155427 ) on Monday February 28, 2000 @07:37AM (#1240633)
    Wow, I thought I'd seen everything, but Unix on a mainframe! What a bizarre concept!

    Next thing you know, they'll be porting it to those old DEC boxes that all the universities have laying around.

  • by FJ ( 18034 ) on Monday February 28, 2000 @08:17AM (#1240634)
    Actually it depends on what you're doing. I can't comment on Linux specifically on S/390 but I can comment on OS/390 running on S/390.

    One of the problems that server farms can face is the issue of support. If you install 8 Intel boxes you must support 8 Intel boxes. Generally speaking, supporting one server is easier than supporting multiple smaller boxes. The problem also isn't just hitting 99.99999% availability, but making a site scalable for a large amount of traffic. If you would need a few dozen servers to handle the traffic and availability, a single S/390 platform may give you better response and reliability.

    The other issue that makes S/390 a good platform is that the hardware is extremely reliable. I read that the CPUs have an average failure rate of 1 failure every 30 years. Not too shabby, and if one does fail the entire box doesn't crash; the underlying microcode just makes that CPU unavailable and notifies IBM support. If you have a spare CPU available it will even turn the spare on so you don't lose processing power. I've also heard rumors that future releases of S/390 will allow you to dynamically turn on CPUs, so if you run out of processing power you can perform an upgrade with one command without ever taking down the box.

    Another advantage (as the article mentions) is disaster recovery situations. Mainframe DR plans have been in place for decades. Again backing up and restoring 1 system is typically much easier than multiple smaller servers. The hardware is also much more standardized than PC platforms so finding a DR site is not terribly difficult.

    Also, don't let the cost of the mainframe fool you. They are a lot cheaper than you might believe. The old style of mainframe needed plumbing for water cooling, which made installation very expensive. The newer CMOS boxes don't require external plumbing and have a footprint the size of a large filing cabinet. So size & plumbing are no longer the problem they were 15 years ago.

    I'm not saying that S/390 will make other servers obsolete, but if you have the need and the money it definitely gives you an attractive alternative. Also don't forget that some people estimate that 60-80% of the world's data still resides on these old boxes.

    As for the rationale for running a free OS on a mainframe: it would be attractive to some because the software costs can quickly add up on a mainframe. Generally speaking, the faster the mainframe, the more expensive the software becomes. A free OS could result in a huge budget savings for a company.

    Again, it won't be for everyone (heck it won't be attractive to most people), but for some this will definitely be a good solution to a difficult problem.

    Just my $0.02
  • Sure, the dozen PIII's will match the Big Iron in MIPS/FLOPS, but it would take a hundred times as many to match the sheer I/O bandwidth of those monsters. An old, low-end IBM 9x2 will handle 4,000 GB of I/O per second and love it. A high end PC will perhaps handle two, and totally thrash. Assume I'm wrong, and a PC could push 10 GB. You'd still need 400 computational nodes + 40 managerial nodes + 2 controller nodes == 442 PC's to match the performance of ONE old mainframe.

    Used 9x2, $20,000.
    442 PIII@$1800, $795,600. Which is cost effective?

    When you're doing simple processing of huge data sets, like bank account updates or IRS 1040 return validation, it isn't adding up numbers that bogs things down. It's the continual process of [retrieve x][save x][print x]. Beowulf clusters have their place, and it's not as replacements for mainframes.
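
    (Just re-running the poster's own back-of-envelope figures above--the I/O numbers are theirs as stated, not measurements:)

        # The cost comparison above, spelled out (all inputs are the poster's claims).
        mainframe_io = 4000      # I/O per second claimed for the old, low-end 9x2
        pc_io = 10               # generous per-PC figure assumed above
        compute_nodes = mainframe_io // pc_io            # 400
        total_nodes = compute_nodes + 40 + 2             # plus managerial and controller nodes

        pc_cluster_cost = total_nodes * 1800             # PIII boxes at $1,800 each
        print(total_nodes, "PCs ->", f"${pc_cluster_cost:,}")   # 442 PCs -> $795,600
        print("used 9x2  -> $20,000")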
  • by el_guapo ( 123495 ) on Monday February 28, 2000 @08:25AM (#1240636) Homepage
    A PC vs Big Iron Holy War was definitely not what I was expecting with this article. Having worked with computers in various industries, I can tell you one thing, all of this stuff has its place. I was at a large loan servicing shop (like over a million loans) and I can tell you - you cannot configure any PC hardware to do what their MID-range box did. Over a 2 year period, not ONE server bounce, all the while hundreds of users logged in constantly, running massive batch jobs every night, printing out over ONE MILLION statements at a time etc. etc. etc. I am certainly not knocking the PC, I love 'em. But I do like the positive tone of this article, IMHO Big-Iron stuff is way way misunderstood by folks who haven't any experience with them. Heck, even I scoffed at the things until I got a chance to work with them. (PS I know my website's down - gimme a break, I have a full time job, ya know?)
  • I think you are glossing over years of interesting history and computer architecture. IBM's mainframe division might have lost millions because of PCs, but not because PCs were superior computing platforms for big data.

    PCs won for the same reason that we have traffic problems on our highways--everybody wants to be a driver and doesn't want to share. This was, it seems, part of the rationale for the federally funded development of the internet (well, ARPANET): getting scientists to share computing resources bought with federal money (c.f. A History of Modern Computing, Ceruzzi 1998, p 296). This is of course the same phenomenon that drove minicomputers, which were also replaced by PCs. I'm not saying this is bad, well, not in the case of computing anyway.

    But PCs still suck at any number of computing tasks, and aren't really improving in areas that can't be mass marketed. That's why my lab bought a very expensive dual Alpha machine instead of spending that money on the 5-to-20 equivalently clocked P-IIIs it would take to match it (these numbers come from real computations). Not to mention that a farm of PCs can only handle embarrassingly parallel computations at the same speed as the Alpha, and requires more programming effort than the Alpha. The Alpha isn't even close to a mainframe, either.

    And I haven't even gotten to the bandwidth issues that spawned this thread (well, they're part of the 5-to-20 figure above, in some ways). IBM lost on mainframes because they dealt _only_ with mainframes. DEC lost with minicomputers because they too were arrogant/ignorant about PCs. And while Intel seems to acknowledge the information-appliance idea, their x86 tech will only go so far (we can hope, can't we?). But just as information appliances aren't the best choice for PC-type tasks, PCs aren't the best choice for mainframe-sized loads.

    Oversimplification is a marketing tool. It has no place in intelligent discussion, where flippant remarks are better replaced by _questions_.
  • Some anonymous coward dun said:

    Microsoft: Our OS is 100% reliable and has 1000s of applications readily available off the shelf, including the worlds #1 word processor and spreadsheet. Linux: Our os is based on 30 year old technology and has a few apps that are a bit flaky, and you need to rebuild your kernel every 5 minutes. yeah cool "advocacy" dude. Best wait until Linux is ready for the desktop before you start hyping.

    *chuckle* Methinks someone doesn't quite get the point with the mention of scalability...

    1) Big Freakin' Deal that you can run Office 97 under Win98. For the applications we're talking about here (mainframe stuff--numbercrunching and storing) "pretty" stuff like Office 97 or GUIs in general are neither helpful nor necessary (in fact, they'd be a detriment to the Job that the Big Iron is doing).

    1a) I would far from call any Microsoft product "reliable". Yes, this includes Office, Win95/98/NT/2000/3.1/CE/Me/[insert latest marketing spin from Micro$oft here], and IE. Yes, I know of what I speak here--I've had to do more than one repair job when supposedly well-configured Microsoft apps and OS's suddenly developed severe cases of incontinence. :P Compared with some of the stuff I have to put up with re Microsoft stuff (hint: OS's are not supposed to corrupt their essential files over time, nor are they supposed to lock when running programs [necessitating a hard reboot and scandisk], nor are they supposed to crap themselves after 49 days of uptime because even Microsoft acknowledges that neither Win95 nor WinNT are stable enough to stay up longer than that, thus a 49-day reboot is coded in), even beta builds of Linux are marvels of stability :)

    1b) Please call me when a version of Windows is widely available for Really Big Iron, such as is used for databases for insurance companies and the US Census Bureau. ;) (AFAIK, they don't exist--not even WinNT ports. The largest iron WinNT was ever ported to, BTW, was Sun and Alpha hardware--and those two ports are supposedly being discontinued. Most of those systems don't use *nixes, either--they use stuff you've probably never heard of like MVS, VM/CMS, VM/ESA, etc.)

    2) The point wasn't "who has more apps" or "who is prettier". It was "who can run the base OS on more stuff"--and there Linux beats Microsoft, hands down. (Itsys are teeny even compared to WinNT boxen, and with the recent ports to run as virtual machines under mainframes--not to mention the Linux/VAX project, the Linux/3090 project, etc.--Linux has probably just surpassed NetBSD as the OS that runs on the maximum number of architectures.) It's rather a different cock-fight than the usual comparisons, mind.

    2a) I'm not sure that the virtual-machine versions of Linux are quite ready for prime time (at least for what mainframes tend to be used for), but at least the option IS available should one want to run Linux as a shell (as opposed to a traditional mainframe virtual-machine OS like VM/ESA or MVS). Compared to the OS's that do tend to be used with mainframes, Linux is a fair sight more user-friendly; more people nowadays are familiar with *nixes in general (if from nothing else but student email accounts or computer science courses) than with most mainframe OS's. Also--and this may be a shock to you--using Linux as a virtual engine actually would make it easier for users to set up stuff like Internet accounts--including PPP services for folks who want to use Windows from home. ;)

    (As a minor data point to add to that--the University of Louisville recently retired its old 3090 (which had been formerly used in EMCS and IS courses, then [when email first started becoming widely available and the EMCS and IS departments had largely gone to either PCs, an RS/9000, or a combination of SGI, HP, and DEC Alpha boxen] was used as the primary Internet account server for the Arts and Sciences school) in exchange for a DEC Alpha box. This was done for many reasons, partly because PPP is easier to set up on the Alphas and partly because IBM no longer officially supports VM/CMS on the 3090s [which was a Bad Thing, especially since they also no longer accepted security patches for Internet utils and daemons; at the time, there were two rather serious security bugs for IBM VM SMTP that were being widely abused, and I spent much of a summer giving the two unofficial patches to universities who'd been relay-raped by mailbombers :P]. If a virtual-machine version of Linux had been available for the 3090, it's possible they could have kept it in service a while longer instead of selling the thing off for scrap metal. :P)

    3) You talk of things being "ready for the desktop"--most mainframes aren't because they have no real need to be. Realistically, the most useful setup for a Linux VM on a mainframe would be either for Internet-related network services (sorry, but Linux does have better support there these days--at least sendmail and qmail have protection against relay-raping and are regularly fixed to close any security holes found) or as a shell alternative for folks who are already used to working on *nixes at a shell prompt (instead of having to learn the command set for Yet Another OS). It's fairly obvious that you've never done much work with a mainframe--otherwise, you'd realise that there is no freaking desktop...these are Big Machines, things that fill up entire rooms complete with false floors to hide the miles of cable and Halon extinguisher systems. You aren't going to sit down right at a terminal on the thing, and you probably aren't even going to use an X-term with these beasts (unless the Linux VM running has it set up to do so); if you access these things directly at all, instead of sending stuff back and forth across a network with the mainframe being basically a virtual disk, you're going to do it the old-fashioned, CLI, type-in-the-commands-on-a-TN3270 way.

    Needless to say, unless and until some kind of Windows port makes it to such Big Iron, whether or not it's "ready for the desktop" is completely and utterly moot! Unless a Linux VM is installed and set up to use X-terms, you aren't going to get a pretty interface--the closest the OS the VM is running and your Windows box are going to get is with your Windows box running a terminal emulator like TeraTerm or VT3270. It's going to be done by text, the way Big Iron has always done it since we got away from programming boxen by switching plugs and relays and valves (vacuum tubes for us Yanks) around and went to punchcards and old Teletype terminals instead, before the nutty folks down at Xerox PARC came up with the idea of GUIs in the first place.

    (And before you ask--yes, I know what I speak of here, too. I was at U of L back in the days when the 3090 was actively being used to teach Fortran, and also when it was used as the Arts and Sciences Internet server--U of L actually had set up a mess of old VT100 terminals because, other than through a terminal program, that was the only way the students could read their mail! Graphical interfaces for VM/CMS and most other mainframe OS's that run in virtual machines plain don't exist; Internet apps that most folks take for granted (un-relay-rapable mail servers, such things as even text-based WWW clients, etc.) had to be found or just weren't available, and tended to be years behind their *nix and/or Windoze equivalents. Needless to say, life got easier for the A&S students when they retired the old 3090 and got the nice Alpha server running OSF/1 ;) I wasn't QUITE there in the days of punchcards, but apparently they were still being used as late as the early 80's there--that's how long the 3090 was around--and the things were never really designed for anything much besides big databases and number-crunching and maybe BITNET connections. Most of the OS's for Big Iron actually date back to the days before CRTs became widely available, especially the Big Iron using virtual machine OS's. In fact, the ONLY Big Iron I know of at ALL that uses anything close to a GUI are a) the Alpha and Sun ports of Windows NT and b) a terminal and configuration program for OS/2 designed to act as a console for booting AS/400 boxen running OS/400 (in other words, the OS/2 program largely replaces the blinkenlights). There's no need for being "desktop pretty" if all you're doing with the thing is using it as a big-arse server (which most mainframes are)--most of the time you might not even be doing direct interaction with it anyway, and if you have to a CLI works wonderfully. ;)

  • by DLG ( 14172 ) on Monday February 28, 2000 @11:58AM (#1240639)
    This discussion keeps referring to 90% and 99% uptime as giving credit to PC hardware. I have, in 5 years, with 4 Linux servers, seen 3 hd crashes, 1 network card, 2 modems, and 1 motherboard go down. Each one of these was what I would consider a major crash (You say modem crashing is minor but these were internal and at the time primary IP devices).

    Given that uptimes of several hundred days, with downtime being planned, have been my experience, I would say a new, working PC's likelihood of failure is closer to 0.1%, with that likelihood accumulating over time being the more accurate model (i.e., a working new PC is unlikely to fail, but at 5 years the likelihood of a hard drive or power supply failure is probably more like 1%).
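
    (For what it's worth, the fleet history at the top of this comment works out to the following per-server-year rate--no new data here, and four servers is obviously far too small a sample to generalize from:)

        # Component failures per server-year from the anecdote above.
        failures = 3 + 1 + 2 + 1        # hard drives + NIC + modems + motherboard
        server_years = 4 * 5            # 4 Linux servers over 5 years
        print(f"{failures} failures / {server_years} server-years = "
              f"{failures / server_years:.2f} per server-year")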

    That being said, I do not think that the issue here is the cost of replacing parts or the costs involved in running high availability. Just adding hot-swappable redundant power and fans, hot-swappable RAID, good power conditioning, and decent backup plus a hot spare gives you acceptable performance. One of the reasons we consider the Beowulf cluster economical is the notion that to increase performance you just buy commodity hardware and replace old/worn machines. The process of doing so is probably not much different from hot-swapping parts on a mainframe.

    Obviously the issue is the I/O involved with typical mainframe applications. Most PC users simply have no sense of how much bandwidth is really necessary for many real-life applications. Someone mentioned 7-million-record datasets at 2K per record; considering the demographic data regularly run at credit card companies to find NEW customers (where they are using 100-million-name lists), that person is actually a mid-range user. Having tested a standard Pentium-class PC with SQL, I find that anything over a million records makes queries very slow (30 seconds to 2 minutes). Certainly I could throw more power at it, but larger databases create geometrically growing demands as the order of magnitude changes.
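
    (A small, hedged illustration of that scaling effect--modern SQLite on modern hardware, so the absolute times mean nothing, but the shape is the point: a query that must scan every row slows down in proportion to the table, while the same lookup through an index barely notices the row count:)

        # Point lookup against a million-row table, with and without an index.
        import sqlite3, time

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE recs (id INTEGER, payload TEXT)")
        con.executemany("INSERT INTO recs VALUES (?, ?)",
                        ((i, "x" * 100) for i in range(1_000_000)))
        con.commit()

        def timed(sql, *args):
            t0 = time.perf_counter()
            con.execute(sql, args).fetchall()
            return time.perf_counter() - t0

        print("full table scan:", timed("SELECT * FROM recs WHERE id = ?", 999_999))
        con.execute("CREATE INDEX idx_recs_id ON recs (id)")
        print("indexed lookup :", timed("SELECT * FROM recs WHERE id = ?", 999_999))

    Scale the unindexed scan up another order of magnitude or two on period hardware and you are right back at those multi-minute queries.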

    Having worked for the City of New York in the late 80's, I installed one of the first PC-based LANs and database solutions there, and I can tell you the benefit of this versus the mainframe and its terminals had nothing to do with performance. It had to do with the politics of wanting to add fields, add reports, and reprioritize work. Centralizing computing power meant that changes to a database might require 6-10 months simply to add a field. This led to bastardization of fields, and what can only be referred to as DENORMALIZATION of data.
    Statistical analysis was even more cumbersome. A question being asked by a new commisioner would require new statistical analysis, and the response would be needed quickly. But the request for a new report would require a bureaucratic nightmare.
    Because of this, many tasks were being done by hand by large staffs (8 people dedicated to a single report).

    When we received PCs, I was hired to handle the databases (relatively large: 10-100 thousand primary keys), and the ability to create a new statistical model to view data in 1 day instead of half a year changed the way things were done. I saw monthly reports that had taken 2 weeks and 8 people turn into 30-minute processing jobs that required 1.

    The growth of the PC in departments was not because of performance. It was because MIS departments were notoriously self-empowered and there was no way for a small department to influence their priorities.

    People seem to be under the perception that the mainframe was somehow unsuitable, but it was always clear to us that the issue was management of the resources and a need by computer savvy managers to have their own programmers with their own priorities.

    My father, a mainframe COBOL programmer/analyst for 30 years was pleased to see Linux sharing some of the same concepts as he was familiar with. However we both are well aware that the sort of performance he was used to programming (batches of 100million records overnight) is just not available nor are the management features. Hell, you can't even RUN COBOL in Linux.

    In any case, I really think a major reason there is so much anti-mainframe perspective is that mainframes have so often been considered expensive and not for the uninitiated's touch by the generation of programmers who were raised with the PC. They haven't used them, haven't considered what a COMPUTER is on a fundamental level, and don't have an appreciation for the kind of programming that was done 30-40 years ago. My father used to brag about being able to write large-scale applications in 8K of RAM but with unlimited storage, on a computer that fit in a room--but barely.

    Of course mainframes aren't dead, and of course IBM made money even when it lost the lead in PCs. IBM gives away mainframes. It doesn't need to earn a cent from hardware. It has the largest patent library in the world, and sells techniques, not technology. Its supercomputers, mainframes, minis, PCs and palmtops are part of a SERVICE-oriented drive to provide solutions to problems. The reason Linux is important to them is that they envision a POSIX-compliant, standards-based OS that they can run from top to bottom with no additional learning curve at each stage. No doubt THEY would not suggest a mainframe for solutions where a Beowulf cluster or a multiple high-availability cluster would be superior in cost. They would make their money supporting their customer with whatever was the most effective solution for the price. If we consider the world through the PC-vs-Mainframe religious-war view, then we lose the opportunity to evaluate just how much the PC has evolved into a personal mainframe, and just what we can accomplish in the next 10 years with Linux in the commodity market, as well as what Linux can bring to the mainframe market.

    I think the article brought up several excellent examples of how a mainframe webserver farm might be advantageous. Colocation is a bastard solution intended to move the machine closer to the bandwidth, since we can't seem to provide fibre effectively to the customer's location. As media requirements change, colocation will stop being valuable (renting shelf space at someone else's facility and losing physical access? Not acceptable).
    Anyone with usage requirements that need a 390 is not going to care about the cost of multiple T3's. The idea that a new client could be granted a FULL operating environment that could be backed up as an image, and could be given increased performance, storage, and network facilities based on a level-of-service agreement, all sounds exciting.

    Where we now have burstable T1's for growing companies, which let you pay for the usage you actually see rather than spend based on your MAXIMUM usage, we would see every facility adapt to usage. Suddenly see a 10-fold increase in server usage? Instead of having to run around to solve a problem you didn't have a day ago, you could have a service agreement that would automatically up your billing rate as your performance was increased. The benefit would not just be the ability to support a website with 10 million hits per day, but also to support 100,000 sites that each MIGHT grow into a 10-million-hit-per-day site, without requiring capital costs to address that expansion, and reducing the cost of failure--not in terms of system failure, but of BUSINESS failure.

    Isn't that one of the benefits of Linux? To allow a company to take a risk without the costs inherent in the high end solutions? This article merely points out that the same benefit exists at every level of hardware.

    Sorry for the rant... Well not really.:)

    DLG
  • by Score Whore ( 32328 ) on Monday February 28, 2000 @07:48AM (#1240640)
    The whole point of modern mainframes can be summed up in one word: VOLUME. Regardless of the ground that has been covered by intel and co., the different OS developers, and the various efforts to develop high capacity I/O interfaces, your standard PC platform just doesn't have the ability to handle the sheer volume of data that your average mainframe deals with. PC's have a long way to go before they can even begin to encroach on the mainframe realm of computing.
  • by Ungrounded Lightning ( 62228 ) on Monday February 28, 2000 @12:39PM (#1240641) Journal
    The whole point of modern mainframes can be summed up in one word: VOLUME.

    Sorry, but that's the wrong word. Volume is necessary, but you can get that with either big machines or clusters of little ones.

    The word you want is RELIABILITY.

    And by reliability I don't mean just uptime (although that's a piece of it). I mean the machine does not drop bits. Period. Even though the PIECES of it are dropping bits all over the place. (When you have square feet of silicon intercepting cosmic-ray secondaries and rattled by thermal vibration, it's unavoidable.)

    I know of at least one mainframe multi-CPU unix clone (UTS) which has sites with uptimes measured in years. In fact the last time I heard there were software patches that had been enqueued to be loaded the next time it went down, which have been waiting for years as well.

    The CPUs are automatically switched out when they fail and manually switched back in once they're fixed. The show goes on. And the processes that were running on the CPU as it failed still do their computation correctly--because the broken bits were caught and fixed as the CPU/memory/whatever hiccupped.

    Many of the people who are putting together clusters of machines of lower reliability - including those in the management of at least one mainframe company - haven't grokked that concept.

    The more computations you do, the more likely you are to be hit with an error. If your process is mission critical you can use hardware that catches AND FIXES the error, or you can try to write software that detects and recovers.

    The software solution is the MUCH harder problem. The hardware fix - which is the mainframe solution - is expensive. But when you're dealing with millions of bucks per hour of downtime, or perhaps per dropped bit (as phone companies, brokerages, banks, and the like are), you can afford it. Mainframes (less peripherals), redundancy and all, have been under a megabuck a pop for some years.
  • by chadmulligan ( 87873 ) on Monday February 28, 2000 @08:01AM (#1240642)
    I'm sure few people have ever worked on a mainframe for any length of time, and many may even have a false concept of a mainframe as a huge, obsolete piece of equipment which obviously should be substituted by a Beowulf cluster or a multiple-CPU desktop (or near-desktop).

    Well, it just isn't so. Granted that clusters and desktops may even equal a mainframe's raw processing power, but a mainframe's real strength lies in its massive I/O capacity... you can connect huge arrays (even RAID arrays) of high-speed disk drives and have them work near rated transfer speed. No desktop comes even close to that.

    Over 25 years ago I worked on a Burroughs B6700 mainframe. We had full source (at zero cost) to the OS, compilers, utilities and so forth, and had a great time mucking around with these, fine-tuning stuff, fixing bugs and even implementing new Algol constructs. For some weird reason Burroughs rarely used our fixes, though :-). BTW we also had full hardware schematics - not that these were of much use...

  • by Steve Burnap ( 155427 ) on Monday February 28, 2000 @08:02AM (#1240643)
    In short, Linux + Mainframe + Old technology = Marketing death.

    Wrongo! Mainframe = "Serious Business Machine". Nothing will get an MIS director's attention like saying "See, Linux can run both on your PC and on that million dollar IBM mainframe that runs your core business. Windows can't."

    Remember, what caused the PC to catch on was not the "home user", but the business world. The PC caught on because MIS directors saw it as a "Serious Business Machine" in contrast to competitors that were "game machines". To these guys, Microsoft is the "johnny-come-lately" that they are not all that comfortable with. For them, IBM is still king. Saying that Linux will run on their big iron while Windows won't says to them that Linux is a serious operating system while Windows is a toy.

"May your future be limited only by your dreams." -- Christa McAuliffe
