IBM Runs 41,000 Copies of Linux on Mainframe
An anonymous reader wrote in to send us a story from Bloomberg about IBM making mainframes act like hundreds of servers. The best part is the bit at the end where they mention testing it by running 41,000 copies of Linux.
Re:One problem (Score:1)
Re:Why bother? (Score:1)
Now picture all of those individual boxes being a single mainframe. Each customer would still have the security of having their own box, installing their custom software in a custom configuration, but all of the boxes would be virtual. From a customer's perspective, things would still be the same; remote login, no problems with other users overrunning their stuff (it's their individual machine), etc. The ISP only has to maintain a single machine; the mainframe.
Also, need to install another box? Just create another VM. Of course this may only scale to 41,000 boxes, but after that you just buy another mainframe and a few mainframes take much less space, power, etc, than thousands of PCesque boxes.
Jason
IBM's real contribution to Linux (Score:1)
Linux on IBM hardware is great but the real value is the contributions IBM's OS architects are making to Linux.
IBM's AIX is the best Unix. Read for yourself.
http://www.computerworld.com/home/print.nsf/(fr
http://www.dhbrown.com/dhbrown/opsysscorecard.c
More importantly IBM is taking their expertise with scalability, reliability, maintainability and adding major functionality to Linux. Such as their addition of a Journal Filing System.
In a couple years IBM will have thousands of developers servicing Linux, developing code, and architecting critical missing pieces that Linux needs to mature.
A couple mainframes running Linux is just a nice story compared to the real contribution IBM is making.
Moderation... (Score:1)
The post that was the parent's parent is rated -1:
[This is why solaris/irix are dead. (Score:-1, Flamebait)]
Well, how the heck did you see that post? And for that matter if you find a -1 discussion interesting/annoying enough to comment on, how can you trust the moderation at slashdot enough to surf at 2?
Re:Why does multiple servers matter? (Score:1)
REUNITE GONDWANALAND! :)
Think of it! (Score:1)
American Big Business runs on mainframes (Score:1)
Mainframes are far from dead. Most big business relies on them.
*sigh* If only we could kill COBOL...
*ABEND*
Whoa there (Score:1)
Maybe on the Intel front... But then Sun never really put much stock in x86 Solaris. I wouldn't use x86 Solaris unless it was the last choice - I'd run Linux for that. Get a cheap Sparc and run Solaris on that.
a good monopoly (Score:1)
one would think that IBM learned their lesson on what it means to be a "poorly behaved" monopoly.
Re:Big Question (Score:1)
Re:You gotta admit... it's impressive. (Score:1)
You see, if I wanted an application to do that, instead of running 64 copies of an OS talking via MPI or the such, I'd run one that could divvy out the work to 64 to 128 threads, running on one machine.
Granted, though, more distributed applications are available for Linux and the such than for OS/390. That alone could make it worth it. It wouldn't be a 'performance' boost over running one that can thread properly in an OS/390 environment.
Re:A new form of DoS... (Score:1)
Re:isnt this old news? (Score:1)
Re:One problem (Score:1)
Of *COURSE* they'd have problems with those extreme situations, but the point is, *anything* would.
At least IBM will *LOAN* you a new machine while you rebuild.
Re:Not much different.. (Score:1)
Re:Linux wasn't the first choice (Score:1)
Re:One problem (Score:1)
Re:Hmmm render farm.... (Score:1)
From what I have read about the virtual machines, you can assign an amount of memory and a number of processors to each VM.
If there is a good multi-threaded Linux renderer, and it can support 4+ procs well, then the OS overhead is lower than it is for a single-proc machine. The more procs per job, the faster the render. The biggest issue is where the falloff is for multiple procs.
Assuming that one of the virtual machines is a file server, the data can be sent to the render processes at a greater speed than over a fiber connection. Every little speedup makes a difference.
Also, how the render jobs are divided up can make a big difference in what performance gains can be made. The way we divide shots on our render farm is good for quicker shot speed and lower frame speed. This works well for several machines with single- or dual-proc configs.
With a mainframe render farm, and 10+ procs per VM, the frame speed can be increased as well. Memory requirements lessen because of shared memory, and turnaround time can stay about the same.
The cost for similar horsepower from an alpha farm or an SGI Origin farm would be much higher, I assume.
slashdaughters and mainframes (Score:2)
You gotta admit... it's impressive. (Score:2)
I wouldn't want to run Linux on anything less than one OS instance per processor. Such a system would make a fine Beowulf cluster, since the internal bandwidth of the mainframe is higher than any network wiring available today. It would be truly awesome. I know there is overhead in running multiple kernels and supporting drivers and such, but it should be faster than discrete machines.
Anybody have any idea why it wouldn't work?
Re:One problem (Score:2)
From:
10 PRINT "THIS IS MY SIG"
20 GOTO 10
To:
10 PRINT "THIS IS MY SIG": GOTO 10
Or this variation:
10 PRINT "THIS IS MY SIG... ";:GOTO 10
Or this:
10 SIG$="THIS IS MY SIG.":FOR X = 1 TO LEN(SIG$):PRINT LEFT$(SIG$, X):NEXT X: GOTO 10
Oh... to stay on topic... Linux on IBM Mainframes rock!
However (Score:2)
Isn't there one of the mainframe gurus from Schwab on here? Those guys run some big iron in the back room...
Re:One problem (Score:2)
Been done for years. In the late '70s a company called Network Systems released the first router, for the purpose of letting mainframe devices sit in another location. CNT is their big competitor, which does a similar thing.
Of course the original routers didn't do what you want (and didn't do IP until much later), but they were the direct ancestors of what companies use now when they want disk or tape drives located in a different state or country.
Interesting story: in 1985 this company sent a VP to California to examine and perhaps buy a small company that was just getting started then. After doing due diligence they decided this company was going nowhere. The small company that wasn't going anywhere was Cisco. (Actually, most of those close to Network Systems today conclude that the only people who would have won from that deal going through would have been 3Com, but that is a different story.)
1.5 Billion pages a day (Score:2)
WOW that is amazing!
The Cure of the ills of Democracy is more Democracy.
Re:One problem (Score:2)
The only way to get a bomb box on a 390 is to use Semtex (OK, maybe not, but you get the idea)
I'm kidding about the semtex folks!
The Cure of the ills of Democracy is more Democracy.
Re:Why bother? (Score:2)
And it's not just the web server. Look at how many applications you can run under Linux. Now look at how many run under OS/390. Go ahead, I dare you to look for something for OS/390 without paying big moola for it..
Re:More linux (Score:2)
Re:However (Score:2)
I do know what you're talking about though..
Re:Why bother? (Score:2)
Sure, there are many out there, but if given a choice between two things, one with applications already available, and one without, they'd choose the OS with availability..
Re:One problem (Score:2)
Set them up in 3 corners of the globe.. Throw in high-speed lines, allowing the machines to take over when another fails..
And best of all, I bet even in Alaska you can get an IBM specialist at your door within an hour.
Anyway, sorry for the wiseass remark, but it was just *TOO* hard to resist..
Hmmm render farm.... (Score:2)
There were terabytes of RAM. You would want a couple hundred megs of RAM for each render job.
One thing that would be better is to assign 4 procs to a machine, thus reducing the total amount of memory to 10,000 machines X 250-500 MB.
I am sure that we could get that much memory for the system...
I don't think that most render engines would port well to 100 procs... but... if one was reprogrammed with that in mind... it might do really well.
Too bad I don't have the time or the resources.
Re:Hmmm render farm.... (Score:2)
For a render farm you'd be far better off with a bunch of SGI Origin 2Ks, or a room full of Alphas. Interestingly, in these types of applications, physical support costs for floor space, air conditioning, power, etc., can begin to be quite significant, so you have to start thinking about work/watt or work/m^3.
--
Cat herding (Score:2)
Scene:
Images of a cat happily playing around in its own happy world.
Zoom out of the cat's brain to see a Matrix-like techno-hive with thousands of cats on feeder tubes, with electricity sparking around.
Voice Over:
Cat herding. It's what we do.
--
It's got to be said... (Score:2)
--
I think the link broke. (Score:2)
Chas - The one, the only.
THANK GOD!!!
Many users, all root (Score:2)
Just think - any script kiddie who gets root is unable to see that he hasn't got the machine to himself, powerless to hide from admins who can watch his every move from the real OS, and unable to do any damage that can't be rolled back in an eyeblink. Crashes and "reboots" recover in seconds. And, when the customer is done hiring their server, it just drops into the bit bucket and the resources are reallocated to other servers.
Re:This is why solaris/irix are dead. (Score:2)
IBM's mainframe operating systems will stay right there on the mainframes.
isnt this old news? (Score:2)
----
Obligatory conspiracy theory (Score:2)
Slightly off topic, but it's been on my mind.
New process model (Score:2)
(this joke is kind of obvious, so forgive me if someone else has already posted it)
Re:Crashes (Score:2)
force radius
autolog radius
and you would get the same effect as a power cycle on a standalone machine, but without the time wasted on hardware and memory self-tests, SCSI bus initialization, etc. You go straight from the autolog command to kernel initialization, so you could be back up and running in a few seconds, depending on how long it takes for Linux to start up and your RADIUS server to initialize.
Re:Information on VM and Linux (Score:2)
I had better luck playing with the vector instructions. I had a very fast fractal generator
Re:-1 Redundant (Score:2)
If one of those virtual machines were to attempt to write to a memory page, CP would simply fork off a private copy of that page, and continue on.
I believe that Linux (and most unixes) do the same thing -- one read-only copy for each currently running executable, shared between all processes running that executable.
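A user-space cousin of that page-forking behaviour is visible from an ordinary fork(). This is just an illustrative sketch on a Unix-like host, not anything CP-specific:

```python
import os

# After fork(), parent and child share memory pages until one side
# writes; the kernel then gives the writer a private copy, so the
# parent's view below proves the child's write stayed private.
data = bytearray(b"shared page contents")

pid = os.fork()
if pid == 0:
    # Child: this write triggers a private copy-on-write page.
    data[0:6] = b"child!"
    os._exit(0)

os.waitpid(pid, 0)
parent_view = data.decode()
print(parent_view)  # still "shared page contents" in the parent
```

Same idea as CP's per-VM private pages, just one process level down.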
Re:beowolf (Score:2)
If you create multiple Linux images using the zero-latency networking capabilities that VM gives you, you can split up your application until there are a small enough number of tasks per Linux image to run smoothly.
Another good reason is, as AC says, if some users need root access for some reason. You can set up a system as a virtual machine, give them root access to that image, and nothing they can do will affect any other virtual machine in the system. Great for testing new Linux kernels. In fact, VM was designed specifically for this purpose -- except that it was used to debug new OS/390 (MVS) kernels instead.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
different mainframe-like architectures (Score:2)
On first boot the OS goes through and checks each chip/component piece by piece for reliability. If a part is unreliable, it tries to assess exactly how and where it is unreliable and routes around the problems. The first boot-and-diagnose process takes about a week!
The end result is a very reliable large computer (64 CPUs, IIRC) with awesome failover capabilities (since dealing with broken hardware is integral to the whole system). I don't remember the figures, but it was something like the final running computer might have 30% or 40% broken parts.
Oh, and don't forget the bit about 'commodity parts' - broken cheap IBM-clone hardware.
I'm sorry I can't give you a link; I'm not at home and my net connection is too slow/unreliable to search for it. However, the project is funded at least in part by HP and it is hosted at a university (Stanford? Cornell?).
They do have a working prototype, though which definition of "working" they are using... I'm in no position to judge. Cool pictures though. Cool project too.
cheers,
-matt
Beowulf? (Score:2)
--
Wow. Old news. (Score:2)
The article was something to do with 'notes on running Linux on a mainframe'.
They made a good point, though.
One IBM mainframe, and you could present thousands of 'virtual' Linux boxen to your web clients.... virtual machines in the extreme.
Re:One problem (Score:2)
-B
Re:Supercomputing (Score:2)
Mainframe CPUs are pretty impressive, but a cluster of dual Celerons or OCed P3 FCs will blow them out of the water, if all you want is raw number crunching. If you're rendering 3D frames for a movie, which is very parallel (each frame, or portion of one, can be done by a separate CPU with very little overhead), then PCs are your best bet, by *huge* margins.
But if you want to do database work, with huge databases (think: the phone company, recording long-distance calls as they happen, for 15 million customers), then you need a mainframe, something with internal bandwidth so high as to make the 1.6GB/s of a PC look like a serial port.
But even in the huge-database model, you could still be using PCs to ease the load, by passing transactions through a cluster of PCs which would do the rate lookups, or something, letting the mainframe deal with just the one database.
Mainframes aren't dead, but they aren't the ultimate solution either; when used properly, they are worth every penny.
A new form of DoS... (Score:2)
---
Re:A new form of DoS... (Score:2)
---
Re:Why bother? (Score:2)
There seems to be an implicit assumption here about the homogeneous nature of large boxen. Specifically, that all machines with >1 processor work in fundamentally the same way.
This turns out not to be the case. The "poor man's multiprocessing" that most young'uns are familiar with - Symmetric MultiProcessing (SMP) - has as its key feature a single system image scheduling tasks across multiple processors. Performance characteristics can be summarised with the following how-to-fix-it rule of thumb:
Contention for main memory is just about always the problem with SMP systems.
However, those wise old sages in the Big Iron world were never going to be satisfied with this approach. There are any number of ways of putting >1 processor in a machine; SMP is merely the "cheapest" (and possibly the easiest too). Specifically, S/390 systems tend to use clustering techniques, which effectively involve n independent machines sharing hardware resources - such as network connections, memory & disk. These are coordinated by a single Hypervisor "master" image (usually VM) which is capable of spawning any number of (potentially different) "slave" operating system images - including, of course, itself. Note also that for any given machine there is absolutely no guarantee that (number of OS images concurrently active) = (number of processors in machine); usually the "=" is replaced by a ">>" sign (hence the 41,000 Linux images metric!).
Since the key operating characteristic of this approach to multiprocessing is many heterogeneous systems performing different tasks, it's not as simple to identify the performance bottlenecks :-). However, canny readers will note that since IBM mainframe hardware development has spent the last 30 years focusing on I/O, and consequently throughput, rather than getting into arms races over CPU MHz, fundamentally the solution to performance problems remains the same. High I/O rates (and not especially superbly quick CPUs), coupled with relatively cheap OS image creation, change the approach to dealing with single-task performance problems - whereas a *IX or NT OS is limited to spawning another process (and hoping it'll be able to exploit any spare SMP processors lying around without competing for precious I/O resource), S/390 systems can spawn another process (which may make sense if the system is configured to allow OS images to spread across multiple processors), or spawn an entire new OS image and *guarantee* no I/O contention (OK, OK - vast oversimplification). Once a system consists of >8ish processors, this tends to prove overwhelmingly more effective for achieving whole-system throughput improvements, compared to an SMP arrangement (which would at this point be spending a huge proportion of its time contending for I/O resource or waiting for the OS image to resolve IPC and memory contention issues).
It's true that the most effective way of doing SMP multiprocessing on Intel hardware is to use NT (for the moment...). However, don't make the mistake of generalising that rule of thumb outside its problem domain: Intel-based SMP multiprocessing. It does *not* extend to the wider class of computing solutions based around multiprocessing.
Here Endeth The Lesson.
PS: Crays, Connection Machines and Transputer systems operate in other, fundamentally different ways too...
Just hype... (Score:2)
Re:American Big Business runs on mainframes (Score:2)
Re:isnt this old news? (Score:2)
Re:Why bother? (Score:2)
NT, as its code currently stands, will never be able to be ported to the mainframe. Microsoft has tried other platforms but seems to have failed, numerous times. Linux on the other hand already supports more platforms than any other OS. Porting to the mainframe was just the next logical step in the evolution of Linux.
By the way, for every user you would have to have a separate instance of NT, because NT is not multi-user! Taking that into account, NT would take more resources than Linux!
Correct Link to Article (Score:2)
Pulling the plug on a mainframe. (Score:2)
They have redundant power feeds, too. And not just the whole machine - every box in the room has 'em, and each feed powers a separate set of power supplies in the box. (Some devices get their power from their controllers. And these are generally driven by multiple controllers...)
If they did it right, the power cords go to opposite sides of the room, thence by different paths to different feed points, where the building gets power from separate feeds that came in from different parts of the grid (which was a consideration in choosing the site for the computer center: at a grid boundary, or close enough that you can pay to string a line from a different section).
There's at least one UPS, of course. On ONE of those two feeds. (A UPS, on average, creates one extra power failure in its first year of operation. B-) )
And I'd like to see the janitorial staff try to plug into the connectors that feed the mainframe. They aren't your typical duplex outlet.
The point of concurrent maintenance, of course, is that ANYTHING in the box can fail, and be swapped out and replaced, without stopping the processes.
They might not get as much CPU time or disk response speed as usual while the system is in "degraded" mode due to failed parts waiting for or undergoing replacement. But they run continuously for years - and are shooting for forever.
I hear they once moved a Tandem across town by putting in a mainframe's frame and the comm lines, then gradually installing boards in the new location and unplugging them from the old, until the whole machine was at the new site. Still running.
Try THAT with your PC. B-)
Re:Old, vunerable mainframes (Score:2)
Beowulf for hack value, or tweak the schedulers (Score:2)
JMS made some good points, but the basic advantages he suggests (better scheduler control, better memory allocation) really can be done by tweaking the process scheduler and memory allocation functions in the operating system, rather than massively cloning kernels. If there are limitations in the hardware (e.g. how many bits or frames of translation table space, etc.), this doesn't really get around it, but if you can already run more than 1 copy over VM, it may be easier than hacking everything that touches memory allocation. It can depend a lot on the sparseness of the virtual memory space you're trying to simulate.
While it may not do much for user space work, it would be a fun place to test kernels. You save the room-full of 1U rack-mounted boxes, and instead have lots of virtual machines you can blow away when your kernel hack fails, and it lets you test lots of different parameter combinations in parallel.
Nice kernel testing environment (Score:2)
More linux advertisement? (Score:2)
Re:Why does multiple servers matter? (Score:2)
The physical space required to house 41k servers is astronomical, and once you add power consumption into the equation, there really is no comparison. Heck, even if you bought two mainframes (one for redundancy) the cost benefit is still phenomenal.
I guess the point I'm trying to get at is that even though the mainframe is expensive, it could easily pay for itself in less than a year in the right webhosting environment.
Sol
(who hopes he was somewhat clear)
IBM Servers sold by the pound (Score:2)
Some "obsolete" but workable IBM AS/400 units are sold very cheaply. I know of at least one enterprising geek who filled a basement with AS/400s that he bought at a price negotiated by the pound. They all run; what he does with them I don't know. I'd hate to see his electric bill.
(IBM Servers sold by the pound =anagram>'Short-lived!' presumed by snob.
So dry: rubbish developments.)
[almost never get any +1 when I include an anagram, I wonder why...
Re:Mainframe Linux (Score:2)
-1 Redundant (Score:2)
Last month's story [slashdot.org] had many more details.
One thing neither of them talks about: VM/370 allows shared code between "machines". If a chunk of code can be ROMable, then you can load a single read-only copy of the kernel, or EMACS, and everyone who uses it gets the same copy, executing out of the same chunk of RAM. I don't know if this is at all relevant in Linux. Is the executable code typically kept (widely) separate from the data?
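To the closing question: on Linux, yes - executable code ("text") is mapped read-only-plus-execute and shared between processes, separately from writable data. A quick Linux-specific peek (this sketch assumes /proc is available):

```python
# Each line of /proc/self/maps is: address perms offset dev inode path.
# Code mappings carry r-x permissions (read + execute, no write), so
# the kernel can let every process execute the same physical copy;
# data mappings are rw- and must be private to each process.
with open("/proc/self/maps") as f:
    perms = [line.split()[1] for line in f]

code_mappings = [p for p in perms if p.startswith("r-x")]
data_mappings = [p for p in perms if p.startswith("rw-")]
print(len(code_mappings) > 0, len(data_mappings) > 0)  # True True
```

So the VM/370 trick has a direct analogue: one read-only text image per executable, shared across every process running it.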
Re:Telco Ties with IBM (Score:2)
And have to become the next Microsoft? We've already been through the DOJ grinder once, and both employees who remember it don't want to go back.
As I have explained before, IBM believes in being a player in every market and a monopolizer of none. Our strategy is to get dominance in any market we play in, if possible. If we don't have dominance, develop and/or produce cutting edge components like GPS chips, hard drives, Transmeta chips, etc. and make a few cents off of every dollar our competitor makes. If you use someone else's software or hardware, our services consultants will still fly out and tell you how to make the best of it and fix it when it breaks (like Win 2K). In short, IBM stays the biggest single IT company in the world without holding a monopoly club over any given market. It enables those of us who work for Big Blue to feel like the good guys, especially when IBM is amongst the leaders in the push for standards that are non-MS-centric for such efforts as Java, etc.
Disclaimer: I don't "represent" IBM. I just work there.
B. Elgin
Re:Cool, but..... (Score:2)
Solution to disasters (Score:2)
Here's why:
IBM actually manufactures (and sells) a mainframe system that comes with its OWN satellite uplink and guaranteed bandwidth. They're designed for use on oil platforms.
These systems also come with some insanely fancy remote-mirroring and update functions (because, after all, oil platforms are hostile environments for most computers).
So, if you're worried about natural disasters, you could theoretically buy two of these systems. Then you won't need to worry about anything less than a nuclear war -- even if the land-lines get killed, you've still got your friendly satellite.
Besides which, distributing a couple o' mainframes is a hell of a lot easier than effectively distributing 82,000 PC-based systems! I mean, heck, just think of the POWER requirements for the PC equivalents . . . Good lord, you'd probably have components failing on at least one machine every five minutes or so (MTBF would kill ya on that many machines).
Of course, I could be wrong.
Mainframe Linux (Score:2)
Re:Hrm! (Score:2)
A: Use a blender!
Sorry if it's a little, umm, inflammatory...
multihoming (Score:2)
One organization could lease space and a linux install to anyone.
This would help all those who have put boxes at a distant locale just for the use of bandwidth etc.
Re:One problem (Score:3)
Can you say "single point of failure"? Good! I knew you could!
Modern mainframes tend to have multiply redundant everything. System failure is not likely.
Re:Why bother? (Score:3)
It's a different world in the land of the mainframe..
Re:Single Point of Failure? (Score:3)
For those who don't do the math, that's an average of 5 minutes of downtime *PER YEAR*.
If the problems that you foresee *DO* happen, and it goes down for, oh, an hour, that would statistically speaking mean that 12 other sites had 0% downtime for the year.
The cumulative downtime of 41,000 servers would be *MUCH* more than 5 minutes.
Now, 41,000 is a *GROSS* exaggeration.
Re:One problem (Score:3)
While I haven't had the opportunity to work on something with 20 years of uptime, I *HAVE* worked on - and have a terminal open now to - a machine with 6 *YEARS* of uptime..
Re:Beowulf! (Score:3)
1) You have an application that wants to spin off a large number of separate tasks. Your Linux kernel will not perform well under these circumstances, but if you built a virtual Beowulf cluster of many Linux images, each running four or five active tasks per image, then each Linux image will run smoothly and efficiently -- within its design parameters.
2) You want to run a task that requires a huge address space -- far exceeding your real memory.
First off, things aren't so bad, because by using a shared segment, VM can use one shared-memory copy for all of those Linux kernel images, saving a lot of memory.
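A quick back-of-the-envelope on what that one shared copy buys. The segment sizes here are invented purely for illustration; only the image count comes from the story, and only the arithmetic is the point:

```python
N_IMAGES   = 41_000   # Linux images, per the article
KERNEL_MB  = 2        # read-only kernel code per image (made-up size)
PRIVATE_MB = 16       # writable per-image data (made-up size)

# Without sharing, every image carries its own kernel copy; with a
# shared segment, one read-only copy serves all images.
without_sharing = N_IMAGES * (KERNEL_MB + PRIVATE_MB)
with_sharing    = KERNEL_MB + N_IMAGES * PRIVATE_MB
saved_mb        = without_sharing - with_sharing  # (N_IMAGES - 1) * KERNEL_MB
print(saved_mb)
```

Whatever the real segment size is, the saving scales with the number of images, which is exactly why it matters at 41,000 of them.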
Also, mainframes can page so efficiently that they can massively overcommit memory without taking a performance hit. They were designed for this. The normal configuration of a VM system is thousands of users at terminals, each with their own virtual machine, running CMS. Each virtual machine might have two or three megs of shared program code, and however much private data they happen to be using at the moment. VM was designed to support a massive overcommitment of memory. Mainframes even have special paging storage, called expanded storage, and a set of hardware instructions for performing quick paging back and forth between real and expanded storage. Think of it as a fast ramdisk, attached right to the CPU bus.
The end effect is that you can actually get an improvement in performance by turning off your own paging and relying on VM's native paging facilities. MVS sites discovered this years ago.
Say you want to provide 2 gigs of storage to an application, but your mainframe has only 1 gig of memory. You would have two options:
1) Allocate about 1 gig to the Linux image, and create paging space within Linux. The Linux kernel handles all the paging.
2) Allocate 2 gigs to the Linux image, so that Linux never has to page, and let VM handle the paging.
You'll get better performance using method 2.
The biggest strengths of mainframe designs go right to the heart of your objections. It's what VM was designed to do, and it does it very well.
Comment removed (Score:3)
Sure it can run 41,000 copies on linux (Score:3)
Re:Single Point of Failure? (Score:3)
Read the news through LinuxWorld (Score:3)
But on topic: I had the pleasure of working with VM/ESA, on top of which was running another IBM mainframe OS, VSE/ESA. Several copies of VSE were running at any time. And we of course started additional ones, for testing of programs. And the uptime was incredible!!! We had an entire disk unit *destroyed* (filings of the hard disk material flying around), but the system was still happily humming on. Very impressive.
Also, did you guys know that OS/2 was developed partly by running it on top of VM? I think these mainframes + VM are the coolest technology to come out of IBM, if we don't count the bionic chip coming out in 2015
Re:Networking question (Score:3)
Re:"Microsoft tried other platforms but failed" (Score:3)
Microsoft doesn't try other platforms for WinNT.
Microsoft works out a license arrangement, and the hardware vendor does the porting of the very few bits of native code.
It's the hardware vendors who have given up on their own WinNT ports... MIPS and DEC.
The only hardware Microsoft has been interested in, is the hardware which the typical end-user would put their grubby mitts on. First, Apple BASIC cards (when every end-user knew what a PCB was). Then mice. A short-lived i186 booster card. Millions of more mice. Trackball. More mice. A few gaming devices. More mice. Now the X-Box. Common thread: grubby mitts of the unwashed masses.
I'm not flamebaiting here. Linux may be cool, Linux may have superior traits in some regards, but as a whole, Linux has a lot to learn about offering products to the winning markets. 'Cuz there's only two winning markets: business-to-business (Why should I trust you with my billion-dollar mission-critical apps? You don't even have the money to pay for software!*) and mass-market (I don't even know how to turn the dang thing on!**)
* Suits don't care about how kewl something is. They don't want to be surprised. They don't want risk. They want to do it just like the other guy does it, except with a somehow better profit margin.
** If you say you've never heard someone say this exact line, you're lying.
Telco Ties with IBM (Score:3)
Hmm, Qwest just announced they were entering into a hosting partnership with IBM, opening many joint-venture web hosting centres in the US...
At the same time, IBM also has a deal with a web site design firm in Minneapolis, MN to make large numbers of web sites...
The plot thickens as they attempt to take over the world!
Single Point of Failure? (Score:4)
From the article, I got the impression that they wanted customers to use their existing mainframes (presumably data warehouses and such) as webservers. At least, that's what I got out of their claims at increasing speed by doing away with webserver-database latency.
Problem with that is, something that takes down one service takes down both of them. I realize mainframes are pretty damn reliable boxes, but if it goes down, do you want it to take your webserver with it?
(I'm assuming the security issues inherent in putting a webserver -- esp. a public one -- directly on one's data warehouse have been hashed out in the course of the VM development. Nonetheless, websites are flypaper for h4x0rz -- that's putting a lot of trust in software.)
Same thing holds for anyone using it to replace 41,000 (yeah, whatever) webservers. One machine fails, 41,000 web servers (and god knows how many sites) out of business. I suppose a redundant mainframe is sufficient insurance -- but how much more appealing is that than buying a comparable number of Suns, and having just a few backup boxes?
Seems like an interesting idea, and it certainly creates options. I don't know if it's the Sun-killer, though; and though it might convince existing users to not buy Sun, I don't know how many new buyers it would attract.
Then again, the closest I got to working on a mainframe was touring a server room with a bunch of AS/400's in it once, so don't think I'm the Delphic Oracle or anything. :)
phil
Articles about this: (Score:4)
There are more given in the LinuxPlanet article (which is where I got the other links).
A practical use for this? (Score:4)
Give me money, and I give you root access to your own, incredibly reliable, Linux box. If you trash it, it can be restored from backups in seconds. Incremental cost of adding another virtual host: almost nil. Until, of course, we get to 41,000. By then I should have enough money to buy a new mainframe. And so on.
Re:Old, vunerable mainframes (Score:4)
That would actually make a good sales slogan for Big Blue. Pan the camera over a virtual jungle of CAT5 and RS232 strewn across raised flooring, and up onto a cluttered wall of dilapidated 2U servers. One unit is smoking and sparking, foreground right. Announcer: 'Would you rather have 10,000 chickens?' Screen cuts to black as the camera zooms out to reveal the IBM logo on a shiny new mainframe. Announcer: 'Or one bull?' Wrap it up with the standard IBM music and the blue bar quick-scrolling to a stop. 'IBM E-Business: A bull in your corner.'
In other words... (Score:4)
*ducks*
Here's my [redrival.com] copy of DeCSS. Where's yours?
Re:Linux wasn't the first choice (Score:4)
Hrm! (Score:4)
(sorry for the ad slogan infringement, but it seemed like the right thing to do
-Bugbbq
41000 copies Linux sing along (Score:4)
(to the tune of 99 bottles of beer/..... sorta)
41000 copies of Linux on the box
41000 copies of Linux
if one of those copies should happen to fail
wait in the dark, til the power comes on
(Repeat)
A large ISP.. (Score:5)
They have been doing what VMware is doing, i.e., running virtual machines, for nearly three decades. They know what they're doing.
I'd be interested to know which large ISP is looking to use them in this way. To my knowledge, this would be the first published use of a mainframe specifically to serve as a server multiplexor (is that a word?) in an ISP environment. This could be the 'next big thing' for these machines. Either that, or it could be yet another flash in the pan with a lot of 'cool' factor..
(Fingers crossed)
Re:Hrm! (Score:5)
Information on VM and Linux (Score:5)
First off, this setup is running under VM/ESA. This is NOT the same operating system as OS/390. Diehard VM'ers tend to view OS/390 about as fondly as Linux users view Windows. OS/390 is the huge, IBM-management-approved operating system with JCL, that evolved out of OS/360. VM is the back-room project that IBM management has tried to kill, over and over, but can't kill, because it's needed for OS/390 development, IBM developers demand it, and many customers demand it. OS/390 is what management wants to sell -- it's the "strategic" operating system. VM/CMS is what the IBM development teams use because it was designed, from the bottom up, by IBM's best software developers, specifically as a platform for software development. Really. I used it for 15 years. If you're developing or debugging IBM assembly code, it's just the best. VM was a skunkworks project, and a damn fine one. It's a shame that it isn't that well known.
The two operating systems should NOT be confused. Different operating systems. Entirely. OS/390 can run as a guest under VM/ESA, but not vice versa.
That said, VM has a HORRIBLE native TCP/IP implementation. It's a big program, written in Pascal, and it's a dog. In fact, it's about the weakest part of VM. It never got much attention, because mainframe networking has always been driven by SNA, VTAM, etc., and IBM development is traditionally done on a 3270-style terminal. All the tools, XEDIT, the mail system, etc., are designed for 3270 block-mode terminals. VM is lacking in TCP/IP support for the same reason that Unix systems are lacking in SNA support: no one wanted it. This is changing.
The VM TCP/IP implementation is a standalone program. The TCP/IP program runs in its own virtual machine. When someone wants to connect to TCP/IP, they use a system call to establish a connection between their virtual machine and the TCP/IP virtual machine using a facility called IUCV -- Inter User Communication Vehicle.
IUCV is a very fast, block-oriented, secure, unspoofable point to point protocol for establishing data links between virtual machines. A programmer using IUCV starts by creating a link to the target, then sends blocked data by making a system call with the address/length of the data. The CP nucleus (their word for the kernel) copies the data into the system address space, synthesises and schedules an interrupt for the target virtual machine, and immediately reschedules the source virtual machine. The target virtual machine receives the interrupt, issues an IUCV receive system call, and CP copies the data into the target machine address space. This is all done completely asynchronously. It's extremely fast, and utterly secure. Zero-latency networking is a nice thing to have.
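To make the send/receive flow above concrete, here is a toy Python sketch of IUCV-style links. The class and method names are invented for illustration (this is not the real IUCV API, which is a mainframe system-call interface); it only models the properties the text describes: CP copies blocks between virtual machines, and delivery is in-order with nothing dropped.

```python
from collections import deque

class ToyIUCV:
    """Toy model of IUCV-style links between virtual machines.
    CP copies each block into system storage on send and out to the
    target on receive; delivery is in-order and never drops a block."""

    def __init__(self):
        self.links = {}  # (source, target) -> queue of copied blocks

    def connect(self, source, target):
        # Establish a point-to-point link between two virtual machines.
        self.links[(source, target)] = deque()

    def send(self, source, target, data):
        # CP copies the data into the system address space and queues
        # an interrupt for the target; the source resumes immediately.
        self.links[(source, target)].append(bytes(data))

    def receive(self, source, target):
        # The target handles the interrupt and issues an IUCV receive;
        # CP copies the oldest pending block into its address space.
        return self.links[(source, target)].popleft()

cp = ToyIUCV()
cp.connect("LINUX01", "TCPIP")
cp.send("LINUX01", "TCPIP", b"GET / HTTP/1.0")
cp.send("LINUX01", "TCPIP", b"Host: example")
print(cp.receive("LINUX01", "TCPIP"))  # blocks come back in send order
```

Note that the "network" here is nothing but two memory copies through the queue, which is exactly why IP over IUCV has essentially no wire latency.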
Which leads to something very cool. IP over IUCV.
I don't know exactly how they set their system up, but here are the basic tools that they have to work with:
1) TCP/IP to the outside world can be handled in at least two ways:
o Through a native Linux network device driver. In VM, physical peripheral devices are assigned to individual virtual machines. A virtual machine with a physical network interface attached to it simply uses it as an ordinary I/O device.
o Via a connection to a native TCP/IP virtual machine, using a special device driver that knows the native IP-via-IUCV protocol.
2) Connections between virtual Linux machines can be handled in a couple of ways:
o Through a virtual (or real) CTCA (channel-to-channel adapter). A CTCA is a high-speed parallel interface used to connect mainframes together, point to point, and it is very fast. If you use virtual CTCAs, you can move Linux images from one machine to another without ever reconfiguring anything within the Linux images themselves: simply replace the virtual CTCAs with an attached real CTCA and change the directory entry of the virtual machine.
o Using an IUCV driver, one can interconnect all of the internal Linux images via virtual point-to-point lines. This is much faster than virtual CTCAs. The drawback is that you need to configure the IUCV links within the virtual machine, so changing things around requires reconfiguration within the Linux image itself; also, IUCV is designed to work efficiently within a single system, not across multiple systems. It can be done, but it's a hack, and it's inefficient.
o Through an obsolete API called VMCF, which was superseded by IUCV.
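As a purely illustrative sketch of the IUCV-driver option, here is roughly what bringing up a point-to-point IP-over-IUCV link might look like from inside one Linux guest, assuming a netiucv-style driver that exposes the link as an iucv0 interface. The addresses, MTU, and device name are made-up values, and the exact commands varied by kernel and driver version:

```shell
# Hypothetical sketch: iucv0, the addresses, and the MTU are invented
# for illustration; real setups depended on the driver version in use.
# Bring up the point-to-point interface toward the peer Linux guest,
# then route the peer's address over that link.
ifconfig iucv0 192.168.100.1 pointopoint 192.168.100.2 mtu 9216 up
route add -host 192.168.100.2 dev iucv0
```

The peer guest would do the mirror image of this, swapping the two addresses.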
The big innovation going on here is the realization that by running multiple Linux images on a single machine, or multiple Linux images on multiple machines, using mostly IUCV links, one can almost eliminate the network latency, because the data transfers are simply memory copies, and one can eliminate the network collision and traffic problems. If you have 100 machines sitting on a fast ethernet and you start getting a lot of inter-machine traffic, you are going to have collisions, and each machine has to waste a fair amount of time evaluating which packets are its own. This removes the biggest bottleneck in large clusters of small machines. Also, an IUCV connection is guaranteed never to drop a packet, and always delivers packets in order, so TCP over IUCV proceeds smoothly and efficiently.
This gives you lots of scaling options for your virtual Linux network.
One more point.
There was an article [slashdot.org] two days ago that, due to a Slashdot bug, never appeared on the main page but went directly to the "older news" category. In it, the author wrote:
An S/390 running a light load will not run as quickly as a fast PC server under a light load, according to Courtney. The difference between the two systems will not be apparent until the load is much larger.
"The PC will begin to degrade and will typically reach a point where it avalanches down in performance as its load limit is exceeded. The mainframe starts out at a lower performance level, from the standpoint of an individual program task, but degrades much more slowly and much more linearly as the load increases," he says.
Revisiting my previous comment in this thread, I remember, a while ago, reading in another article about a difference of opinion between some IBM programmers and the kernel maintainers. Supposedly, IBM was complaining that Linux performance went south when the number of running tasks became large, and proposed some scheduler changes, but the kernel developers didn't want to change it because the changes would have slowed the kernel down in the "normal" case of only a few active processes. Does anyone have a link to this or remember what I'm talking about?
Sounds like this article is describing the same known effect. However, by running multiple images of Linux under VM, one obtains a workaround for the problem. If a Linux virtual machine is overloaded, create a new virtual machine image, and offload one or more of the biggest processes to run on the new machine.
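The linear-degradation point is easy to see with a toy model. Linux schedulers of that era scanned the entire runqueue on every pick to find the "best" task, so per-switch cost grew with the number of runnable tasks. This Python sketch (function and cost model are my own simplification, not the actual kernel code) just counts the comparisons such a full-scan scheduler would do:

```python
def picks_cost(runnable, picks):
    """Toy model of an O(n) runqueue scheduler: every pick scans all
    runnable tasks, so total cost is runnable * picks comparisons."""
    comparisons = 0
    tasks = list(range(runnable))
    for _ in range(picks):
        best = None
        for t in tasks:          # full runqueue scan on each pick
            comparisons += 1
            if best is None or t < best:
                best = t
    return comparisons

# A handful of runnable tasks is cheap; tens of thousands means every
# single context switch pays for a scan of the whole runqueue.
print(picks_cost(5, 1000))      # 5000 comparisons
print(picks_cost(41000, 1000))  # 41000000 comparisons
```

Under VM, each Linux image keeps its own runqueue small, which is exactly the workaround described above.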
This is all very interesting stuff. Don't forget: the stuff we're just discovering now in the Linux world is largely stuff that the IBMers, and especially the VMers, have been working on and perfecting for about 30 years. I'd love to see a Linux kernel that can run 41,000 tasks with a linear performance-degradation curve. Until then, at least there is a way to run Linux on an operating system that has those characteristics.
And the fact that their operating system can run 41000+ simultaneous tasks without disintegrating, but ours can't, should eventually get under someone's skin and prompt efforts to make the Linux kernel scale better under heavy multitasking loads. Why should they have all the fun?
- John
Re:However (Score:5)
I spent some time working on IBM's TCP/IP stack. You're talking about the past. The mainframe's stack used to be single-threaded and very slow. As a workaround, IBM hacked it so that you could run several stacks on one image (which caused its own 'stack' of problems, of course).
Release 6 was a complete rewrite of the TCP/IP stack. They used it to set industry benchmark records when it hit the 1.5 billion pages per day mark about two years ago. They gave us all nice denim shirts with the embroidered slogan "1.5 Billion served". I wouldn't call that crappy, would you?
Linux wasn't the first choice (Score:5)