IBM

S/390 Support is Now on Kernel 2.2

Alan Cox has released kernel 2.2.14pre14 (and now pre15). The big news is that IBM S/390 support is now merged into the 2.2 kernel (most of it). Currently the port features full SMP support, disk, networking, and console. More details can be found in this feature from Linux Today
This discussion has been archived. No new comments can be posted.

S/390 Support is Now on Kernel 2.2

Comments Filter:
  • by alonso ( 63617 )
    Why are they (IBM) porting Linux to that platform?
    Are they thinking about the future of S/390 without OS/390?
  • Cool, but rough kernel support alone doesn't cut it. Someone needs to make a Linux distribution for S/390 to make this port useful.
  • I guess they would just need to port the applications across to it though... therefore, probably not a big deal.
  • That bloody kernel eats up 50MB already! No more ports! Stop it before it's too late! Heeeeelp!!
  • by davie ( 191 ) on Saturday December 18, 1999 @04:28AM (#1463660) Journal

    The OS that runs on the S/390 supports multiple virtual machines. As I understand it, Linux will run in one or more VM partitions (?)--it will not be the primary OS.

  • by Anonymous Coward
    Well, well!

    After so much back-stabbing, behind-the-scenes, out-flanking FUD and BS from commercial companies panicked about how they could possibly survive real "Open" competition, Unix is finally getting unified.

    That it's Linux does not matter.

    That it's open source does.

    The amount of sharing between SGI, Sun, IBM, and especially the free "Unix-like" operating system groups -- *BSD definitely included -- can only accelerate.
  • To be honest, I already had the chance to read 3 comments, so I have the advantage of knowing what at least 3 other people think :) To me it seems only natural - a consequence of a trend that has grown stronger lately. No longer than a week ago, Creative announced job openings for open source driver work. Over the last year, major companies (I guess all of you know them, so I won't bother mentioning names) have announced their intent to enter the Linux market. Why? Because it's good. Because it could mean the end of an era in the computer industry - an era dominated by egos (and not in the nicest way, I could say). It also means that we may have grown up (as an industry, as a field of research, etc.). So I guess I can say that this is really good news.
  • I have read this is the first step; the second is to make it the primary OS.
  • by RNG ( 35225 ) on Saturday December 18, 1999 @04:44AM (#1463665)
    With Linux ports now ranging from PDAs to PCs to workstations and now to mainframes, Linux is actually proof that you can write a portable OS without using a microkernel. The argument used to be that only a microkernel-based OS could be highly portable, but Linux proves that this is not true. We've gone from 1 platform (IBM PCs) to lots of them (I have no clue what the current count is), with the first few ports being done with (virtually) no commercial backing.

    Some companies out there (with deep pockets) who once claimed (or at least aimed for) portability across platforms should be seriously embarrassed by this. Linux proves that portability can be achieved under a traditional/monolithic kernel design. And while some OS purists/professors may argue about some of the finer points of this, it should be noted that Linux is here now and it works on a ton of platforms. The fact that it's free and (as far as an OS can be) cool is an added benefit, with the latter being lost on 99+% of the population ...
  • Help me too! Those of us who are not blessed with multi-T3 internet connections and who happen not to have 50+ megs of hard drive space to play around with are a bit out of luck :-( Fortunately there are patches, but I'm personally worried about messing it up if I patch more than 2 times or so. Split the kernel based on architecture, guys! Of course there is the argument of being able to cross-compile and stuff, but how many people are actually going to bother cross-compiling a kernel, and of those who are, can't they wait another couple of minutes at most for a kernel with the ASM and drivers for that architecture to download?

    GET linux-2.4.1-i386.tar.bz2

    How hard can that be?

    Kenneth

  • by jlnance ( 4756 ) on Saturday December 18, 1999 @05:02AM (#1463667)
    From what I hear, the reason they are doing the port is that they found that mainframes make great web servers, but it is easier to port Linux to the 390 than it is to port all the programs you would want on your web server to MVS or CMS.
  • ...without using a Microkernel.

    Just wanted to point out that Mac OS X [apple.com] is a microkernel OS (built on the open-source Mach microkernel). But fortunately the monolithic LinuxPPC [linuxppc.org] runs on most every new Mac. (I don't know about the G4; I haven't checked in a while.)

    I will not argue with your later facts about Linux, because they are correct IM(H)O.

    Ken

  • Um, that's what patches are for. You know, patch-2.2.13.bz2. If you're going to be compiling kernels it's assumed that you know how to apply a patch. It's not rocket science.
  • Personally I think that Linux should be ported to more systems. A solution could be to split the tar-balls into separate per-system archives.

    Regarding patches, I have not downloaded a full tar-ball since pre-2.2.0 something; applying patches is easy. You only need to do a 'make clean' and 'make oldconfig', which is far faster than reconfiguring your system every time you upgrade the kernel.
  • by Adnans ( 2862 ) on Saturday December 18, 1999 @05:10AM (#1463671) Homepage Journal
    If you like to muck with new kernels and don't really have the bandwidth, learn to use patches PROPERLY; it's not that hard. If not, just stick with your favourite distribution and wait for the next CD release to upgrade.

    Splitting the kernel into separate architecture modules is going to be a nightmare for the kernel maintainers. They will have to spend more time maintaining and less time hacking; you don't want that, do you? Besides, the archive is only 13MB bzip2'd now. That's only like 3 full-length mp3 songs! Think about that! :)

    -adnans

    PS. 'fraid of messing up your kernel tree with patch? Try patch with --dry-run first.
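The --dry-run rehearsal suggested above can be seen end to end on a throwaway file. This is just a sketch: the /tmp/patchdemo directory and the file names are invented for the demonstration and are not part of any real kernel workflow.

```shell
# Build a tiny mock "tree" and a patch for it (names invented for the demo).
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'version 2.2.12\n' > linux.txt
printf 'version 2.2.13\n' > linux.new
diff -u linux.txt linux.new > 13.patch || true   # diff exits 1 when files differ

# Rehearse: --dry-run reports what would happen but touches nothing.
patch --dry-run linux.txt < 13.patch
grep '2.2.12' linux.txt          # the old contents are still in place

# Satisfied with the rehearsal? Apply for real.
patch linux.txt < 13.patch
grep '2.2.13' linux.txt          # now updated
```

The same two-step rhythm applies to a real tree: run patch with --dry-run from the top of the kernel source, and only re-run without it once the rehearsal reports no failed hunks.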
  • I think this is very interesting. Linux to push hardware: s/390, alpha ....
  • But given that the NetBSD folks are geniuses at this sort of thing, I'm sure it will be along soon ;) Evidently there's rumours of a Dreamcast port of NetBSD as well, whee!#@$!
  • by MattMann ( 102516 ) on Saturday December 18, 1999 @05:17AM (#1463675)
    What is a mainframe from the point of view of linux? What sort of changes are required to port to a mainframe, particularly to a S390? Folks talk about stuff like IO channels and VMs but what are the salient features that I don't know about?

    Assuming I'm familiar with von Neumann architectures, stack machines, microprocessors, minicomputers, memory-mapped memory, memory-mapped devices, IO ports, interrupts, and the unix concepts of streams, char devices, block devices, etc... what don't I know about mainframes? (Please don't make me read the source :)

    One thing I *do* know from using them briefly is that IBM "terminals" (3270s? something like that) are really weird: they are not simply connected via a serial cable. They have these extra control signals that light up indicators that say "you can't type now, I'm busy", and the text editors seem to do their editing on the "screen" locally and then send the changes back when you are done. I realize this has nothing to do with the kernel, but it would seem to make the whole experience quite surreal.

  • What is the value of a Linux port to the S/390? The price/performance ratio would be awful -- even before taking into account the hardware maintenance costs of an S/390.

    I can see maybe a small scalability value in that the latest S/390's have quite fast processors, which, along with their small number (10? 12? on Hitachi?) of CPU's and Linux's limitations with large CPU counts might combine to be as fast an SMP Linux port as is available -- but surely not much faster than a Compaq 8400 (or whatever they call their high-end SMP Alpha box these days).

    Does this make solid business sense to anyone?
  • by dhms ( 3552 ) on Saturday December 18, 1999 @05:22AM (#1463677)
    I can see it now... the scene: March 1, 2000, a call to IBM's Large Systems Group Order line...


    IBM: Hello, IBM S/390 sales group, how may I help you?

    Caller: I'd like to buy a '390 with 32 CPUs and 64GB of main memory

    IBM: Would you like disks and communications with that?

    Caller: Yes. I'd like 400 terabytes of redundant, channel-attached DASD's, a full complement of COMC's for 3270 and ANSI terminal devices for 500 directly connected users, LU6.2, SNA and TCP/IP networking over fiber and coax, and an attached robotic tape library.

    IBM: Which operating system would you like? VM/390 or Linux?

    Caller: Linux, please.

    IBM: No problem. We can pre-install it, or you can download it from ftp.kernel.org on the Internet.
    We'll schedule overnight delivery of your system, please make sure there's someone available in your data center who can sign for the delivery...
    Oh, and will you be paying for this with Mastercard, Visa, American Express or a purchase order (valid D&B required)...?

    Caller: Bummer, you don't take Discover? Um... Amex, I guess. Can I get some Linux/390 t-shirts and coffee mugs with that too?
    :
    :

    Hmmmmm.... I wonder how much power and A/C I'd have to install in the basement in order to...

  • by Anonymous Coward
    Does anybody know where I can find the gcc patches to compile for S/390?
  • I can't tell from IBM's web site what sort of processor(s) these beasts use. It just says "S/390 capable processor." eh? Anyone know?
  • Does anyone know what the S/390 MCM goes for?

    1.5 Billion (with a B) transistors on a 127x127 mm Multi Chip Module! Wow!

  • Hmmmmm.... I wonder how much power and A/C I'd have to install in the basement in order to...

    Depends if the S/390 support is for their PowerPC-based CMOS boxes or not.. the CMOS boxes are pretty efficient (about the size of a RS6000 990) compared to earlier designs. I'm assuming the PPC port is the compatible port as there's already been a lot of work on Linux in the PPC area...

    Your Working Boy,
  • In your example, you can get the 5->6, 6->7, 7->8, 8->9, 9->10, 10->11, 11->12.

    It's a shame there isn't a script that does this for you....

    Hmm, an idea has just formed...


    Your Working Boy,
  • by Anonymous Coward
    The S/390 is the processor. We have an R/390, which is an RS/6000 F50 (which is a PPC machine) with an S/390 chip sitting on a PCI card. There is also a P/390, which is the same idea but is a PC running OS/2. These of course are extremely low-end machines (but they are all I have ever used). No sane person would use them for anything other than testing (we bang Tn3270 clients against ours).

    If you want to know about the real big iron, go to http://s390.ibm.com/ [ibm.com].

  • ...many times on linux-kernel. The short story is that Linus will never do this, but obviously won't stop someone else if they want to give it a try.

    However, the benefit of doing this is minimal. The majority of the code in the kernel is not in the arch/ subdirectories, but rather in the drivers. A more reasonable approach to me would seem to be some sort of dynamic system (web-driven or otherwise), where you could go and "order" a custom kernel tarball (i.e. i386, SB32, NE2000, nfs and firewall support) and out pops a stripped-down kernel source tree with the appropriate subset of the kernel proper.
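The "stripped-down kernel source tree" idea above can be sketched in a few lines of shell. Everything here (the /tmp/ktree path and the mock directory layout) is invented for the demo; it only mimics the fact that a 2.2-era tree keeps per-architecture code under arch/ and include/asm-*:

```shell
# Mock up a miniature source layout (a stand-in for a real kernel tree).
mkdir -p /tmp/ktree/linux/arch/i386 /tmp/ktree/linux/arch/s390 \
         /tmp/ktree/linux/arch/alpha \
         /tmp/ktree/linux/include/asm-i386 /tmp/ktree/linux/include/asm-s390

# Keep one architecture, drop the rest.
KEEP=i386
cd /tmp/ktree/linux
for a in arch/*/; do
    a=${a#arch/}; a=${a%/}                 # arch/i386/ -> i386
    [ "$a" = "$KEEP" ] || rm -rf "arch/$a" "include/asm-$a"
done

# Repack the slimmed tree.
cd /tmp/ktree && tar czf linux-$KEEP-only.tar.gz linux
```

A web front end could run something like this per request; as the comment notes, though, most of the bulk lives in drivers/, so the real savings would come from pruning unselected drivers as well.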
  • by Greyfox ( 87712 ) on Saturday December 18, 1999 @06:25AM (#1463690) Homepage Journal
    Because they can!

    There's quite a huge Linux culture in IBM that is currently comprised mostly of techies -- very few people in management really understand or use the OS (Though they are getting caught up in the hype, so the hype does have its uses.)

    IBM has accidentally managed to hire some very sharp technical people, many of whom the corporate culture has not yet crushed the spirit out of, and those people might say "Gee, it'd be neat to port Linux to a S/390." Much of the cool stuff that has come out of IBM historically has been initiated by single employees in the company, often working on their own time. I'm rather surprised the S/390 changes were allowed to be released, since the standard IBM contract says IBM owns anything you do in your own time, and AFAIK they have not yet released any guidelines for writing open source software in your own time.

  • Back when OS/2 was the big thing, IBM's eventual goal was to have the OS ported to all its hardware, from the lowly PC to the biggest mainframes. The rationale was that by doing this, you would have to spend less time retraining employees as you scaled up to larger and larger hardware.

    IIRC the i386 version of OS/2 had a monolithic kernel, so they started from scratch on the PPC using a microkernel design. I saw it running once. The guy whose machine it was said it'd run for about 1/2 hour and then freeze up solid. IBM scrapped OS/2 on the PPC shortly before they admitted they'd been defeated by MS in the OS war and dropped support for OS/2 for everyone but the really big customers.

    I find it ironic that Linux is now realizing the goal of one OS everywhere.

  • by Greyfox ( 87712 ) on Saturday December 18, 1999 @06:34AM (#1463692) Homepage Journal
    Ok, I think this calls for a new Mindcraft test.

    Let's take the biggest hardware NT runs on and the biggest hardware Linux runs on...

    I guess this pretty much kills the FUD about Linux not scaling well...

  • Linus touched on this subject (micro vs. mono) in "Open Sources: Voices from the Open Source Revolution". His chapter is titled "The Linux Edge" [oreilly.com].

    He believes that the excessive money spent on microkernel research (in the 80s-90s?) was not only a waste, but perhaps downright corrupt:

    In fact, [optimizing tricks for microkernels that also would apply to monolithic kernels] made me think that the microkernel approach was essentially a dishonest approach aimed at receiving more dollars for research. I don't necessarily think these researchers were knowingly dishonest. Perhaps they were simply stupid. Or deluded. I mean this in a very real sense. The dishonesty comes from the intense pressure in the research community at that time to pursue the microkernel topic. In a computer science research lab, you were studying microkernels or you weren't studying kernels at all. So everyone was pressured into this dishonesty, even the people designing Windows NT. While the NT team knew the final result wouldn't approach a microkernel, they knew they had to pay lip service to the idea.

    Gee, I hope quoting a paragraph from an open source book isn't illegal. Eh, what the hell.
  • Well, it still is infeasible for any company to do. Think of how many developers Linux has. Then pay them dirt cheap, say $60K/year... What's that put your payroll at?

    But what a company couldn't do, a community of individuals can do, because there isn't the underlying motive of making money... Of course that's all changed now, thanks to RHAT and LNUX, but hopefully there's still enough of the original drive left that Linux will emerge unscathed.

    So far as your "free" goes... we've discussed this before, but linux is only free as in money, not as in speech.... That's a discussion for another day, however.
  • Consider this: when was the last time you heard of a MacOS emulator for Windows? To be clear, I think FreeBSD is wonderful. The point is that the collective BSDs have already lost the battle of 'network effects'. Linux has not only a much larger community of individual developers and users, it also has IBM and SGI contributing to the kernel now. Going back to the initial topic - this (Big Corp. porting Linux to their Big Hardware) seems like an obvious eventuality in the (near?) future. Yes, Linux & GNU (GNU/Linux anyone?) are becoming the 'Standard Unix'. Consider:
    - The Linux ABI as an emerging standard: witness FreeBSD running Linux binaries.
    - GNU/Free Software as a standard 'environment'. How many people use a Sun box 'as is' out of the box? The _first_ thing I do is load up bash, tcsh, gnu-fileutils (I love 'df -h'), etc.
    - The Linux kernel as a standard, from small embedded hardware to, soon(?), IBM 'BigIron'.
    - The Linux OS (colloquial sense -- GNU/Linux for the literal-minded) as a 'standard'?
    The battle rages on... p.s. Why? I say the GPL and Linus' true achievement of establishing a _very_ open development process/community. Real Hackers(TM) and BigBlue rub elbows (well, patches)! Who'd have thought.
  • Imagine all the applications they now get at a glance: all the open source ones (Apache, Perl, Gimp) and even the proprietary ones (SAP, Oracle, etc.). Finally Linux is heading toward WORA faster than Java.
  • by Anonymous Coward
    As others have noted, it makes sense in shops that run VM, bought the mainframe primarily to run other OS's on it, but want to run web stuff too. It's less hassle to run Linux as another OS on top of VM than it is to port web tools to a mainframe OS.
  • by Anonymous Coward
    The AS/400 line of machines has just about the same hardware specs as the RS/6000 line, and the RS/6000 line already supports Linux (though not on all available systems), so it wouldn't make sense to port Linux to AS/400 systems separately.

    On RS/6000, Linux is only supported on the 43P and the F50 models, a desktop workstation and a workgroup server. They only have a maximum of 4 processors. The really high-end stuff, the S80, can have up to 24 processors and 64GB of memory. But as Linux cannot scale that well (yet), it's not much of an issue.

  • by fwr ( 69372 )
    for i in `ls patch-2.2.*.gz | sort -t . -g -k 3`; do
        zcat $i | patch -p0
    done

    Just have all the patch-2.2.x.gz files in /usr/src that you need to take you from your current version to whatever version you want. I even tested this from 2.2.0 through 2.2.13 and it worked fine. Moving linux to linux-2.2.13, untarring linux-2.2.13.tar.gz, and then running diff --unified --recursive linux linux-2.2.13 shows:


    Nothing! See, told ya it isn't rocket science. I'd frankly be a little worried if there were any differences...
  • Well, nothing like seeing another linux port on /.
    Makes me happy for the rest of the day. Now that we have mainframe support, the next thing to do is go the exact opposite direction - embedded systems. Then we may truly bask in the glory of an OS that supports every machine from embedded systems to mainframes. Are there any major architectures that Linux doesn't support yet?
  • Wouldn't these babies make an awesome beowulf cluster???

    Wait a minute, each one is like a beowulf cluster :o


    -W.W.
  • Along the lines of IBM's struggle with OS/2, don't forget the original goal of Windows NT was to be an ultraportable operating system, and NT4 shipped with support for 4 platforms. Unfortunately, in a 'work-alike' situation, the platform with the cost and mindshare advantage won 98% of NT's market, and NT5 is now shipping with x86 support only. (However, I've heard rumors that WinCE is a stripped-down NT kernel, and that does run on several CPUs.)

    The moral of IBM's and MS's story is that maintaining a multi-platform commercial OS is a money loser. (Sun keeps making the poorly supported Solaris x86, apparently as an intellectual exercise.)

    Just another case where Linux breaks all the normal rules.
    --
  • You said:
    but linux is only free as in money, not as in speech


    I may be just plain stupid, deluded or otherwise insane, but I always thought that one of the main attractions of Linux (OSS in general) was that it was free, as in free speech. Did I miss something rather fundamental here?

    As for your argument about the price of Linux developers: let's say you have 1000 (full-time) developers and pay them $60K/year. That would be $60 million. For companies like IBM, Sun or MS that's peanuts. So it's not just a matter of money, although I agree that in principle Linux gets an incredible amount of development work for free ...

  • I thought IBM's OS/2 porting problems were caused by the excessive amount of asm used...

    Linux has the advantage of a limited amount of asm...and being compilable by an almost universal compiler.
  • Linux is not free as in speech because you and I cannot do whatever we want with it. I can't make my own distro and refuse others access to the source code. You may say that's fair, but I say that's limiting what I can do with it. I think the BSD's are a much better example of "free" software as in speech, except you still need to retain their copyright notices...

    Yes, one of the attractions of OSS is that it's "free" and you can see the code, but the GPL does put severe limits on what you can and cannot do with that code.
  • This is all off topic, but that being beside the point, I may as well interject.

    It's actually not disputed that *bsd is quicker at some things than Linux.

    It's also not disputed that *bsd is more stable under some conditions than Linux.

    Conversely, it's additionally not disputed that the same can be said in favor of Linux.

    Maybe dispute isn't the right word; folks are definitely arguing about it, but that's not the point.

    The ability to run Linux binaries on various BSD variants has been around for quite a while. I'm also not particularly impressed by the intellectual capabilities of someone who'd figure that was a preferable way to go about things.

    You've got the source, why not just recompile? Why on earth would you use a compatibility library when you can just compile it as a native binary?

    If *you* want to take that as reflecting on the sanity of the average BSD user, that's up to you.

  • Maybe so, but what does it say about the state of an OS when its developers write significant portions of code to enable it to run applications written for another OS?

  • by Anonymous Coward on Saturday December 18, 1999 @09:19AM (#1463711)
    Back in the summer of 1998 it slipped out in an open IBM Linux online forum that an intern in the Toronto Labs had, in his spare time, ported Linux to run under VM on a 390 mainframe. There was a lot of talk amongst us propellerheads about how good this would be for IBM to do with Linux what the marketing folks claimed we would do with OS/2 (and failed).

    Then the PHB's broke into the discussion and squished it. It frightened them. I think it still does. IBM doesn't make nearly as much money on the iron as they do on the software. Make the software free and there is a lot of lost revenue. Maybe the PHB's are coming around to the fact that IBM still makes zillions on service, and it's better to be the first company to offer a unified UNIX solution cross-platform than to watch a competitor do it.

    With IBM actively involved in development of everything from palm devices to big iron, Linux only makes sense for selling smart solutions to customers. So, some revenue is lost on software sales. Big deal. Increased volume via bundled solutions will make up for that.
  • The official Linux/400 site is at :
    http://users.snip.net/~gbooker/as400.htm
  • > Linux has the advantage of a limited amount of asm...and being compilable by an almost universal compiler.

    Actually, a big skeleton that more than one of the commercial vendors has in the closet is that their commercial OS kernels are built with gcc and not the compilers they sell to their customers, for the obvious technical reasons. I won't name names, but they know who they are :-)

    If they stopped to think about it, they'd realise they could make more money by abandoning C/C++ compiler development and instead selling officially supported packages of gcc or egcs. A classic case of open source being a more viable business model as well as the best technology.

    Linux and *BSD are far from being the only OS'es built with gcc.

  • Linux & GNU (GNU/Linux anyone?) are becoming the 'Standard Unix'.

    Your point only proves the pointlessness of your argument. If a PC workstation is running a (Linux/*BSD/Solaris/UnixWare) kernel and has the GNU toolset and (KDE/Gnome/WindowMaker/etc) installed, it is going to be virtually indistinguishable, from a user's standpoint, from any other PC Unix system with a similar setup. The differences will only manifest themselves when you are doing system administration, but that's not what you are talking about.

    So stop the pointless advocacy of Unix kernel A over Unix kernel B, because by and large users don't care about Unix kernel services. You are better off spending your time advocating *nix itself over other alternatives (especially because it's become a non-entity on the desktop, partially due to dickering such as yours.)
    --
  • Why the heck would anyone spend the kinda huge $$ it takes to run a 390 series machine or a Fujitsu or other 3rd-party VM processor, and then run freeking Linux? What kinda moronic place would RUN this???

    Do any of these guys know what it costs to run these things!!!?? Let alone the insane maintenance costs IBM extracts from customers!

    The electricity alone for a month would buy a couple 'a beautiful 4p VA boxes. Not to mention the stupid cooling requirements, floor space, etc.

    This is yet another in a long line of STUPID VM tricks from IBM so that this old-arse arch. stays around. What a pathetic waste of resources. (and floor space)

    fsck this! What's next, IBM 502-linux? or maybe DEC 20xx series linux? fscking-A!

    da'fly on da' fly in da' valley
  • Of course! I knew there had to be a good reason for it. Thanks for reminding us that every reason doesn't have to be a business reason. I'll go off and slap myself on the forehead now... ;)
  • You can, indeed, attempt to make your own private version of Linux. What you can't do is sell "our" software as your own.

    You can sell the part that you wrote, if you can. There are many techniques that allow this, including giving away a custom version of Linux (which need not even be operable) and selling your code as a patcher to that version. The problem is getting anyone to buy your incremental improvement.
  • (Ok - so this really isn't related to the issue of Linux porting, but....)

    With all due respect to Linus, I wouldn't say that microkernel research has been a total waste. While it hasn't turned into what the researchers probably intended (replacing regular monolithic kernels in all OS's), it has produced useful OS's. The T3D operating system (forget what it is called) and Unicos/mk on the T3E are both microkernel based. I've heard that ASCI/Red (the Intel monster machine - check the Top500 list) runs a microkernel based OS. So, some of the fastest machines in the world run microkernel based OS's. Also, MacOS X is based on the Mach microkernel.

    The reason a microkernel based OS is easier to port is that there's less there to port. Linux and the *BSD OS's have, however, become a marvel at how easy they are to port.

  • by cabbey ( 8697 ) on Saturday December 18, 1999 @10:14AM (#1463723) Homepage
    Go to chips.ibm.com [ibm.com] (their microelectronics site) and search for s390 [ibm.com]. The last [ibm.com] link is the best.

    Also check out the Blue Logic(TM) [ibm.com] section for more on the technology that enables the G6 to reach 1600 MIPS.
  • by Anonymous Coward
    Some of IBMs customers have expressed an interest in running Notes/Domino on their mainframe. Rather than porting Notes/Domino to yet another platform, IBM found it easier to port Linux to a VM on the S/390 and then port Notes/Domino for Linux to support the S/390 platform. It would be interesting to see Apache running on a S/390...
  • Well, that all depends on what you call "scales well". Compared to NT, Linux scaling is fantastic (up to 8 processors or so). Compared to big-iron OS's like Irix (up to 512 processors) or Unicos/mk (up to 2048 processors), it doesn't (I should probably include OS/390 in here as a "big iron OS", but I know nothing about it). Unicos/mk is a very hard comparison to make since it is a microkernel-based OS, but you can pretty easily compare Irix and Linux: Irix has a very well threaded kernel that allows you to declare certain CPU's (generally the ones close to the I/O) to handle interrupts while the others keep processing. There are very few large-grained locks in the kernel. Kernel threads are preemptible and schedulable in Irix (which allows you to run real-time apps, in addition to making the kernel more responsive to high-priority tasks).

    Linux, of course, is getting much better very quickly (and one of the projects that I am on the periphery of at SGI is working on this). The zone memory allocator and underpinnings of NUMA support are excellent first steps. Even so, I don't think you'll see a 512 processor single system image Linux machine that has reasonable scaling any time soon. Basically, Linux scales well on the "low end" but not the high end.

    I speak for myself, not SGI.

  • You've got the source, why not just recompile? Why on earth would you use a compatibility library when you can just compile it as a native binary?

    The primary reason for this is to cover the cases where you don't have the source. It's to allow you to run any commercial packages that may be available for Linux, but the publisher has not seen the need/demand for a native *BSD version.

    Face it, not all software is open source, and currently, in the i386 market, Linux has the mindshare advantage over BSD. A software publisher moving from the more mainstream (Solaris, HP-UX, Irix, etc.) environments is going to choose Linux over *BSD every time.

    The compatibility libraries are just giving you, if you so choose, the option of running these software packages on your BSD system. Remember, you can still choose not to.

  • I wouldn't normally say this, because I'm against intentional crippling of software (in general), but please leave OUT support for SNA and Twinax! Two dead technologies they are, that no one seems to be willing to let go of -- so maybe if they were unable to use them, they may convert over to something more MODERN like.. Token Ring or Arcnet :) (groan from gallery)
  • You're missing the point here... this is aimed at people who already have 390s, lots of them, for legitimate business reasons. They have some extra cycles (or can easily upgrade a system to get some) and want to migrate an existing application, or one they wrote in house, from an overloaded PC to their trusted mainframe.

    There is also the geek factor... IBMers are geeks first and foremost, especially the engineers. There have been a number of projects that came out of both research and development that were started by a bunch of engineers sitting around at lunch talking about how cool it would be to do X, or sitting in boring meetings with managers dreaming about how cool it would be to do X (ya know, standard geek stuff...). So they do it on a weekend or after hours, and it gets going and works, and they bring in a few geek co-workers and talk about it, and eventually a manager hears about it and says "we can market this." Then a bunch of managers rub their heads together and figure out a plan and PRESTO, you've got a product with no declared business use, but the geeks of the world will find it and put it to use in their projects, and eventually someone will come up with a business use.
  • You forget Cray/Linux, of course.

    - UNICOS? Bah! Who needs dem steenkin' UNICOS? We'll just go and install RedHat and-- oooh, pretty Enlightenment themes....

    - HEY! The T90 just crashed! The missiles are out of control!

    - What's that on the screen? "1 0WN J00 L0Z3R"... what does that mean?!?
  • What is the value of a Linux port to the S/390? The price/performance ratio would be awful -- even before taking into account the hardware maintenance costs of an S/390.
    People aren't buying S/390s to play Quake on. The real selling point for these systems is data transfer and reliability. Take a look at the data transfer rates you see on one of these boxes - there's a good reason why some ridiculous percentage of the data in the world (I've seen estimates in the 50-70% range) is sitting on storage connected to an IBM mainframe.
  • OK, fair enough. You look at this from a different perspective. I basically see an asset where you see a detriment. I would hate for me to work on some piece of software for a year or two only to have it wrapped into some closed source product. I prefer the GPL as it ensures that the source code will remain accessible for all to see/use/study as they wish. I do think this encourages contributions to GPLed software ... I'm not saying the BSD licences are wrong or evil, but I would think long and hard before publishing anything non-trivial under their licence for the very simple reason that it allows someone to take the source and do with it what they want without contributing back to the community that created the software in the first place ... given the fact that I don't ask for monetary compensation for my software, I think that's only fair ... if you consider this to be limiting (and I can see your point of view), you are more than welcome to write your own version of whatever software it is you need/want (I hope this doesn't come over negatively/flaming as that's definitely not my intent) ...


  • You may not be free to do what you want with Linux but Linux is free for anyone to take, use, and adjust as necessary.

    When people mention "free speech" vs. "free beer" all they want to point out is that the GNU GPL doesn't care about the money aspect but stresses the rights and responsibilities with respect to the code.

    Rather than nitpicking about words, I suggest accepting whatever license the author chooses. That includes refraining from using proprietary software if you don't want to pay the price.

  • The reason a microkernel based OS is easier to port is that there's less there to port.

    ...in the sense that there's less kernel code to port.

    However, rather a lot of kernel code (in the sense of "code running in kernel mode") doesn't need to be ported, it just needs recompilation; is the amount of code that has to be changed to run on a different platform actually significantly smaller on microkernel-based OSes? (If you answer "yes", please back up the assertion with figures for several "traditional" OSes and at least one microkernel-based OS.)

  • even the proprietary SAP, Oracle, etc.

    Yeah, it's so nice that, now that Linux will be running on S/390, they'll finally be able to run SAP on OS/390's.

    Oh, wait, they already can [ibm.com].

    I think Oracle does as well, but their Web site requires Javascript and, as I'm currently running a UNIX version of Communicator, there's no way I'm turning Javacrash^H^H^H^H^Hscript on.

  • Does anybody know where I can find the gcc patches to compile for S/390?

    S/390 Linux, or S/390 OS/390? In either case, there are links from the Linux on the ESA/390 Mainframe Architecture [linas.org] page.

  • RE: "It would be interesting to see Apache running on a S/390..."

    I believe the folks in Raleigh have done this (as well as some of IBM's more adventurous customers/clients): tho it's called WebSphere, it's just Apache 1.3.6.x/JServ 1.x w/ a Comanche-style GUI manager (running as an applet), connection pooling, and EJB support. Only runs on IBM O/S on their "Big Iron" hardware (A/S, R/S, O/S, etc.). This is true sweetness, however, as it means that the pool of support and development will be increased greatly. One more step towards World Domination!

    All props to Alan and the rest of the crew!
  • The reason a microkernel based OS is easier to port is that there's less there to port. Linux and the *BSD OS's have, however, become a marvel at how easy they are to port.


    At least in the case of Linux (not sure about *BSD, I've never seen the source code), the reason for this is that very little is actually written in machine-dependent assembler...basically just enough to get the thing booted. Most of the rest of the kernel is written in highly-portable C code.
  • Depends if the S/390 support is for their PowerPC-based CMOS boxes or not.. the CMOS boxes are pretty efficient (about the size of a RS6000 990)

    ...and aren't PowerPC-based. The System/3xx instruction set [ibm.com] isn't the same as PowerPC; it's a 16-general-register CISC instruction set, with variable-length instructions, memory-to-memory instructions, and register-to-memory arithmetic instructions.

    Perhaps you're thinking of the AS/400's, which moved from an apparently 3x0-ish CISC instruction set to an extended PowerPC instruction set - but the ABI for S/38 and AS/400 boxes isn't the native instruction set, it's a higher-level "virtual" instruction set, that gets translated to the native instruction set by low-level OS code; the ABI for S/3x0 is the S/3x0 instruction set plus the OS calls.

  • At least in the case of Linux (not sure about *BSD, I've never seen the source code),

    The BSDs are definitely the same in this regard - and, I suspect, most of the commercial UNIXes (definitely true of SunOS 4.x and 5.x, true although to a lesser degree in pre-4.x SunOS which didn't abstract the MMU to the degree 4.x did), and Windows NT, and BeOS, and a pile of other relatively modern OSes are the same in this regard as well.

    the reason for this is that very little is actually written in machine-dependent assembler...basically just enough to get the thing booted.

    I wouldn't put it in exactly that fashion - the assembler-language code is also used to manipulate things not directly manipulatable from C (e.g., flushing caches and TLBs; no, writing assembler-language code using "asm"s does not let you manipulate that stuff from C, it lets you include in the midst of C code non-C code to manipulate them - "asm"s are no more portable than assembler-language subroutines, in fact they could be thought of as inline assembler-language subroutines) at times other than just when you're booting.

    In addition, there may be C code that is machine-dependent as well, in that it might e.g. construct page tables.

    However, the point remains that the bulk of the code running in kernel mode isn't that sort of machine-dependent code - file systems, process manipulation above the low-level code for stuff such as context switching, network protocol implementations, and even a lot of drivers are largely machine-independent code, as you noted:

    Most of the rest of the kernel is written in highly-portable C code.
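    The machine-dependent/machine-independent split described above can be sketched in C. This is a hypothetical illustration, not actual Linux source: the portable routine and the `arch_flush_tlb` hook are made-up names standing in for the real per-architecture layer.

    ```c
    #include <assert.h>

    static int tlb_flushes;  /* stand-in for real hardware state */

    /* Per-architecture hook: on real hardware this would be assembler
     * (or an asm() statement) touching CPU control registers, and is
     * the only part rewritten for each port. */
    static void arch_flush_tlb(void)
    {
        tlb_flushes++;  /* hypothetical: pretend we invalidated the TLB */
    }

    /* Portable code: pure C, identical on every architecture. */
    static void switch_address_space(void)
    {
        /* ...update page-table pointers (machine-independent logic)... */
        arch_flush_tlb();  /* only this call bottoms out in per-arch code */
    }

    int main(void)
    {
        switch_address_space();
        switch_address_space();
        assert(tlb_flushes == 2);
        return 0;
    }
    ```

    The point is the shape, not the names: the bulk of the logic sits above the hook, and a port only supplies a new body for the hook.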
  • And, for people curious what instruction set S/3x0's implement, read the ESA/390 Principles of Operation [ibm.com].

    The core instruction set isn't particularly exotic (32-bit, 16 general-purpose registers - although R0, when used as an index or base register, means "use 0 as the value" even though R0 isn't a RISC-style always-zero register; the POWER-family instruction sets may have picked up that idea from S/3x0 - a smaller number of floating-point registers, variable-length-instruction CISC with memory-to-memory string/decimal instructions and memory-to-register arithmetic instructions), although it does have some fairly fancy add-ons, and has an I/O architecture oriented towards handing "programs" to channel controllers to do I/O data transfers.
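    The "R0 as base/index means zero" convention can be modeled in a few lines of C. This is an illustrative simplification of RX-format address arithmetic (ignoring 24/31-bit wrap), not IBM code; see the Principles of Operation for the real rules.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Simplified model of S/3x0 effective-address computation:
     * a base or index field of 0 contributes 0 to the address,
     * even though R0 is otherwise a normal general-purpose register. */
    static uint32_t effective_address(const uint32_t gpr[16],
                                      unsigned base, unsigned index,
                                      uint32_t displacement)
    {
        uint32_t b = (base  == 0) ? 0 : gpr[base];
        uint32_t x = (index == 0) ? 0 : gpr[index];
        return b + x + displacement;  /* address-size wrap ignored here */
    }

    int main(void)
    {
        uint32_t gpr[16] = {0};
        gpr[0] = 0xDEAD;   /* R0's value is ignored in address arithmetic */
        gpr[3] = 0x1000;
        gpr[5] = 0x20;

        assert(effective_address(gpr, 3, 5, 0x10) == 0x1030);
        assert(effective_address(gpr, 0, 5, 0x10) == 0x30); /* base 0 -> 0 */
        return 0;
    }
    ```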


  • Four things characterize a mainframe:
    • IO. Both in terms of read/write speed and in terms of access time. We are talking about systems running databases in the terabyte range with access times in the nanosecond range!

    • High availability. If your business is relying on your database, and if you're putting everything into one huge database running on a single (or a few) mainframes, then you want to be pretty sure that this machine has maximum uptime.

      If you OTOH are running a supercomputer for research or simulations, you don't worry that much about an occasional breakdown. In fact: chances are that it was your program that brought down the system.

      In my last job, I was working in one of the biggest mainframe installations in Europe. We had close to 10000 users on our systems, and the cost of having them sitting idle while we were bringing the system back on its feet was something like $5000 a minute! Not to mention the cost of lost business opportunities.

    • Slow CPUs! This may come as a surprise, but mainframes have relatively slow CPUs, due to the fact that their performance is limited by the IO subsystem. In addition: business transactions for the overwhelming majority consist of simple additions/subtractions/assignments. An occasional multiplication sneaks in here and there, but generally you don't need a fast CPU.

      The mainframes I was nursing were in fact far slower than my linux box at home.

    • Transactions. Mainframes are transaction machines. You're interfacing a mainframe in terms of transactions, and the system is specifically optimized towards being able to handle a large number of transactions and to do it fast. A transaction in this context could be a bank account transfer or a click. What matters is that you can't afford losing any data and you can't afford losing your data integrity (customers don't like being billed twice, and your managers don't like customers not being billed at all :). Mainframes are optimized towards this kind of data processing.

      OTOH non-transaction-based interfaces suffer from this: if your editor was connected directly to the system, then every single tap on your keyboard would be treated like a transaction - with rollback/rollforward options, logging, backup etc. etc. All of which would give you very poor performance.

      Being a closed environment and often tied to a single supplier doesn't help either, so the standard of the userland software on mainframes is very poor.


    A better name than Mainframe would probably be "database machine".

    -Claus
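  • The all-or-nothing semantics Claus describes can be sketched in a few lines of C. A minimal, hypothetical illustration (the `account`/`transfer` names are made up, and real transaction monitors use logging and rollforward, not an in-memory snapshot): a transfer either commits fully or leaves both balances untouched, so nobody gets billed twice or not at all.

    ```c
    #include <assert.h>

    struct account { long balance; };

    /* Returns 1 on commit, 0 on rollback; never leaves a partial update. */
    static int transfer(struct account *from, struct account *to, long amount)
    {
        long f = from->balance, t = to->balance;  /* snapshot for rollback */
        from->balance -= amount;
        to->balance   += amount;
        if (from->balance < 0) {          /* constraint violated: roll back */
            from->balance = f;
            to->balance   = t;
            return 0;
        }
        return 1;                         /* commit */
    }

    int main(void)
    {
        struct account a = { 100 }, b = { 0 };
        assert(transfer(&a, &b, 60) == 1 && a.balance == 40 && b.balance == 60);
        /* Failed transfer leaves both balances exactly as they were: */
        assert(transfer(&a, &b, 80) == 0 && a.balance == 40 && b.balance == 60);
        return 0;
    }
    ```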
  • Social Contract Theory, my friend: in order to protect the freedoms of all, all must accept some limitation on their freedoms.

    In an anarchy, people are free until any conflict arises, in which case the strongest wins.

    In a social-contract-ordered liberal society, when rights conflict, they are evaluated tabula rasa, without the particular people involved being the decision-making criteria, allowing for a consistent set of rights to be afforded to all.

    In English

    w/o limitations on freedoms, the strong are free and the weak are fscked. With some consensual limitations, all are equally free.

    w/o the GPL, I cannot hope to fight a commercial ISV. w/o the GPL, no ISV will release source on the grounds that it will be immediately used by a competitor (a one-way value exchange) as opposed to being used and then, in doing so, providing value back to the original ISV...

    pragmatic political philosophy

    We are all in the gutter, but some of us are looking at the stars --Oscar Wilde
  • Linus: "I remember when World Domination was just a joke..."

    IBM: "I remember World Domination...."

    (Disclaimer: Yes, I've recognized how IBM's become one of the cooler companies in the industry over the last few years, much to my slack-jawed amazement.)

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Saturday December 18, 1999 @03:40PM (#1463757)
    What is a mainframe from the point of view of linux? What sort of changes are required to port to a mainframe, particularly to a S390? Folks talk about stuff like IO channels and VMs but what are the salient features that I don't know about?

    I/O channels: at least on System/3x0, I/O is done by constructing a "channel program", which is a series of commands whose opcodes tell the channel, and the peripheral attached to it, to perform some operation (read data, write data, search a disk track for a block whose "key" has a certain value, rewind a tape, etc.) - there's also a branch instruction (Transfer in Channel) and, as I remember, some ability to do conditional skips over channel commands. The CPU just issues instructions such as "start I/O" to start a channel program; the channel program does the data transfer.
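    A channel program as described above can be sketched in C. This is a hypothetical, simplified model: the field layout and command values here are illustrative only (the real CCW is a packed 8-byte format; see the ESA/390 Principles of Operation), but it shows the idea of the CPU handing the channel a list of commands that chain until one ends the program.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Simplified model of a Channel Command Word (CCW): an opcode,
     * a data address, chaining flags, and a byte count. */
    struct ccw {
        uint8_t  cmd;        /* channel command: seek, read, write, TIC... */
        uint32_t data_addr;  /* where the channel transfers data */
        uint8_t  flags;      /* e.g. command-chaining to the next CCW */
        uint16_t count;      /* bytes to transfer */
    };

    #define CCW_FLAG_CC 0x40  /* command chaining: continue with next CCW */

    /* Walk a chain the way a channel would, counting commands executed. */
    static int chain_length(const struct ccw *prog)
    {
        int n = 1;
        while (prog->flags & CCW_FLAG_CC) { prog++; n++; }
        return n;
    }

    int main(void)
    {
        struct ccw prog[3] = {
            { .cmd = 0x07, .flags = CCW_FLAG_CC, .count = 0 },   /* seek */
            { .cmd = 0x06, .flags = CCW_FLAG_CC, .count = 512 }, /* read */
            { .cmd = 0x03, .flags = 0, .count = 0 },             /* end  */
        };
        /* The CPU would just issue "Start I/O" on prog; the channel,
         * not the CPU, then walks the chain and moves the data. */
        assert(chain_length(prog) == 3);
        return 0;
    }
    ```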

    VMs: one "meta-OS" running on S/3x0 mainframes provides, to the OSes running atop it, a "virtual machine", that looks much like a real S/3x0, and whose disks might be subpartitions of the real machine's disks, whose communications controllers might be part or all of the real machine's communications controllers, whose system console might be the terminal on which the operator of that virtual machine is logged in, etc.. (VMware [vmware.com] is somewhat like this.) Linux could run on one of those "virtual machines", and one of the other OSes for S/3x0 could run in another one, so that you can run applications for Linux and applications for, say, OS/390 on the same machine, without having to port a UNIX application from Linux to OS/390 (which has a UNIX environment - but it's not completely like the UNIX environment to which most UNIX programmers might be used, e.g. it doesn't use ASCII as its native character set, it uses IBM's EBCDIC).

    Assuming I'm familiar with von Neuman architecures, stack machines, microprocessors, minicomputers, memory mapped memory, memory mapped devices, IO ports, interrupts, and the unix concepts of streams, char devices, block devices, etc... what don't I know about mainframes?

    Well, IBM mainframes have a fairly conventional von Neumann-architecture instruction set (CISC, 16 32-bit general-purpose registers, variable-length instructions, memory-to-memory character/decimal arithmetic instructions, memory-to-register and register-to-register binary arithmetic instructions) with some specialized add-ons. The CPUs in them are, these days, microprocessors implementing that instruction set.

    I don't know if OS/390 implements memory-mapped files, but the hardware certainly permits it - it has a fairly conventional in-memory-page-table MMU.

    I/O devices aren't memory-mapped, however; you tell them to do things by telling a channel to send them commands. The channel program can interrupt the CPU either to say that it's finished or, as I remember, to notify it that it's reached a certain point in the program.

    The UNIX I/O model isn't what OS/3x0 has traditionally implemented, although the UNIX environment atop it implements that, and a Linux port would implement such an I/O model.

    One thing I *do* know from using them briefly, is that IBM "terminals" (3270s? something like that) are really weird: they are not simply connected via a serial cable. They have these extra control signals that light up indicators that say "you can't type now, I'm busy" and the text editors seem to do their editing on the "screen" locally and then send the changes back when you are done. I realize this has nothing to do with the kernel, but it would seem to make the whole experience quite surreal.

    How surreal was your experience posting your article? You presumably filled in the text in the "Comment" box, doing any editing locally, and then sent the changes to Slashdot's server when you were done by pressing the "Submit" button.

    I remember, several years ago, reading some magazine in which somebody described much of the Web as "3270 for the '90's". A lot of the stuff with HTML forms and HTTP POST operations resembles the way I think 3270's work.

    The instruction set and I/O architecture of S/390 is described by ESA/390 Principles of Operation [ibm.com]. Links to that and some other manuals can be found on the Linux on the IBM ESA/390 Mainframe Architecture [linas.org] page.

  • looking at s390.ibm.com [ibm.com] i wonder why linux runs on those and not AIX

  • Linux has more developers, but few that are full-time developers. A company could easily hire 1/4 of the Linux developers and get a good/stable OS out. The problem isn't money; it's that they'd rather add more features than actually get some work done on making it stable.
  • You seem to have no problem with using someone else's free code, and selling it. Your problem is that you do not want your code to remain free. What inclination, then, does the community have to allow you to use this source? The BSD license _is_ a bit more altruistic to the commercial world, but the GPL ensures that the next person with an idea to build on your code... can!

    (besides, IMHO, people have free speech; financial concerns such as businesses are not, and should not be, considered citizens. Rights such as free speech should not apply)
  • I think one thing that stifles development is when programmers cannot take ideas or code from work, further develop them, and distribute them outside of work. I certainly have worked on projects for the University of Wisconsin that I think would have benefitted from releasing under an OSS license. Unfortunately, according to some intellectual property lecture I attended, I need to get approval from some bureaucrat to say that the UW does not stand to lose a chance to make money off of the code I wish to release.

    Because of these stupid rules, I have not released much of my code. However, if I create software from home and it is useful at work, it gets released. This is true even if I work on it in off hours specifically for use at work. For smaller projects, I prefer to just do them at home and release the code. Note that I get paid dirt working from home, but I do have fun.

    So far, it seems as though RHAT and LNUX have no such stupid rules. Should they come up with stupid rules like that, I think that they would slowly lose their importance in the Linux world.

  • The first versions of AIX did run in VM. Not sure if the current versions do or not (I didn't think they did but I could be mistaken)
  • Like, for instance, WINE? It's got its own icon here at Slashdot. :)

    Cheers,
    ZicoKnows@hotmail.com

  • by hensley ( 275 )
    you're not the first...

    just change into your dir with the patch files and run /usr/src/linux/scripts/patch-kernel (or wherever your kernel sources are).
  • The moral of IBM's and MS's story is that maintaining a multi-platform commercial OS is a money loser.

    Not to mention Micros~1's first foray into cross-platform OSes: Xenix was apparently available on a whole range of machines. (But that was back when they did write multi-platform code, such as their versions of various programming languages.)

  • I don't want to diminish the great achievements of the many people porting Linux to such diverse platforms - however, Unix is also monolithic and was ported to a vast variety of minicomputers, mainframes, workstations and PCs. In fact, this continues with PDAs and other esoteric platforms - see www.netbsd.org (NetBSD is the BSD variant that focuses on portability).

    In the 1980s I used to be a sysadmin for Amdahl's Unix on IBM mainframes - it would be good to see Linux moving into the same domain.

    One interesting approach might be install-time compilation of Java bytecodes into machine code (as done by TowerJ on Linux and elsewhere), providing very good performance and a *single binary standard* for applications. Combining Linux and compiled Java could provide good enough performance for Linux on a range of architectures, even for companies that need to ship binary application software.

    Just think, you could download a single binary and run it on anything from a PDA to a mainframe, without the JVM or application having to deal with OS incompatibilities.

    In other words, there could be two very high volume software markets (at least for binary applications) - Windows on x86 and maybe IA-64, and Linux+Java on any architecture.

    Unfortunately, unless a really good JVM with install-time compilation gets open sourced, it's more likely the Linux market will turn into 'Linux on Intel plus a few other architectures'.
  • Some of these systems can be quite huge...
  • "The real selling point for these systems is data transfer and reliability."

    Well, I think you missed the #1 and #2 selling points there: backward compatibility and bureaucratic inertia. ;)

    While the reliability is certainly excellent, I wonder about the data transfer rates. Have you seen (or performed) any benchmarks that support this claim?

    I'm a Unix sysadmin in a shop with several S/390's and the only experience I have with them is network data transfer. Between the Sun 6000 and Sun 10000 I can get 66 Mbps using ftp across our 100 Mbps network. From an S/390 I only get 21 Mbps.

    Of course, you're talking about aggregate disk transfer rates, not network speeds. Do you know of any resource where test results are actually available?
  • you covered a lot of stuff knowledgeably and clearly, thank you!

    BTW, when I wrote "surreal", I was thinking of the porting process and running programs like emacs to edit code or run a debugger ... yep, I could try doing that through HTTP as well :)

  • I think you're wrong here, depending on your definition of the term USER. Anyone can take a GPL program and enhance it to their needs and not release the changes to the community, as long as they do not try to distribute their code or sell it. On the other hand, if you are not a USER (my definition) and are a company or developer or integrator who wants to modify code and then sell your changes or services to the community, then you must release the source changes. I think this is fair. USERs are free to modify the code all they want, but if they want to try to profit from their changes by selling services or support then they must release the source. If they don't want to try to profit from their changes, what possible reason could there be for not releasing source changes?

  • I don't think that WINE is a Linux-only project, is it? Currently it's definitely x86 only, but it's my understanding that it runs under multiple x86 Unix-like OSes. In fact, from the WINE about [winehq.com] web page:

    Wine works on most popular Intel Unixes, including Linux, FreeBSD, and Solaris.

    Hence, that the WINE project is aimed at providing binary compatibility on Unix-like OSes, including Linux, FreeBSD, and Solaris, is not comparable to the FreeBSD kernel programmers providing binary compatibility for Linux programs. Your point is meaningless.
  • But SNA would take business away from Microsoft's SNA Server product. I believe there's also a Linux-SNA [anu.edu.au] project but it seems to have moved, and the new site is not responding yet.
  • http://www.acude.org/roam.htm is a good site on mainframes and linux, and should cure some of you who still think that IBM runs the world :-)
    moderator note: last sentence not intended as troll, just a fact of the modern mainframe world.
  • It's a way for users of Unix-like OSes to be able to use Windows apps. The fact that it gets a lot of attention certainly says something. Your confining my statement to only Linux totally misses the point. Sheesh, exactly how myopic are you anyway?

    Cheers,
    ZicoKnows@hotmail.com

    I'm a Unix sysadmin in a shop with several S/390's and the only experience I have with them is network data transfer. Between the Sun 6000 and Sun 10000 I can get 66 Mbps using ftp across our 100 Mbps network. From an S/390 I only get 21 Mbps. Of course, you're talking about aggregate disk transfer rates, not network speeds. Do you know of any resource where test results are actually available?
    Not really. I have found a couple articles in the last year or so talking about places that are actually using S/390s as high-end webservers, strictly because they can handle such high traffic volumes on database-backed sites. Your quoted network rates make me wonder if someone decided it'd be cheaper to get the El Crappo network adapter; also, I believe the TCP/IP stack was somewhat suboptimal until comparatively recent versions.

    You're right though, the big selling point has been disk transfer, not network. The biggest gain from the mainframe at that point is that these things have heavily decentralized storage systems, which becomes really important when you have thousands of simultaneous users all doing disk I/O. This is similar to the performance gains of SCSI with multiple programs doing disk I/O, but more so. Those people I've seen using them for webservers had the common scenario where a given page-load might only generate 50-100K of network traffic but it'd have to chew through megabytes or gigabytes worth of DB/2 tables to generate that 100K.

  • Why are they adding features to 2.2 if it was supposed to be stable code with only bugfixes?
  • by Scatter ( 5031 )
    I'd like to see a dmesg from an S/390 booting linux...:)
  • The corrected library will be available shortly. Thanks. Daniel Frye IBM
  • they ported something from AIX to OS/390, so what?

    This is the complete list of operating systems running on S/390: http://s390.ibm.com/software/

    Reasons for this might be that the Linux kernel, gcc and glibc have been written with portability in mind and AIX was not. Or the S/390 department had some problems getting the AIX source :)

  • I see no Slashdot icons for either of the things you mentioned. In fact, I can't remember there ever having been any stories about UAE. Even if that weren't the case, there's a big difference. The Amiga is a dead system, and the popularity of UAE is due to people wanting to keep a piece of the past around. This subthread, however, is about emulation where both systems are still viable, and the users of one system want to be able to use the "kewl stuff" that the other system has available to it.

    Cheers,
    ZicoKnows@hotmail.com

  • An S/390 has features the UNIX world dreams of. Forget hot-swappable HDDs, RAID blah -- you can rip a network card out of these things, and it'll just keep going.

    The nice thing about running Linux on a VM is that now you can have linux running on the most reliable hardware there is: virtual hardware.

    Also, IBM may have liked to port, say, Apache or Lotus Domino to VM -- now there's no need. They can run domino on Linux on VM. Trust me, that'll be one *reliable* Domino server. Performance might not be the highest (mainframes prefer batch-oriented stuff, and those extra levels of abstraction won't help) but in terms of uptime -- whew!

    Oh, and a new S/390 isn't as big as you think.

    --
  • Yes there was: AIX/ESA. Discontinued many years ago, together with AIX/386 on PS/2.
