IBM Announces First Linux-only Mainframes
A reader writes "The new Z-series mainframe for Linux, which costs $400,000 and is aimed at processing transactions at large businesses, is IBM's first mainframe computer sold without IBM's traditional z/OS mainframe operating system. More info at the IBM zSeries page." This echoes what IBM and other Big Iron *NIX vendors have been saying - as Linux grows in maturity, they want to replace their *NIX offerings with Linux. However, there's still work to be done in that area.
Link to Sourceforge Foundry broken (Score:4, Informative)
url (Score:1, Informative)
More... (Score:4, Informative)
No Unixes ran on zSeries before (Score:4, Informative)
And with Linux, you do lose a lot of the RAS characteristics that z/OS provides, as well as 40 years of compatibility with existing workloads. Linux is being sold as something to run new workloads on - workloads that z/OS previously wouldn't have been considered for.
Story on ZDNet about Linux + zSeries (Score:4, Informative)
Re:Relative costs? (Score:2, Informative)
Re:NO Z/OS? (Score:5, Informative)
z/OS is geared toward high-volume transaction, database, and batch processing. It runs under z/VM or, more typically, natively or in an LPAR.
An LPAR is a 'logical partition', a way of dividing a m/f up into several virtual machines.
For now, these are static and set up when a partition is 'booted' - IPL'd (initial program load) in m/f terms.
VM, on the other hand, supports hundreds, even thousands of dynamically created virtual machines. You can run VM inside an LPAR, giving two levels of partitioning. I expect VM and LPAR technologies will converge at some future point.
Meanwhile, everything can talk to everything else over 'hipersockets' - memory-to-memory pipes that look like a TCP/IP network to your software - blindingly fast.
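The nice part is that application code doesn't know or care: a hipersockets link looks like any other TCP/IP interface, so a plain socket program works unchanged. A minimal sketch (loopback stands in for a hipersockets address here, since the real thing needs zSeries hardware):

```python
import socket
import threading

def echo_server(server_sock):
    # To the application this is ordinary TCP/IP, whether the bytes
    # cross a wire, loopback, or a memory-to-memory hipersockets
    # link between LPARs.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket()
server.bind(("127.0.0.1", 0))   # loopback standing in for a hipersockets IP
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,)).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
print(reply)   # b'ping'
```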
Cost Justification (Score:3, Informative)
Re:A step in the right direction... (Score:5, Informative)
Architecture is the key. What's the difference between a 120 MIPS mainframe and a 3000 MIPS desktop, and why is the 120 MIPS mainframe faster in mainframe-type applications?
Architecture. Specifically, things like I/O, process handling, etc.
Don't get me wrong, I'm a strong believer that "desktop" type hardware can compete with the big boys, especially considering the cost differences and the extra speed, boxes, redundancy, etc. that you can buy with all that cash you save. But... there are times when the big mainframe architectures really do have a reason for being.
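The MIPS paradox can be put as a toy bottleneck model (the numbers below are illustrative, not benchmarks): a transaction needs both CPU work and I/O, and the slower resource caps the rate, so a machine with a tenth of the CPU but ten times the I/O bandwidth wins on I/O-heavy work.

```python
def throughput(cpu_ops_per_s, io_ops_per_s):
    """Assume each transaction needs one CPU op and one I/O op;
    whichever resource is slower caps the transaction rate."""
    return min(cpu_ops_per_s, io_ops_per_s)

# Made-up numbers: fast desktop CPU, weak I/O subsystem...
desktop = throughput(cpu_ops_per_s=3000, io_ops_per_s=100)
# ...versus slow mainframe CPU fed by strong channel-based I/O.
mainframe = throughput(cpu_ops_per_s=120, io_ops_per_s=1000)

print(desktop, mainframe)   # 100 120 - the "slower" machine wins
```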
Just my $.05 (inflation, you know).
Re:HOT SWAPPING!!! (Score:2, Informative)
Is that wise? (Score:2, Informative)
Finally, a subject I can talk about (Score:4, Informative)
Only recently (the last 7 years) has speed been a consideration, and that was thanks to the PC revolution. But again, you were always dealing with two camps: mainframe guys and PC guys.
So all this means is that there is another choice for people who want the "5 9's", the holy grail of computing - something that neither Windows, Unix, nor any platform other than the mainframe can deliver.
Re:HOT SWAPPING!!! (Score:3, Informative)
All of the new SunFire range (3800, 4800, 4810, 6800, 15000) have full hotswappability on PSU, disks, system controller boards, CPUs, memory, etc etc etc.
The SF15,000 is the 106 CPU top-end system, while the SF3800 only goes up to 8 CPUs.
Oh, and you can mix'n'match different speed CPUs in the same system too - useful for expansion in the future.
Hope this helps!
Re:A step in the right direction... (Score:3, Informative)
PCs crash a lot. They're made from cruddy hardware because the average consumer either doesn't know the difference, doesn't care, or can't afford anything better. Mainframes have uptimes in the years; some have been going for decades. They usually have hot-swappable everything, including the usual power supplies and disks, but also hot-swappable CPUs, memory, expansion cards (network, etc.), and sometimes even motherboards.

Finally, they have a high degree of self-awareness. Today's PCs are starting to get some of these features (your BIOS might know the speed of the CPU fan, wheeee) but the mainframes are way ahead. They're set up to figure out when things are about to fail. When a potential failure is detected, the mainframe will call the vendor and order replacement parts automatically. A service tech will usually be there within hours to replace the part, and the part will be taken back to the lab to see why it failed. The knowledge gained from the failing part is used to design the next revision so it doesn't fail.
When it comes down to it, CPU power isn't all that important in the mainframe world. They do a shitload of I/O, and they just work. An Athlon XP might run circles around a mainframe in Quake 3, but its components are slow and unreliable.
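The "call home before it breaks" idea boils down to watching a trend and acting before a hard limit is crossed. A toy sketch with invented sensor readings and thresholds (real mainframe service processors are vastly more sophisticated):

```python
def trending_to_failure(readings, limit, horizon):
    """Fit a crude linear trend to the readings and ask whether
    it would cross `limit` within `horizon` more samples."""
    n = len(readings)
    if n < 2:
        return False
    slope = (readings[-1] - readings[0]) / (n - 1)
    projected = readings[-1] + slope * horizon
    return projected >= limit

# A bearing temperature creeping upward (made-up data):
temps = [40, 42, 45, 47, 50]
if trending_to_failure(temps, limit=70, horizon=10):
    print("order replacement part")   # the automated "call home" step
```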
Amazon, Burlington, Boscov, Telia (Score:2, Informative)
Burlington was mostly Unix.
Boscov had 70 AIX and >500 M$.
telia dropped mostly Solaris.
Home Depot is apparently going to drop all M$.
More and more are showing up, and while Linux is replacing some Unix, it is also replacing an equal or bigger percentage of M$.
As the economy worsens and the companies that are making profits are running linux, well...
It is exactly what happened in the late '80s and early '90s, when M$ was the correct way to go.
Hardware Maintenance is irrelevant (Score:5, Informative)
It's not the maintenance that is the problem; things like configuration management and data integrity are more important. If you have a hundred servers, then you have a hundred places to check that everything is in sync. If you are running a small shop with a dozen or so machines and one administrator, then they can keep all the state in their head. When you get up to hundreds, the state is larger than one person can easily cope with and you have to start communicating state to others. With hundreds of boxes it is easy to overlook things; with fewer boxes, the communication is easier, and cheaper.
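The sync problem is concrete: with N machines there are N copies of every config file that can silently drift. A minimal drift check, sketched here with in-memory per-host copies standing in for files fetched from real hosts (the hostnames and file contents are invented):

```python
import hashlib

def find_drift(copies):
    """Given {host: config_bytes}, return hosts whose copy
    differs from the majority digest."""
    digests = {host: hashlib.sha256(data).hexdigest()
               for host, data in copies.items()}
    counts = {}
    for d in digests.values():
        counts[d] = counts.get(d, 0) + 1
    majority = max(counts, key=counts.get)
    return sorted(h for h, d in digests.items() if d != majority)

# Invented per-host copies of a resolver config:
copies = {
    "web01": b"nameserver 10.0.0.1\n",
    "web02": b"nameserver 10.0.0.1\n",
    "web03": b"nameserver 10.0.0.9\n",   # this one drifted
}
print(find_drift(copies))   # ['web03']
```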
The other thing is CPU residency. Lots of small boxes waste CPU power because they tend to be devoted to one task and are only capable of that task. The problem is, they are so small that you can't add other tasks to them, so you need a new box... Generally, CPU residency on small boxes runs about 10%; on mainframes, this can rise to 90%. Take two tasks - one runs during the day, one runs during the night. Conventional wisdom would allocate two small boxes, one per task, wasting them for most of their life. Mainframe usage would run them both on the mainframe - this gives each process more power when it runs and doesn't waste the box when it doesn't. Most traffic tends to be peaky, but only for a short period of time, so if the box is large enough to hold them both, you get a saving whilst still making all the tasks faster.
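The day/night example works out like this with idealized numbers: each dedicated box sits idle half the time, while the shared box runs near full residency and still gives each task the whole machine during its own window.

```python
# Idealized: each task needs the machine for 12 of 24 hours.
day_task_hours, night_task_hours = 12, 12

# Two dedicated boxes: each is busy only half the day.
dedicated_utilization = day_task_hours / 24                       # per box

# One consolidated box: busy around the clock, and each task
# gets the full machine during its own window.
consolidated_utilization = (day_task_hours + night_task_hours) / 24

print(dedicated_utilization, consolidated_utilization)   # 0.5 1.0
```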
Small boxes are good when you need maximum cycles per buck and the task is easily partitionable with minimal interprocess communication and the tasks are continuous. When the tasks are not easily partitionable, need lots of IPC or are peaky then larger boxes make sense.
The thing to remember is that where the scale is large, you need to make use of that scale to get maximum performance. You don't see chemical plants using hundreds of small vats, they use a few really big ones. With these systems they are used at a scale where communications and simply keeping track of what is going on is a major exercise and hence a major expense.
My experience? Well - put it this way: the SunFire 6800 turned up a few weeks ago, the 4800 turns up on Wednesday as part of a plan to replace a Tandem mainframe, and they will be sitting next to quite a few racks holding Sun E3500s, E450s, E250s, T1s, HP Netservers, IBM RS/6000s and SGI Origin 2000s, and indeed a MacOS server or twenty. A lot of our comms talk to Stratus mainframes, and the machine-room cooling plants are a more pressing problem than CPU speed.
Re:A step in the right direction... (Score:2, Informative)
I also agree with you that "desktop" style machines running something like Linux *can* offer similar levels of reliability and performance, but in a completely different way. In a nutshell - instead of one ultra-robust machine with multiple redundant sub-systems, you go for multiple redundant machines (although you could define the cluster as the machine - in which case it's no different... hmm).
I've successfully applied this pet theory of mine over the last 3 years wherever possible. Even things like Ethernet switches - we used to buy Cisco 550X chassis, which come with 2 of everything important, like PSU, routing module, supervisor module, backbone interfaces and so on, but they cost £35K each for the config we typically buy. Sure, they hardly ever fail, and if a component fails, there's a backup. However, recently we started buying smaller, cheaper switches - but lots of them - typically 3 where 1 would do: total cost about £15K for the same scenario.
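The trade can be put in availability terms. With made-up failure probabilities (the real ones depend on the hardware), three independent cheap switches can match or beat one chassis with duplicated internals, at under half the cost:

```python
# Made-up annual failure probabilities, for illustration only.
p_cheap = 0.05      # one small, cheap switch
p_module = 0.02     # one module inside the big redundant chassis

# Big chassis: down only if both duplicated modules fail.
p_big_down = p_module ** 2

# Three cheap switches: service is down only if all three fail
# (assuming independent failures and any one switch suffices).
p_trio_down = p_cheap ** 3

print(p_big_down, p_trio_down)   # big: ~4e-4, trio: ~1.25e-4
```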
Web servers lend themselves easily to this too (especially if you use Apache and Tomcat (or whatever it's called this week)).
I always thought that IBM continued developing the mainframe to support existing OS/390 customers with large, complicated, mission-critical apps on them. I can see some use for a mainframe running Linux (and I bet there are more Linux-savvy techies out there than z/OS ones - which would help with recruiting admins for the box), but I still feel that the multiple-smaller-boxes-running-Linux solution is a better bet - it can be any size you want within reason: start off small for dev/testing, then pile on the hardware for production.