IBM's New Mainframe 7nm CPU Telum: 16 Cores At 5GHz, Virtual L3 and L4 Cache (arstechnica.com) 90

Long-time Slashdot reader UnknowingFool writes: Last week IBM announced their next-generation mainframe CPU, Telum. Manufactured on Samsung's 7nm node, each Telum processor has 8 cores, with each core running at a base 5GHz. Two processors are combined in a package similar to AMD's chiplet design. A drawer in each mainframe can hold 4 packages (sockets), and the mainframe can hold 4 drawers for a combined 256 cores.

Different from previous generations, there is no dedicated L3 or L4 cache. Instead, each core has a 32MB L2 cache that can pool to become a 256MB L3 "virtual" cache on the same processor or a 2GB L4 "virtual" cache on the same drawer. Also included to help with AI is an on-die but not on-core inference accelerator running at 6TFLOPS using Intel's AVX-512 to communicate with the cores.
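The pooled-cache arithmetic in the summary can be checked with a quick sketch (all figures are taken from the summary above; the drawer layout of 4 dual-chip packages is as described there):

```python
# Virtual-cache arithmetic for Telum, per the figures in the summary.
l2_per_core_mb = 32          # each core's private L2
cores_per_chip = 8

# Pooling the eight L2s across a chip yields the "virtual" shared L3.
virtual_l3_mb = cores_per_chip * l2_per_core_mb          # 256 MB per chip

# A drawer holds 4 sockets, each a dual-chip package: 8 chips total.
chips_per_drawer = 4 * 2
virtual_l4_gb = chips_per_drawer * virtual_l3_mb / 1024  # 2.0 GB per drawer

print(virtual_l3_mb, virtual_l4_gb)  # 256 2.0
```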

Comments Filter:
  • It would be interesting to see this in various PC benchmarks.
    • Re:"me too" (Score:5, Insightful)

      by sphealey ( 2855 ) on Sunday September 05, 2021 @12:25PM (#61765817)

Mainframes aren't designed to serve the same load profile as personal computers. They are designed to handle, e.g., 200 channels of input requiring database retrieval, decimal arithmetic calculations, database storage, and output with multiple levels of error correction, every second of the day, 365 days a year, with 2-3 years between maintenance outages. A processor designed for that environment probably won't run Photoshop or the latest 3D game very well.

      • Yep, but forty years ago, Big Iron and a PC would be laughable on such simple spec comparisons. Just sayin'.
        • Comment removed (Score:5, Informative)

          by account_deleted ( 4530225 ) on Sunday September 05, 2021 @05:44PM (#61766793)
          Comment removed based on user account deletion
          • Re:"me too" (Score:5, Informative)

            by arglebargle_xiv ( 2212710 ) on Monday September 06, 2021 @02:57AM (#61767937)
            Yep, and you can see that in the otherwise rather vague press release (blah, cache, cores, blah), the important bit is:

            The predecessor IBM z15 chip was designed to enable industry-leading seven nines availability for IBM Z and LinuxONE systems. Telum is engineered to further improve upon availability with key innovations including a redesigned 8-channel memory interface capable of tolerating complete channel or DIMM failures and designed to transparently recover data without impact to response time.

            That's what you're paying for with mainframe CPUs, not a PassMark score.

      • So ... idling Windows 11 then.

    • Re:"me too" (Score:5, Interesting)

      by BAReFO0t ( 6240524 ) on Sunday September 05, 2021 @01:04PM (#61765915)

      Clearly you are too young to know IBM's relevance.
      IBM are more like the innovator of things that then trickle down to consumer CPUs.
      Without IBM, AMD would not even exist anymore. Thrice over. (E.g. the original Athlon was based on innovation brought in by IBM.)

  • by Viol8 ( 599362 ) on Sunday September 05, 2021 @11:50AM (#61765733) Homepage

    OTOH - max 256 x 5GHz Cores!

    I suspect there are some benchmarks trembling a bit right now.

    • It'll run JCL so fast, you'll think you just completed the Kessel run in less than 12 parsecs.

    • Mainframes are not about raw number crunching performance, you can build PCs that can compute rings around this thing for much much less money. They can't shovel data as rapidly, though.

  • Nope (Score:4, Informative)

    by Editorial Failure ( 7475146 ) on Sunday September 05, 2021 @11:54AM (#61765743)

No, it doesn't use AVX-512, not even to talk to the AI thingummie on the die. Or at least, it doesn't say so in TFA. It says IBM does NOT do things like Intel's AVX-512. Which makes much more sense than Intel's AVX-512 ever did.

    The article is otherwise very thin on details, and so is best skipped for something better. Just like 99%+ of today's /. "summaries".

    • Completely off topic, since the article was so light on details, does anyone know if AVX-512 can only work on x86 design? I mean, can it be ported to another ISA, or is it fundamentally tied to the x86 architecture? I don't know the first thing about it, so I thought I'd ask.
      • by jsonn ( 792303 )
Most actively developed ISAs have a vector extension of some form. The capabilities in terms of lanes and bits per lane vary, and some current forms involve dynamic partitions, but it is not inherently Intel-specific in any shape or form.
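A rough illustration of the "lanes and bits per lane" point: the table below uses the architectural vector-register widths for each ISA (SVE and RISC-V V are scalable, so the width shown there is just one legal implementation choice, not a fixed figure):

```python
# Vector register width in bits for a few ISAs' vector extensions,
# and the resulting count of 32-bit lanes. The concept is the same
# across ISAs; only widths and features differ.
widths_bits = {
    "x86 AVX-512": 512,
    "Arm NEON": 128,
    "POWER VSX": 128,
    "Arm SVE / RISC-V V": 256,  # scalable: width is implementation-defined
}

lanes32 = {isa: bits // 32 for isa, bits in widths_bits.items()}
print(lanes32["x86 AVX-512"], lanes32["Arm NEON"])  # 16 4
```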
    • Or at least, it doesn't say to in TFA

      There were 2 articles. The 2nd one [arstechnica.com] says: "The new inference accelerator is placed on-die, which allows for lower latency interconnects between the accelerator and CPU cores—but it's not built into the cores themselves, a la Intel's AVX-512 instruction set."

      • Yes, indeed. But it doesn't say what you think it says. That sentence reads "[IBM's] inference accelerator is not built into the cores, whereas intel's AVX-512 instruction set is."

        It's a comparison. "IBM does a thing differently from how intel does this other thing." Concluding that IBM used AVX-512 for anything at all, as the summary does, indicates something went wrong with the reading comprehension.

  • Is this good? How does this compare to x86 and Linux? I've never had the misfortune of working with IBM mainframes, but I know they're legendarily expensive and have heard they're underwhelming in performance terms.

    From a quick google, I see I can buy an AMD 128 core dual socket mobo/CPU set for 14k. I know you can get 64 cores in a single chip for a few thousand. I've heard you can't really buy an IBM mainframe for less than the cost of a nice house. Is this interesting for anyone starting a new ap
    • by ArchieBunker ( 132337 ) on Sunday September 05, 2021 @12:17PM (#61765803)

      You don’t buy mainframes to crunch numbers. If you used a bank or credit card then you used a mainframe. That is the target market. As for virtualization IBM was doing it in the 1970s on the S/370 line. The whole architecture is so alien compared to anything from the pc realm. I’m not sure if it’s still true but up until very recently their new systems had native binary compatibility going back to the 1960s on the S/360 system.

      • by Gabest ( 852807 )

I have a six core CPU at home, basically a mainframe from several years ago.

        • by BAReFO0t ( 6240524 ) on Sunday September 05, 2021 @01:17PM (#61765955)

          Not even remotely, at all.

          Or does it have a backplane? (Not to be confused with a mainboard?)
          Does it natively ONLY run virtualized environments of different kinds? With hardware support for obscure things like different byte sizes, different byte orders, environments that don't even have a concept of a file and use records instead, and use EBCDIC?
          Does it offer CPU interconnects that can cross boards, racks, and even buildings if necessary?
          Does it do error detection and correction on ALL the things natively in hardware?
          Can you swap cards, drives, RAM, PSUs, and even CPUs while it's running, without it even blinking? (Unless it has blinkenlights for that purpose. :)

          Don't confuse a mainframe with a server.
          You literally may not even be able to turn the thing on and enter something, unless you know how. (Though that may have changed, but I remember front displays with obscure codes, serial ports and opening up the thing being the only ways to set it up with anything you and I would call an operating system that you could then use.)

        • by hey! ( 33014 ) on Sunday September 05, 2021 @01:38PM (#61765995) Homepage Journal

          Nope. You're thinking of *supercomputers*. Mainframes aren't about calculation speed, they're about massive data throughput with high reliability and availability. Your CPU might have the computing speed of a mainframe from several years ago, but the speed at which your CPU gets data from RAM is far too slow to handle a mainframe workload.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          You really shouldn't comment on things you obviously have no idea about. IBM z Series mainframes have totally insane redundancy/resilience and ridiculous I/O channels. They are not for raw number crunching. They are for stuff like processing millions of financial transactions quickly and incredibly reliably. They run some of the most critical parts of the banking system (and other critical infrastructure, like the UK railways and Postal service). Your six core system is a toddler's toy compared to even an a

        • by bws111 ( 1216812 ) on Sunday September 05, 2021 @03:14PM (#61766289)

          Oh brother. Here are the specs for the current generation z15.

          Up to 190 cores for customer use (there are a bunch more cores used for things like IO processing and hot swap spares). Each core running at 5.2GHz

          Up to 40TB usable memory. That is fully redundant memory.

          IO bandwidth of 1152 GBps

          5.3 million IO/s

          Please enlighten us as to which 6 core PC that is

      • by sphealey ( 2855 ) on Sunday September 05, 2021 @12:28PM (#61765823)

        - - - - - I’m not sure if it’s still true but up until very recently their new systems had native binary compatibility going back to the 1960s on the S/360 system. - - - - -

        Until Y2K was well past one of my former employers was paying IBM part of the cost of maintaining the 1401 architecture emulation subsystem on the 370 and 390 series. "Part" because they weren't the only organization in the world running 1401 assembler code in the year 2000.

    • I haven't done much with mainframes over the last 50 years either, but that market is driven not as much by performance and capacity as it is reliability and the ability to repair faults while operating. Cost is a factor but the buyers of mainframes expect to employ their hardware over much longer than those who chase the latest Intel/AMD chipset so the cap-ex is amortized over a long time and is in fact much less than what they pay for SLA contracts.

      • Not a mainframe expert, but how is deploying a fully redundant architecture using PC-based hardware less desirable? I can imagine from the cost perspective it would be much cheaper. Even more so if you factor in the labor.

        In other words, what does a mainframe provide that could not be replicated?

        • by Anonymous Coward on Sunday September 05, 2021 @12:58PM (#61765897)
Very, very high availability. Backwards compatibility with your existing mainframe code. Processors and hardware optimized for the workloads (going hand-in-hand with this, certified compliance with the standards required for these workloads). Everyone who doesn't need these things has already moved away from mainframes. Making x86 with comparable hardware features would, after all is said and done, probably not be all that much cheaper.
        • Because it’s more complicated and just as expensive to maintain a room full of pcs. Can you guarantee your new pc solution behaves exactly like the system it replaced? Unlikely. Mainframes do.

          • Because it’s more complicated and just as expensive to maintain a room full of pcs. Can you guarantee your new pc solution behaves exactly like the system it replaced? Unlikely. Mainframes do.

            I've heard this many times and really really don't understand it. I have worked with business systems for decades. In Java, for example, the inputs and outputs in a well designed system are well documented and unit tested. Recently, functional programming became fashionable and the situation has even improved. If you don't like Java, C#, Kotlin, Go, and various others are worthy modern alternatives.

            Theoretically all business software can be mapped out with a flowchart. All known inputs and outputs

            • by Shinobi ( 19308 ) on Sunday September 05, 2021 @06:32PM (#61766909)

              You bring up the software, but the point about the mainframe was that the software will work EXACTLY as before, even if you hop between 3 generations of mainframe HARDWARE. Meanwhile, on the x86 side, there have been function discrepancies just moving between Sandy Bridge Xeons and Ivy Bridge Xeons, or between different Opteron generations, or from Opteron to EPYC.

              There's also the massive I/O where a single mainframe can outperform a large cluster when it comes to connecting to hundreds or thousands of units simultaneously.

              In terms of Google, Netflix etc, they all deal with data that it doesn't matter if it gets temporarily lost and has to be redone if a node goes down. That's unacceptable in banking, insurance etc etc, but for Google, it's long been stated that it's acceptable. If a search node physically breaks down, no matter, the user will just type in their search again after the error message. Another trade-off is the sheer amount of floor space and wattage that is burned up by the likes of Google, Microsoft, Amazon etc

              • You bring up the software, but the point about the mainframe was that the software will work EXACTLY as before, even if you hop between 3 generations of mainframe HARDWARE. Meanwhile, on the x86 side, there have been function discrepancies just moving between Sandy Bridge Xeons and Ivy Bridge Xeons, or between different Opteron generations, or from Opteron to EPYC.

                There's also the massive I/O where a single mainframe can outperform a large cluster when it comes to connecting to hundreds or thousands of units simultaneously.

                In terms of Google, Netflix etc, they all deal with data that it doesn't matter if it gets temporarily lost and has to be redone if a node goes down. That's unacceptable in banking, insurance etc etc, but for Google, it's long been stated that it's acceptable. If a search node physically breaks down, no matter, the user will just type in their search again after the error message. Another trade-off is the sheer amount of floor space and wattage that is burned up by the likes of Google, Microsoft, Amazon etc

                I have worked with Java for almost 25 years now. Code I wrote in 1997 works IDENTICALLY today, with one exception, it's faster and more efficient. That's the whole point of the JVM. When you write code correctly, it works identically even between operating systems. Unless you go out of your way to be stupid, business code works identically in Java 1.0 as it does in Java 17 pre-release. No recompilation needed...and as long as you don't use any newly assigned keyword as variables, you can even continue

        • by chmod a+x mojo ( 965286 ) on Sunday September 05, 2021 @01:03PM (#61765911)

          Cluster of virtual machines - all of your services are on different hardware, if any of that hardware fails / has to go down for maintenance you damn well better hope that your failover plan works and switches out to a new VM seamlessly or you lose that service. You also can't hotswap hardware as easily. Failovers also mean either 1: you are running a VM at idle constantly on OTHER hardware than the original host, or 2: it takes time for you to spin up a new VM and sync the service and DBs for that service ( which have to be mirrored realtime on yet more different machines, eating bandwidth ) VS. the mainframes scheduler hitting a polling event and noticing Hey, CPU's 124-136 are dead, execute this on group 200 instead.

Mainframe - Oh, one drawer of CPUs took a dump? Pull it out and keep on running while you wait for replacements to arrive. No interruption at all to any of your services, since they all run on a physical cluster. Also, when the new CPUs arrive for the drawer, you just plug them back into the mainframe and it will start using them, redistributing the load internally.
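The scheduler behaviour described above ("CPUs are dead, execute this on group 200 instead") can be caricatured in a few lines. All names here are hypothetical, purely to illustrate the re-dispatch idea; this is not a real mainframe API:

```python
# Toy model of re-dispatching work to surviving CPU groups: losing a
# drawer just shrinks the healthy pool instead of triggering a VM
# failover dance. Names are made up for illustration.
cpu_groups = {"group_100": True, "group_200": True, "group_300": True}

def mark_failed(group):
    """A drawer is pulled or a CPU group dies; shrink the healthy pool."""
    cpu_groups[group] = False

def dispatch(job):
    """Run the job on the first healthy group; services never notice."""
    for group, healthy in cpu_groups.items():
        if healthy:
            return f"{job} -> {group}"
    raise RuntimeError("no healthy CPU groups left")

mark_failed("group_100")           # "CPUs 124-136 are dead"
print(dispatch("txn-batch-42"))    # txn-batch-42 -> group_200
```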

        • You can't do it with PC-based hardware.
          Unless you introduce mainframe technology, and make it essentially a mainframe.

          E.g. how would you *guarantee* prevention of all bit flips *everywhere*? That includes the CPU, all busses, and all memories/buffers of any kind. PCs are only getting there, now that it's becoming a real problem for consumers. (CRC and the like only goes so far. There are already tons of collisions. You just don't notice because usually it doesn't matter. A game might show something wrong. A

        • by sjames ( 1099 )

          Imagine, a CPU fails and the machine keeps going. A stick of RAM fails and the machine keeps going.

          Depending on requirements, it may be cheaper to deal with warm or cold failover, or to engineer your own solution in software. Or it might be cheaper to go the mainframe route.

        • Very high availability and massive bus bandwidth --
          I am reminded of the old mainframe when I was in college 30 yrs ago.... the maintenance dept. told me it had only been rebooted once in 30 years because of a building fire, that machine ran everything for ~5,000 students and faculty. And of course, the accounting dept.

          Most mainframes, you can hot-swap pretty much anything without even causing a blip. Fully error detecting and correcting at every level, etc.

        • by Bert64 ( 520050 )

          A fully redundant architecture using generic hardware needs some form of interconnect to keep all the nodes in sync... Generally the throughput and latency of that interconnect will be a lot worse than what the mainframe can offer.

    • Re: (Score:3, Interesting)

      by KiloByte ( 825081 )

Not only is it ridiculously expensive and ridiculously slow, it's also the only bad-endian arch kept alive.

      In Debian, we have s390x as an arch with no porters and no users, that's somehow kept alive only because IBM pays Ubuntu (not Debian!) to keep it alive. Which costs us developer time as maintainers are supposed to fix arch-specific problems there, for no benefit to the distribution, just to let IBM nickel and dime legacy customers.

      I have no problem with ppc64el which has at least some users -- but

      • You're ignorant, a mainframe can do the transactional work of two thousand x86 servers. Look it up. Your MIPS-hot little x86-64 would be IO bottlenecked trying to do mainframe work.

        • Re: (Score:2, Funny)

          by KiloByte ( 825081 )

          Your MIPS-hot little x86-64 would be IO bottlenecked

          Well, my home desktop has a RAID0 of 8 Optane NVMe-s tucked into every spare PCIe lane (2 in NVMe slots, 2 in 4x card slots, 4 on a 4x4x4x4x carrier board; 1 NVMe slot is taken by a legacy (flash) disk, a 16x slot by the GPU, a 1x slot by a RS232 card). For lukewarm data, there's that legacy NVMe, for cold stuff a piece of spinning rust.

          Now let's see that dinosaur of yours get a better IO latency.

          And that's just a 2018 consumer-tier machine. Any large server today has a few TB of pmem -- which has ~100x b

          • by bws111 ( 1216812 ) on Sunday September 05, 2021 @03:42PM (#61766361)

Is this supposed to be a joke? Current generation mainframes support 384 FICON ports, 48 25Gbps ethernet ports, 96 1000baseT ethernet ports. Total IO bandwidth of 1152GBps. 5.3 million IO/sec

Is this supposed to be a joke? Current generation mainframes support [...] 48 25Gbps ethernet ports

Meanwhile, that "ordinary x86" you downplay takes 400Gbps ethernet cards. There's a wee bit of difference in latency between 25Gbps and that. And durable storage which works at memory speeds, instead of IBM's "our UPSes never fail" which leads to a big oops when they do. Granted, your mainframe can take more network ports, but you need to separate it into many VMs (as the OS can't do coherency in one piece), we can compartmentalize this into physically separated machines just as well. There's not much gain f

          • You couldn't run a thousand programs all doing a couple thousand transactions of IO per second simultaneously on your storage, your system would be essentially deadlocked waiting for itself. A mainframe can have hundreds of FICON channels each with up to 32K devices each. No comparison to desktops.. for the job of transactions on data in storage. Maybe for other types of jobs your desktop, or "supercomputer array" of them, might do better but that's not where mainframe lives.

            You should read over mainfra

          • by Shinobi ( 19308 )

            Can you run a couple of hundred Infiniband and Fibre Channel physical devices simultaneously, without contention, in a single server of the type you propose?(Before you type something stupid, you might want to do some research into PCI-E Root Complexes and how they bottleneck and congest). A mainframe can do that. Also, a mainframe has advantages in the co-processors for crypto, data integrity checking, communication etc, freeing up the CPU's to do actual work. Mainframes are the trains and ships of the com

      • by bws111 ( 1216812 )

        Where does this idiotic 'slow' idea come from?

    • I suspect the primary use case if for banks and other industries with Jurassic code who have decided their application(s) would cost more to refactor and test than they consider "worth it" so this is just the next in a long line of hardware refreshes as their older equipment ages out (again).

      • It may well make sense. If you are successfully processing a hundred billion dollars of transactions a year, and a few million $ for a new mainframe means that you are sure you will continue doing so for the next few years, it may make sense to just spend the cash and continue. Many attempts to replace high reliability legacy code (like air traffic control) have ended up taking far longer than predicted.
Jurassic code? There is plenty of new mainframe code too. A pile of x86-64 with only the ability to have high GFLOPS and MIPS won't cut it when needing millions of transactions per second; there would be an I/O bottleneck. Mainframes do what your commodity processors in parallel can't.

      • It is the near-no-downtime aspect, that you can replace essentially anything with the system still running.

    • Mainframes are extremely high performance... for a certain kind of work.

So you're interested in pure number crunching? What if you had banking or insurance code, written in a mix of COBOL, Java and C++, that required you to do over a million transactions per second and had to stay up with no downtime at all? That's the mainframe market, the very back end of your bank and your insurance company and your global airline reservation system.

    • by Coldfusion97 ( 175932 ) on Sunday September 05, 2021 @12:45PM (#61765859) Homepage

      I worked at IBM for about 4 years in & after college system testing Linux and middleware on the mainframe. When I first started, I wasn't particularly impressed with them compared to commodity x86 servers but over time I started to recognize some of the power and positive aspects that they provided over racks full of commodity servers.

      First off, they are I/O monsters. They were and are still widely used for transaction processing. It helps that IBM develops either specialized hardware instructions or separate hardware components to accelerate or offload workloads. Virtualization support has been baked into the hardware for decades, including partitioning resources at the hardware level. Crypto, networking, and other I/O can be offloaded to "cheaper" processors or SoCs, leaving the million dollar CPs (central processors) for your application. Over the past 15 years since I moved on they've added additional specialized hardware and capabilities to stay abreast of current computing needs/trends.

Folks seem to not want to think about this in the age of microservices, but there's a significant cost to scaling out in terms of complexity, performance, and HW/SW maintenance. It can be more convenient, simpler, and possibly less expensive to scale up, particularly when IBM lets you connect them together in a parallel sysplex configuration for even more redundancy. It definitely takes big piles of money to do so, but it could provide a decent ROI if you don't need clusters or data centers worth of gear and as many people to keep it going – think "data center in a box" or "cloud in a box" (albeit expensive).

      (I left IBM to go work on x86 virtualization and cloud computing, so I've also got plenty of experience at the opposite end of the spectrum too.)

Don't forget it's not all script kiddie WhatWG devil-may-care spaghetti code, but sometimes literally life and death code that may make or break entire economies. The kind of code where 10 lines a day is a lot for one person.

    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Sunday September 05, 2021 @01:18PM (#61765961)
      Comment removed based on user account deletion
    • If you are truly interested in understanding different architectures and pros and cons, I would recommend reading quality technical articles, like the linked one at arstechnica, or this one that describes the cache design: https://arstechnica.com/gadget... [arstechnica.com].

      Regarding your questions about comparing core counts:
      IBM is very good at designing hardware to achieve excellent symmetric multi-processing performance. Just naively throwing a bunch of cpu's together into a system results in rapidly diminishing per
    • Is this good? How does this compare to x86 and Linux? I've never had the misfortune of working with IBM mainframes, but I know they're legendarily expensive and have heard they're underwhelming in performance terms.

A mainframe is not a supercomputer made to crunch lots of numbers outright. A mainframe is for handling a large number of I/O transactions with redundancy, high availability, and reliability. They are not made to simulate weather; they are made to handle tens of thousands of financial transactions at once while checking the accuracy of each transaction multiple times. Also, parts of the mainframe can be replaced while it is operational, and six nines of uptime is normal.

    • by Shinobi ( 19308 ) on Sunday September 05, 2021 @06:02PM (#61766835)

      Mainframes are a completely different computing world. If you think of it in terms of x86 server hardware, it'll make little sense, other than to look at it like this: Virtualisation? Mainframes had it before IBM PC's were even a thing. The reliability and data integrity features in the high-end Xeon and EPYC CPU's, that have driven their costs up? Those came from the mainframe world originally. And in a mainframe, all that reliability and data integrity is throughout the entire system, with data integrity checks not just in storage, but in transit for example. There's also encryption available for all the stages inside the mainframe(I think even the cache can be encrypted)

Another aspect where mainframes are different from x86 servers, though there is some movement towards it, is the sheer amount of I/O capacity. No bus to bottleneck the system, for example; instead, each device has a dedicated channel. The ones I peripherally worked with, which was a long time ago, can best be described as being built around system-wide crossbar switches on steroids, running hundreds of physical network links without contention.

      Also, the mainframe crew taught me that they don't really think in terms of transactions as such, at least not in the way the x86 crowd does. Instead, they think in terms of messages. An example of a message is a request that comes in to do an account summary for Person A, where they'll check multiple databases, run stored procedures(such as adjusting for different laws and regulations etc), cross reference for errors, and possible interactions with Person B and/or C, collate everything, and then send back the data. All that is a single message, and the bigger installations push billions of messages per hour.

      Then there's the redundancy and reliability aspect of the system as a whole, with all major components hotswappable without interrupting any jobs. And yes, you can cluster them, that's called a sysplex in the old terminology

  • by Anonymous Coward

    In hot grits, with Natalie Portman.
    Meh.
    Kids today.

  • Comparable in performance yet proprietary architecture so they can ask out-of-this-world prices.

    I mean, really, mainframes these days are merely souped-up PC servers running old COBOL and FORTRAN legacy software.
    • by Anonymous Coward

      I mean, really, mainframes these days are merely souped-up PC servers

      The ignorance on this thread is astounding (but not surprising). You have literally no idea about the ridiculous resilience, reliability, scalability and insane I/O capabilities of a modern z series IBM mainframe.

      • Comment removed based on user account deletion
        • Mainframe enlightenment would be out of range for most, but guessing that the homelabbing [reddit.com] trend is exposing a decent amount of PC-only folks to the world of server hardware. Similar in spec to what you got, I caught the bug a few years ago with a Dell R720 and it's been pretty addictive. The parallel rise of cheap auctioned hardware and VM technology in general dovetailed nicely over the last decade or so.

          (Btw, yes, the NSA is indeed stealing my porn but the proof is too long to fit into the bottom of thi

    • Comparable in performance

      If you don't have any use cases, you have no reason to know what IO even is.

      Or what performance would mean.

      For you, it has worse performance; it doesn't run any of your games.

  • by Anonymous Coward

    You either have it or you don't.
Sounds like what they really have is a cache that's reconfigurable between how much is exclusive and how much is shared. Nifty feature, could be really useful, but it needs a better name. Calling it "virtual" is a bad idea.
