IBM

Is IBM's Power4 A Threat To Alpha, Sparc, IA-64? 103

HiyaPower writes: "There is an interesting discussion here about the IBM Power4 chip. While it is most directly compared with the upcoming Alpha, it also has ramifications for the penetration of the IA-64 and/or Sledgehammer into the server market. The conclusion drawn is that the Alpha, etc., may be in for some very tough sledding. Now if only Apple could be persuaded to use these instead of what the article terms its 'embedded controller chips'..."
This discussion has been archived. No new comments can be posted.

  • The article says POWER4 implements the 64-bit PPC ISA. Does that mean it could run current PPC binaries with no problems? If so, Apple could drop it into future Macs with nothing more than some changes to the kernel...

    Jobs has got to be pretty pissed at Motorola by now. Rumour has it he's shopping around for new chips. I bet AltiVec is the only thing holding him back. AltiVec is truly amazing for certain tasks...

    Is there a technical reason why IBM is avoiding AltiVec? Could AltiVec somehow be responsible for the problems Motorola is having boosting PPC clock speed?

    Oh, and can you imagine a Beowulf cluster of these? Sorry, just had to say it. It would be pretty mind-blowing, wouldn't it?
  • The only really beautiful/useful Mac I've seen so far was when I took a photograph of a BSOD, turned it into a nice picture, and installed it as the desktop for the Mac. Appropriate icons, and there you go...

    I can really identify with you, so much.
  • If Linux on PPC is a key part of their strategy, it would be nice if they threw some support to open source compiler optimizations for PPC. It'd be a shame to have Linux underperforming on these puppies because the optimizer was not everything it could be.
  • Motorola owns AltiVec. Almost two years ago, Motorola and IBM split on the main PPC design: Motorola wanted AltiVec, which adds to the die size and therefore limits clock speeds, while IBM wanted really fast chips. So Apple had to choose between AltiVec and higher clock speeds. They took AltiVec, except Motorola couldn't keep up with demand. So Apple persuaded IBM to license the 7400 from Motorola. As far as I know, most of the chips shipping in Macs now are made by IBM to a design by Motorola.
  • Hell yes - but macs are optimized for colour!
  • Since UNIX supported 32 bits in 1979 (on the VAX) and
    it took Microsoft until Windows 95 to catch up,
    I hope they are faster this time around!

  • i'm sure it's been said before but...
    the likelihood of these puppies ever landing in a colorful case with a fruit on it is, like, nil.
    a) does IBM even sell these for other companies' use in their boxes? i think not.
    b) apple would have to redesign just about everything to put one of these in a workstation/server, at which point their whole unified motherboard architecture plan goes out the window
    c) they require a fan, and you know how steve feels about fans
    d) even if IBM would sell Apple the chips, can you imagine the price? you think a mac's expensive now?

    and lastly, just for the obnoxious value, can you imagine an Appleseed cluster of these?! sweet!
  • Pipelines speed up a processor by overlapping execution times: an individual instruction might take 12 clock cycles to execute if the pipeline is 12 stages deep, but the effective throughput is one instruction per clock cycle once the pipeline is filled.

    An overclocked micro-sequencer could have the same effect without the problems of refilling the pipeline at a branch. Overall, I suspect the overclocked-sequencer approach might offer comparable performance with a much quicker design cycle and/or lower design cost.
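    The throughput arithmetic behind this can be sketched in a few lines (a toy model; the 12-stage depth and the full-depth branch-flush penalty are my assumptions, not figures from the article):

```python
def pipeline_cycles(n_instructions, depth, flushes=0):
    """Cycles for an idealized in-order pipeline: the first instruction
    takes `depth` cycles to drain the pipe, each later one retires one
    cycle after the previous, and each mispredicted branch refills the
    whole pipe (a deliberately pessimistic penalty)."""
    return depth + (n_instructions - 1) + flushes * depth

print(pipeline_cycles(1, 12))                 # 12: latency of a lone instruction
print(pipeline_cycles(1000, 12))              # 1011: ~1.01 cycles per instruction
print(pipeline_cycles(1000, 12, flushes=50))  # 1611: refills erode the gain
```

    The deeper the pipe, the closer throughput gets to one instruction per cycle on straight-line code, and the more each branch flush costs.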

  • Fact is that not a single ISP uses S/390 systems for serving web content. If the IO of these machines would be so excellent, why don't they use them?

    Bzzt. Wrong answer.

    Granted, S/390 is not the most popular hardware for ISPs, but plenty use it. Here's an article [crn.com] about one.

    Here's an article [cnet.com] where eBay discusses the possibility of moving to the S/390 platform.

    This article [internetwk.com] discusses how some government agencies are web-enabling their mainframes.

    I'll grant that traditionally IBM mainframes can be a bear from the usability perspective. However, things are changing quite quickly, especially with the advent of Linux on the S/390.

    have a day,

    -l

  • No, their contract says they can only buy the same chips from IBM if Motorola cannot supply them. That mattered when IBM and Motorola were both producing basically the same G3. Apple could simply do a deal with IBM for the Power4 (which Motorola has zilch to do with) and kiss Motorola's ass goodbye.
    Of course, there are plenty of other reasons this will never ever happen which I don't feel like typing yet again but it's not the contract.
  • I would just like to note that you will not see this chip in PCs; the Power4 will not even be in ordinary server-class systems. IBM is going for top-of-the-line big-iron systems and supercomputers (they still have to build a 100-teraflop system for the US government). The lowest place you might see this chip is at the high end of the AS/400 (now eServer i) line.
  • They have had a long tradition in RISC development

    Well, considering that IBM researcher John Cocke invented RISC... [ibm.com] (scroll down to 1980). Also do a keyword search on 'Ted Codd'. Guess where he worked?

    IBM basically suffered from BigCorpBlah syndrome, which afflicted most big corps during the '50s thru the '90s.. So much cool shit got invented and totally ignored, and left to their inventors to splinter off and start dozens of revolutions..

    One has to wonder if Micro$oft R&D is sitting on something interesting that is being smooshed because it doesn't fit into some marketdroid's PowerPoint slide... M$ has the size, hubris, and complacency of a BigCorpBlah victim..

    Your Working Boy,
  • I'm not too interested in defending MOSR. For the most part the site is wishful thinking by someone who likes Macs. However, getting the details right a few days before the launch might mean that someone from Apple actually talks to them (which for the most part I doubt).
  • You'd be surprised what IBM could do with the G4... currently they have the PPC 405 embedded in things such as TiVo, the G3 used all around and sourced to Apple, and the Power4 for heavy lifting...

    1) IBM could use additional revenue.

    2) IBM could use the reputation boost from the Apple community of breaking the long standing mhz freeze.

    3) Never underestimate what IBM will find a use for. If (and when, if my predictions are correct)
    they do produce a PPC7400 (or variant) be sure they'll have other uses for it.


    A host is a host from coast to coast, but no one uses a host that's close
  • Just keep your fingers off the mouse button, and Photoshop in the foreground so that all of your CPU time isn't given to the mouse click or the Finder. :P

    And that is what a lot of people do: open their mail or plan their next move as Photoshop chugs on. It isn't a fatal flaw for many people.

    MacOS X Server has absolutely nothing to do with MacOS X. It is a totally different operating system. MacOS X Server is based on NeXT, not BSD like MacOS X.

    Um, "absolutely nothing" seems a bit harsh. NeXT, Mac OS X, and Mac OS X Server are all based on both Mach and BSD. The point of my bringing up Mac OS X Server was that you declared that Mac OS X and every other promised OS was vapor.

    RELEASE means that it is a FINAL PRODUCT

    oh i thought release meant it was something that had been released. mac OS X Server has been released as a final product

    On the contrary, your post is the one that reads like something written by a 15 year old. If you want to get particular, the only word that you capitalized correctly is multitasking - something that your precious Macs can't even perform!

    my macs can TOO capitalize! ;) (btw i don't proofread for punctuation either)

    In retrospect, questioning your maturity wasn't appropriate. But... to me both of your posts read as whiny and less than what i would expect from an adult. You reply by saying "oh yeah i'm rubber and you're glue...!" and then you proceed to take out your computer list and wave it around.

    MacOS is shit.

    how eloquent. and the worst is that apple will probably screw up macosx

    Only on Slashdot can your post be moderated down for stating facts.

    welcome to slashdot! everywhere else you'd get flamed... : )

  • In the long run, we all know that what will count is OS support. IBM has a strong, stated linux strategy, but we'll see where it goes.

    Don't get me wrong, I am one of the few who actually like AIX. I think it's a mature, useful operating system with some really cool characteristics (fairly integrated hardware support and debugging, and an excellent logical volume manager (better than Veritas, imho)). Nevertheless, it remains to be seen whether IBM can actually bring Linux to their whole server platform (including these bad boys).

    (There have been instruction set changes in the IBM processor line in the past, particularly between the POWER, POWER2 and POWER3 architectures, so I'm interested to see what the differences in this instruction set are...).
  • There have been some studies recently about what kind of system would best run Oracle and serve transactions. (See the Alpha-multiprocessor article in this year's ISCA proceedings.)

    The answer is a multiprocessor (because Oracle is threaded) with a shared L2 cache (because the threads share most of the L2 footprint.) The advantage is quite large.

    So, what IBM is building here is a server engine. Which is why there will be a 128-processor system, but there won't be Altivec.

  • Whatever control Microsoft has over the 64-bit processor market is shrinking. Even if Microsoft isn't "punished" by the DOJ, companies now have the courage to stand up and do things Microsoft doesn't like, such as selling a PC with Linux pre-installed (previously unheard of from any major distributor).

    I have a feeling we are going to be seeing a lot more from Apple, the Linux community, and maybe even Be.


    - tsi
  • Two vendors for CPU? I'll admit to not following Apple like I used to, but who did they buy CPUs from other than Motorola? IBM? AFAIK they've only ever run on Motorola CPUs, 68000 family and then PowerPC..
  • by _outcat_ ( 111636 ) on Tuesday October 17, 2000 @08:18PM (#697851) Homepage Journal
    To IBM's credit, some geek friends and I were at a smallish local tech convention, and some IBM guys were there talking about the new nomenclature and such in their server lines. For the really big stuff, the S/390s (I think pSeries and zSeries, but I could be wrong) that handle HUGE, HUGE payloads and run AIX (isn't there OS/390 too?). But for their middle-range stuff, we couldn't get the guys to shut up about Linux. One of our guys mentioned it, and the talk was all about that the whole time. "Our customers like it because we don't have to package costly licenses. And it's very, very, very scalable and flexible, we can run it on everything. And it's a UNIX so we can integrate it with AIX..." on and on...

    So just for that IBM's not a bit bad, and their NUMA-Q architecture looks REALLY neato. As for putting Alpha and Sparc out of business...Hey, you build a better mousetrap. Big Blue has always had great R&D and put out some of the best products out there. That doesn't mean Alpha and Sparc and such are going to plummet.

    I say kudos to Big Blue.

  • Apple cannot use these chips because their contract with Motorola forbids going to another company.
  • Motorola drops the hot potato beats!
  • That line first appeared with the 386. And it was wrong then, too.

    The line with the 386 (and 486, and P5, and PPro) was that each was destined to remain a workstation chip for some time. This was Intel marketing bluster, yes, but it was moderately true: the initial version of each of those chips was produced on an old process; once the chip moved to a new process, it became feasible for upper-end mainstream machines. Within two years, they were mainstream.

    Right. But the difference here is, these chips are not intended for workstations. They're not intended for moderately sized servers. They're intended to replace mainframes, and to run high-high-end scientific code. In case you didn't read my other post [slashdot.org], these things are going to cost *at least* $10,000 *apiece* to MAKE. Just for the MPU. Moreover, they will not work up to their full potential without *massive* bandwidth, which still costs mucho $, last time I checked.

    What servers really need is multiple CPUs and huge I/O bandwidth, not faster individual CPUs.

    Oh wait, you didn't even read the article. The POWER4 *is* multiple MPU's--it's 8 cores on 1 die. This thing doesn't come in anything less than 8-way configurations. As for I/O bandwidth, would 84 GB/s be enough for you?

    You still think this thing is a desktop chip???

    On the other hand, Apple can't afford to change CPUs again.

    Or maybe they can't afford not to. Contrary to all those MacOS Rumors, there are no definite plans for Apple to move even to the upcoming G4+ MPU's, which are essentially another incarnation of the tired G4, just with a stretched out pipeline which will get it to 800MHz at the cost of lower IPC. There aren't definite plans yet because the G4+, like the G4 and the G3 before it, is not a desktop MPU but rather an embedded/DSP chip which Motorola happens to sell to Apple to use in Macs. The design, ramping, pricing, roadmap, are all influenced primarily by Moto's embedded customers first, not by Apple. Unfortunately, despite the fact that they are utterly dependent on them, Apple has decided to treat first IBM and then Motorola rather poorly, and thus haven't gotten much in the way of support when they decided that they may, perhaps, want to increase the speed of their top MPU more than 50 MHz in a year. Whoops. BTW, did you catch Apple's earnings today? Ouch.

    All of this is too bad; OS X looks like perhaps the best thing going as far as operating systems go. There are always rumors that Apple's going to finally make its surprise move to x86. Their experience with PPC the last year or so, and the accompanying beating their bottom line has taken, might be the thing to finally push them over. I personally think they might still be able to carry over enough incompatibilities to stay the sole supplier of MacOS hardware; after all, the XBox uses x86, and it will be plenty incompatible with PCs. Migrating software will be a gigantic pain... but on the other hand, it's not like the Mac has too much in the way of software anyway. (OS X, and any Cocoa programs, will port very, very quickly.) Who knows?
  • Slashdot next year:

    "AMD is 64 bits! Which one is more advanced? Signal 11!?"

    "Do the math!"

    You heard it here first.
  • by cabbey ( 8697 ) on Tuesday October 17, 2000 @08:28PM (#697856) Homepage
    • IBM isn't leapfrogging anyone, this isn't a new radical change from their current tech, it's just one of the few times they've actually told the industry what's going on inside. The "tour de force" is NOT the technology it's the fact that you, Joe Consumer, are being given a glimpse of the technology which is "ho-hum" inside IBM.
    • the core features "apple would die for" are just integrating existing IBM architectural feats from the S/390 and AS/400 (er, I meant zSeries and iSeries) architectures... which by the way, are routinely bad mouthed. I enjoy the irony in their being admired this time.
    • regarding the 10 to 12 levels of logic: one other cause, only hinted at here but mentioned earlier, is the support for the old POWER instruction set... software trapping ain't cheap.
    • re the clock speed: let's all remember that it wasn't too long ago that the RISC camp decided to take off the gloves and step up to the CISC bigots' "clock rate == length of manhood" competition. Before then, lots of RISC machines operated at significantly lower clock speeds than CISC machines; e.g., I've got a Power2 that runs at 66MHz and smokes a Pentium II at 300MHz for a LOT of stuff. Now that someone has thrown down the gauntlet and said "ok Intel, we'll see your 1GHz," things will get REAL interesting in the PowerPC vs. x86 world. I've got to thank Digital, er, Compaq, for entering the contest first on this one.
    • nobody said there weren't delays... even IBM isn't THAT good, least of all the managers. But after being through the antitrust crucible there is one sterling rule at IBM: you NEVER announce something UNTIL IT'S DONE. There are LOTS of procedures to ensure that, and lots of managers are employed just to conduct that stuff.
    • if the Alpha does turn out to be the worst hit by the POWER4 competition it will be a sad day indeed; Sun, on the other hand.....
    • it already runs Linux; too bad Linux doesn't scale up to as many processors as they'll be putting in systems as well as AIX and OS/400 do. (note, not a troll or a flame, that's FACT that even the kernel folks don't disagree with. Linux DOES NOT handle 24 processors just now... we need to fix this!)
    • as someone else has already noted, to REALLY see the benefit of this you need your application to be well behaved, and preferably well threaded. This is sadly easier said than done, and the number of skilled engineers capable of writing this type of code at the application level is slim to none, because this isn't the kind of stuff that interests application people. Instead it interests infrastructure types, and when they put out a drop-dead-gorgeous infrastructure to build your next-generation application on, too many idiots refuse to climb the learning curve needed to fully exploit it, and the accolades of those who did aren't enough to keep imbecilic pointy-haired managers from killing off the infrastructure. (who, me bitter? no....)
    • the site is either slashdotted, or really slow
  • Well, the "NEW INFORMATION" you seek would obviously be my opinion on the subject.

    However, since you mention another source of information, would you be so kind as to post a link, perhaps?

    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Okay, I think the Apple comment was all in humor. If any of you are actually thinking that this PowerPC-esque chip has any potential to see daylight at Apple, it sure isn't as a product offering. Apple does NOT market high-end servers, it does NOT sell number crunchers, and if you think you are EVER going to see a Power4 in a translucent case, then I hope you're holding your breath, because darwin is waiting for you.

    The PowerX line of chips don't run word, they don't process photoshop filters real well, and they need MASSIVE cooling. In fact, I don't see anything at all that would appeal to an actual Macintosh customer. What DO the chips do?

    Well, they crunch numbers, run molecular simulations, etc... I know enough about the Power4 to give a decent speech on the target market and application uses, but all I wish to say upfront is that if your comment has anything to do with Apple and the Power4, you are wasting time and, most of all, showing your lack of knowledge when it comes to computing. Forgive the typos, I have to get back to hour 37 of work.

    ciao

    bort
  • The IBM POWER3 already has dual multiply/add units per CPU and dual load/store units -- this sounds like just upping the clock speed and multiprocessor integration. What I want to know is this: how will it perform on real apps instead of just LINPACK? I've benchmarked POWER3 on serious scientific apps, and seen the following performance gaps:
    • MM5 is a big sloppy vectorish code for meteorology modeling; POWER3 delivers about 25% of the performance I would have expected from its LINPACK numbers and given MM5 benchmark performance on known SGI's and Alphas.
    • MAQSIP is a big air quality modeling code highly optimized for general microprocessor/parallel; POWER3 delivers about half the expected performance.
    Does POWER4 have a similar gap between performance on a very simple and regular app (LINPACK) and real-world ones?

    NOTE: for what it's worth, Sun SPARCs give excellent MAQSIP performance, but even more miserable MM5 performance (as compared to processor-peak) than POWER3.

  • I'd like to comment on some of your statements:

    "[...]no unix server can currently compete with even a middle of the road OS/390 machine for heavy server/transaction/database type workloads."

    This would be true if you added "in an ultra-high-reliability environment". Fact is that not a single ISP uses S/390 systems for serving web content. If the IO of these machines would be so excellent, why don't they use them? What makes things even worse is the fact that serving web pages is similar to the IO load envisioned by Amdahl, namely relatively large chunks of data being transferred. As a result even the terminals (3270) are based on transactions of this type: the user edits on the screen and commits changes every once in a while. This is very different from the character-based approach of Unix ttys.

    "[...]a modern OS/390 the IO is handled by up to 1024 of these processers called 'channels'."

    This is unclear to me. OS/390 is an operating system, but you seem to be making a statement about hardware. It is true that 390 IO is based on a channel subsystem. All models from G3 through G7 (GA 2000) have 256 channels. It is wrong, however, that a channel has anything to do with a processor. A channel is an IO line with an interrupt of its own. Each of these channels may end in a channel controller, and to each channel controller one can attach 256 subchannels. The subchannels end in a device. This makes a total of 64k devices.

    A modern G6 has 16 processors (390 architecture). 14 for workload, 2 for the IO subsystem, 2 in case two of the others die.

    "Nearly everything relating to transactions was done first on an OS/390; databases in general, relational databases, messaging and queueing software, etc. are all areas where the initial and continuing innovation took place on OS/390s"

    True. One of the last "innovations" of DB2 on the 390 was to optimize the data distribution on the hard disks depending on the speed of the movement of the disk heads. The last 390 hard disk physically built was a 4 Gig drive (I believe it was called the 3390 model 4) in the early 1980s. The disks had a diameter of almost a yard. Since then IBM has simulated these 3390s through (SCSI-) disk arrays. The controlling software of these RAIDs introduces special waits so as not to disturb and crash DB2, which relies on specific timings.

    The last disk logically defined was an 8 Gig drive. In case all 64k devices are 8 Gig drives, this makes 512 TB of storage. This will be a limit very difficult to cross.
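    As a quick sanity check on the figures above (taking the comment's numbers at face value):

```python
# Channel-subsystem device count, per the figures in this comment:
# 256 channels, each fanning out to a controller with 256 subchannels.
channels = 256
subchannels_per_channel = 256
devices = channels * subchannels_per_channel
print(devices)    # 65536, i.e. the "64k devices"

# If every device is the largest logically defined drive (8 GB),
# the addressable ceiling works out to the 512 TB mentioned above.
tb_total = devices * 8 / 1024
print(tb_total)   # 512.0
```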

    Is OS/390 innovative? All I know is that the OS/390 filesystem is non-hierarchical, i.e. it does not know of directories! The filesystem is not block oriented. If you append data to the end of a file, the file may overflow: the user must then create a new, bigger file, copy the old file into the new one, and delete the old one.

    Programs in OS/390 must be started using a special "scripting" language to supply parameters!

    I could go on with this list. The 390s were nice in the '70s. They still do a good job in some places that can afford them and need high reliability. All I'm saying is that the future belongs to a different kind of machine (definitely not 31-bit like the S/390) with a different kind of OS.

  • The S/390s are the z-series, the RS/6k line is the p-series, AS/400 is the i-series, and so on...
    --

  • First off, I'd like to say that that was indeed a great response. While I'm not sold on IBM's approach just yet, at least I'm more informed now than I was when I read the article.

    The main technique I've seen exploited in DSP programming (I know someone who does this for a living) is software pipelining, which is like loop unrolling, except that you have to pay attention to the instruction mix to make sure you use all your execution units.

    Compilers these days can do loop unrolling, and past that I guess you'd just hope to be able to reorder the instructions somewhat, to get a decent instruction mix out of the code in the loop, and maximize that magical "IPC" number. However, yes, it's hard to get rid of all the dependencies, and run-time profiling (a la Transmeta) will probably get more popular as research into VLIW systems gains in popularity.
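    To make the unrolling idea concrete, here is a sketch of the source-level transformation (Python stands in for illustration only; in real DSP work this is done in C or assembly, and the win comes from the scheduler mapping the independent accumulator chains onto multiple multiply/add units):

```python
# A dot product written two ways. The unrolled body exposes four
# independent accumulators that a scheduler can issue in parallel;
# Python itself won't run it any faster, so this is purely a sketch
# of the transformation a compiler (or DSP programmer) applies.

def dot(a, b):
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_unrolled4(a, b):
    n = len(a)
    s0 = s1 = s2 = s3 = 0.0
    i = 0
    while i + 4 <= n:            # main unrolled loop: 4 independent chains
        s0 += a[i]     * b[i]
        s1 += a[i + 1] * b[i + 1]
        s2 += a[i + 2] * b[i + 2]
        s3 += a[i + 3] * b[i + 3]
        i += 4
    for j in range(i, n):        # epilogue for leftover elements
        s0 += a[j] * b[j]
    return s0 + s1 + s2 + s3

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 2.0, 2.0, 2.0, 2.0]
print(dot(x, y), dot_unrolled4(x, y))   # both 30.0
```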

    SMT sounds interesting; does it refer to "threads" in the software sense, or just separate processes? For the moment I'll view it with skepticism, just as I did when Sun built streams into the kernel. Any multiprocessing Unix system should be able to run separate processes on separate processors, and some of them can surely do the same with different threads, depending on the implementation. I guess IBM would just have a more compact and possibly more scalable solution in this case.

    And yes, I realize these are supposed to be server chips for now, and knowing IBM, they might just stay that way. But this sort of technology usually filters down into the PC market quicker than you'd think, especially if it improves the price/performance ratio, as this might do, eventually.

    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • None at all. BSD has how many forks? 4? 5?

    Linux could survive quite easily with at least 3 forks. It is arguable that it has two at the moment with the 8086 versions.

    I see maybe Embedded Linux, 'Official' (Linus) Linux, and Big Iron Linux as 3 forks that could happily live with each other, sharing code wherever possible or necessary.

    I mean, it's not like the code is going away... this is Free Software we are talking about here.

  • I have long been waiting for something like this to be announced by IBM. They have had a long tradition in RISC development (RS/6000) but never really pushed this technology into the (business-) server market. The reason was that their management decided to protect the S/370, now S/390, cash cow.

    IBM politics was that certain techniques (like the ceramic multi-chip modules, copper process, etc.) were reserved for the S/390s. This was the reason why the RS/6000s with AIX were never really competitive (S/70 AIX server).

    When Sun brought their E10000 server into shops where IBM thought they'd never lose to Sun, IBM obviously started changing their minds. The (RS/6000-based) 24-processor S/80 outperforms the 64-processor E10000.

    This new POWER4 design makes clear that IBM favours modern Unix-based RISC servers over the old S/390 mainframes.

    This is a good thing, especially for IBM's S/390 customers, who are starting to have problems finding talented people who want to work with dinosaur machines and OSes. (IBM also has major problems implementing new or even innovative software for the S/390s. Exception: Linux/390 ;-).

    It is better for IBM to offer them a safe way into the RISC/Unix world (we are looking at huge amounts of enterprise-critical data sitting on all these S/390s) than for them to try to migrate on their own.

  • There is one other way to raise the effective work done per clock cycle: raise the amount done by each instruction. While the idea fueling RISC in the beginning was to make simpler (and thus faster) chips, a modern RISC chip is anything but simple.

    I have to wonder if an overclocked microprogram unit with a CISC instruction set couldn't be made competitive again. Ultimately that might be a simpler design, and thus potentially faster than a modern RISC chip.

    One of the ultimate limits on processor performance is how fast you can get instructions into the processor. If you think of CISC as a compressed (Huffman encoded) version of RISC it is easy to see that CISC does have a theoretical advantage there.

    I know this is heresy, and that the modern religion is RISC == GOOD, CISC == BAD; but it might at least be worth someone spending some time thinking about it. If nothing else, the reduced transistor count could do something about the spiraling power consumption problems in processors. Or you could integrate multiple processors on one die and do the SMP on chip with the much smaller cores this would make possible.
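    The "CISC as compressed RISC" intuition can be made concrete with a toy calculation. The opcode mix and encoding widths below are invented purely for illustration, not taken from any real ISA:

```python
# Fixed-width "RISC": every instruction costs 32 bits to fetch.
# Variable-width "CISC": common operations get short encodings,
# Huffman-style, so the average fetch cost per instruction drops.
mix = {                 # opcode -> (fraction of dynamic mix, encoded bits)
    "load":   (0.25, 16),
    "store":  (0.15, 16),
    "add":    (0.30, 8),
    "branch": (0.20, 16),
    "mul":    (0.10, 32),
}

risc_bits = 32
cisc_bits = sum(freq * width for freq, width in mix.values())
print(round(cisc_bits, 1))              # ~15.2 average bits per instruction
print(round(risc_bits / cisc_bits, 1))  # ~2.1x denser instruction fetch
```

    Whether the extra decode complexity eats that fetch-bandwidth advantage back is, of course, exactly the trade-off the comment raises.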

  • Sounds good in theory, but failed in practice. The Alpha, when Win NT was available for it, ran IA32 software extremely well with the FX-32 emulator package, and look where that ended up.
  • This is all I get trying to get to the page!

    An Error Occurred While Attempting to Process the Request

    Date/Time: 10/18/00 09:13:50
    Template: D:\html\users\deankent\realworldtechcom\html\page.cfm
    Remote Address: 12.20.217.130
    Refering URL: http://slashdot.org/

    Diagnostics

    ODBC Error Code = S1000 (General error)

    [Microsoft][ODBC Microsoft Access Driver] Could not update; currently locked by user 'admin' on
    machine 'CFX1'.

    SQL = "UPDATE adlog SET total = 12394 WHERE stamp = '2000/10/18' AND adid = 10 AND type
    = 1.0"

    Data Source = "ADS"
  • I suppose it's all going to depend on how good their branch prediction is. Their pipeline is getting pretty big, and when they have a break it's gonna take forever to flush'n'fill that pipeline. I think the 2-core-per-die design is interesting, but hardly revolutionary.

    Also, the article's conclusions are pretty badly flawed, esp. regarding the Alpha's future. The supposition that the POWER4 will eclipse the EV68 should be obvious but inconsequential, given how far the POWER4 is behind the EV68 in terms of getting to market, and the POWER4 has yet to convince anyone that it will be able to compete with the EV7 in real-world performance (of course, everyone's guesses could be wrong and EV7 could end up a big dog, but it's unlikely).

    Furthermore, considering the article is dated October 16th (2 days ago) I found the following statement interesting:

    [Sun] will likely have refreshed its large-scale system products with the UltraSPARC-III by the time the first POWER4 systems ship, Sun's offerings will probably still be clearly outclassed.
    Considering that UltraSPARC-III is already shipping, this statement was confusing. But it doesn't really matter, UltraSPARC-III at 900MHz is already outclassed by Alpha EV67 at 667MHz [cite] [ideasinternational.com], no one even considers them a contender in the performance race.

    And what about HP? They gambled too heavily on IA-64 and cut back development of their PA-RISC, which was just recently getting interesting. Now that IA-64 is delayed and will probably be a dog (notice how Intel hasn't been leaking any spec numbers?), HP is realizing what a mistake they made and has revitalized their own development; it should be enough to keep them from completely being left behind, but just barely.

  • It's so much vapor that you can download the beta and use it..

    Oh wait, you obviously have done that. OS X is actually pretty damn nice looking........

    Jeremy
  • *snicker*
    baby, if you're going to correct people's spelling, you really ought to double-check your own.
  • by mr ( 88570 ) on Wednesday October 18, 2000 @03:49AM (#697871)
    According to Robert Morgan who runs Apple Recon [pelagius.com]

    Steve Jobs said in a visit to Motorola
    "It will be great in two years when we arn't using your chips."

    After this statement is when Motorola publicly started calling the PPC line 'embedded'

    How often in YOUR relationships can you walk up to your partner, say 'to hell with you, I'll be leaving in two years,' and still expect said partner to keep giving a damn about you?

    Apple then made the problem WORSE by publicly calling AltiVec 'the future' and spending hours on how wonderful AltiVec is. Apple will now have a hard time leaving AltiVec behind after all those statements.

    Jobs's ego put Apple in the place Apple is in. Motorola only reacted to the actions Jobs took; it is not as if Motorola NEEDS Apple, and it took those actions to protect its own investment.

    Jobs wants to be the 'saviour' of Apple, fine. Then Jobs must also take the mantle of the person who helped kill Apple. Amazing how the history of Jobs repeats itself.
  • I would hate for Apple to have to give up on AltiVec.

    Yeah, it seems to work very well with MP3 ripping.. My 500MHz Cube can do 128kbps VBR normal-stereo at between 2.5x and 5x realtime..

    Your Working Boy,
  • Yeah, it seems to work very well with MP3 ripping.. My 500MHz Cube can do 128kbps VBR normal-stereo at between 2.5x and 5x realtime..

    So can my 500mhz P3 - and I'm doing the mid/high-quality encoding, which goes up to ~224kbps; I don't know if that makes encoding faster or slower. MP3 encoding (ripping is a function of CD-ROM speed and I/O bandwidth, and in and of itself has nothing to do with MP3) does not parallelize well, at least as it's done today. So AltiVec really doesn't buy you anything there.

  • These chips are not to be confused with PowerPC chips. They are server chips only, intended for seriously expensive machines.
    That line first appeared with the 386. And it was wrong then, too.

    What servers really need is multiple CPUs and huge I/O bandwidth, not faster individual CPUs. Loaded servers always have lots of threads running. On desktops, one thread typically is using most of the CPU time. Thus, the desktop is the place where the fastest CPUs are used. Servers are configured for max price/performance without sacrificing reliability, and tend to run a bit behind the fastest desktops.

    On the other hand, Apple can't afford to change CPUs again. The last transition cost them a big fraction of their applications (for example, almost all the CAD vendors bailed out) and a big chunk of their user base. In retrospect, better 680x0 machines would have worked out better than going to PowerPC. The whole PowerPC thing was supposed to get IBM into MacOS, remember, and that was a total disaster. Now that everybody knows how to make CISC machines faster, there's no reason 68K machines couldn't be up there with x86 machines. And the architecture is much better.

  • Drop them like a hot potato and Apple, and especially OS X, will soar. Give them a real processor; the support is very simple: update Mach and you're in. Imagine the possibilities. It's endless...
  • Microsoft made an Alpha version of windows NT a while ago. It sucked, but they did do it.
  • 1GHz Power4 beats the crap out of a 500MHz G4 w/AltiVec. I don't care *what* you're trying to accomplish.
  • You'd think Jobs would be regretting his decision to kill off Exponential.

    Now THERE'S a bad business decision.

    This all just proves one thing. It's not PPC that's broken. It's Motorola, and AIM that are broken. AIM doesn't allow any REAL competition between Motorola and IBM. Too bad that the entire computing world is held hostage by the whims of evil Bill Walker of Motorola.
  • BTW - only inexperienced and fat headed programmers call anyone else's coding 'stupid', since every experienced programmer has written stupid code at some time in his life...

    Now, I never said I hadn't written stupid code - And you don't have to be an experienced programmer to do that, either. Anyone can write bad code. Including me.

  • First off, the basic news here (that IBM's Power4 architecture will be two processors running at Gigahertz speeds) isn't news; that's been known for a while.

    However, it *is* nice to have this depth of technical information to examine, and also it's good to know that they're still doing this.

    I think the big advantage that VLIW instruction sets will have is strictly architectural, and I'm not sure how IBM's approach fits in yet, but it looks interesting. Throwing more chips at the problem is one approach, but remember that your competitors can do that too, *and* make the chips do more as well...

    However, IBM will have to make sure people design their apps with more than one processor in mind, which will be a Good Idea for the future, since more people might have multiprocessor computers.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Though I did work with a dude who was on a team that developed a mainframe-on-a-card for PCs. Not terminal emulators (like Wang cards), but an actual System 370 system that fit into an AT slot, used for software development. PC/370, IIRC.

    I also used to have an IBM PC Server Model 500, which is a Pentium 90. It had something like 32mb of ram, and when I got it, it actually had an MCA-bus mainframe card in it, and a number of terminals hooked up to it, because it was being used to test a networking product a company I worked for developed back in the day for IBM.

    I ended up yanking out the card, throwing it away, and putting NT 4 Server on the system, and using it for quite a while. It had a Mylex DAC960 in it, and 11 (!) 2.25gb UW disks in it. Eventually I pulled the ECC RAM, the disks, the 2.88mb floppy, DDS-2 DAT and 2x :) SCSI CDROM, saved that, and ditched the case, which was about the size of two full tower AT cases.

    I would have kept it, but have you ever seen the price on a 100mbps NIC for MCA bus?

  • I'm not sure which processor you're referring to. I remember the SPARC coming out at something like 33mhz when the fastest 386 you could buy was 20mhz. I remember being shocked at the clock differences between the Alpha and the 486. I wish I could quickly find a clock rate graph for the R2000, SPARC, etc., and the year they were released.

    My understanding of RISC has always been lower CPI and higher clock at the cost of higher instruction counts. 'Deep' (and wide) pipelines had the advantage of both increasing the clock rate and lowering the CPI, but at the cost of requiring more instructions to do similar work. I don't really remember the RISC camp losing the clock rate war until the early '90s when the Pentium came out.

  • Also wanted to add that OSes like LinuxPPC and Yellow Dog have done wonders for the PPC world in new processor and device support. For a small team of devoted people they really clean up getting the job done... and yes, as someone mentioned in another post, SMP is on the way in.

    Why get faster when you can get MORE for less (that is, as long as you don't run Windows, sucking up all your CPU cycles trying to look pretty)?
  • by ToLu the Happy Furby ( 63586 ) on Tuesday October 17, 2000 @09:04PM (#697884)
    I think the big advantage that VLIW instruction sets will have is strictly architectural, and I'm not sure how IBM's approach fits in yet, but it looks interesting. Throwing more chips at the problem is one approach, but remember that your competitors can do that too, *and* make the chips do more as well...

    Not sure how IBM's approach fits in yet?? Read the article.

    Amongst other things, the POWER4 is *not* VLIW; it's straight-ahead modern RISC at its finest. With massively gigantic buffers, bandwidth and execution resources (8 functional units/core * 8 cores = wow), this chip'll do quite nicely on IPC/core, not to mention combined IPC for all 8. While presumably not quite as elegant, the design of the individual cores has a lot in common with the archetype of perfect RISC cores, the Alpha 21264, and it has even more aggressive resources.

    Essentially what this means is, assuming this design is as good as it appears, the only way the competition will be able to catch up (without going the way IBM has and deciding on a prohibitively expensive 8-in-one design and packaging) will be through the use of innovative design tricks. The upcoming P4 has a few of those, incidentally, but the big one--and the one the P4 *doesn't* have--appears to be SMT, Simultaneous MultiThreading. Alpha has an 8-way SMT core coming out in a bit, and it ought to compete well with IBM's much more expensive 8-way SMP design here. And AMD appears ready to do 2-way SMT (or something similar) with the Sledgehammer in about 15-18 months. And Sun is rumored to have SMT in the USV design due in several years. But the POWER4 looks to lead in the "big bad" category for quite some time to come.

    (As for Intel's EPIC, the VLIW-like design strategy for their IA-64 chips, at the moment it's looking like a rather poor competitor to SMT. A quick explanation of why:

    There are exactly two ways to make an MPU run faster: 1) increase the clock speed, or 2) increase the IPC (instructions per clock). Unfortunately, the best we've been able to do so far in the IPC realm is about 1.4 IPC on SPEC benchmarks (Alpha EV6x). IPC on a P3 runs about 40% lower. Now, these IPC numbers are despite the fact that the Alpha can theoretically retire 8 instructions/clock, and the P3 5 (5 internal ops, not 5 x86 ops). Furthermore, simulations show that as far as attacking the IPC problem by adding more functional units, we're nearing the point of diminishing returns.

    The problem is, in order to run lots of instructions in parallel, you have to be able to safely extract parallelism from your code. And the problem with this is, you can't run instructions in parallel if they have dependencies, etc. And furthermore, nowadays all this parallelism has to be safely extracted in real-time by special hardware in the MPU itself; this makes your chip more complicated, and means you need to build a big buffer to hold instructions in flight so you can pick and choose which ones you want to run each clock.

    So many many years ago, HP had the idea, which it later sold to Intel (and which wasn't really their idea at all but has indeed been used in DSP chips for years and years), of getting rid of all that complex instruction-level parallel-finding logic on the MPU and doing it all at compile-time instead. This is the basic idea behind EPIC, the philosophy of Intel's IA-64 line.

    It sounds very nice, especially because in theory it means simpler chips (no complicated control logic), and simpler chips means faster chips. Heh heh heh. See, it turns out that the amount of instruction-level parallelism which can be safely discovered at compile-time is way way less than the amount that could be found in the chip at run-time (which, as we recall, is too small already). Thus EPIC was modified to allow the compiler to just place "hints" in the code. Well, this means you still need all that complicated control logic back in place, because you still don't have deterministically scheduled instructions. But following the "hints" and other changes to the ISA ends up making everything *more* complicated, not less. This, in a nutshell, is why Itanium is 3 years late, way over budget, unable to meet its very modest clock speed goal of 800 MHz, and fitted with a laugh-inducing 96kb of on-die cache, lower even than the lowliest Celeron: all this added complexity means bigger, slower, more complicated chips that don't have the room for cache or the elegance for high (or even adequate) clock speeds. Plus we have very strong evidence that compiler technology is still not nearly good enough to make the kinds of insightful IPC-giving "hints" which are necessary to even make the damn fool scheme work. Thus the only benchmark Intel has "released" for the Itanium is an RSA-encryption routine--simple enough to be hand-tuned in assembly. Meanwhile they have made the patently ridiculous claim that the SPEC benchmarks--directed precisely at the mid-cost server/workstation market which Itanium is aiming for--are "not relevant" to Itanium's market.

    A completely opposite approach is SMT, which uses a relatively small number of core changes to allow not just instruction-level parallelism to be gleaned, but also thread-level parallelism. In other words, the chip will run several threads in parallel, confident in the fact that their instructions will not have dependencies on each other, and thus be able to use much more of its full execution capabilities. Early indications are that SMT can improve IPC by remarkable amounts, like on the order of 2x the performance on otherwise similar cores!

    Unfortunately, it is too early to tell whether SMT will be as easy a design enhancement as is being claimed. Furthermore I've heard tell that SMT on IA-64 will be a lot more difficult than on a RISC MPU, so Intel could be missing out on a huge speed-up with this technique.)

    However, IBM will have to make sure people design their apps with more than one processor in mind, which will be a Good Idea for the future, since more people might have multiprocessor computers.

    These chips are not to be confused with PowerPC chips. They are server chips only, intended for seriously expensive machines.
  • by Anonymous Coward
    In the article, I saw no mention of:
    • running Quake 3 Arena
    • overclocking these babies
    • A Beowulf cluster of these
    Hardly a "discussion" IMHO.
  • I'd love to see the source for that.

    Even if they did have a contract, they could probably get out of it due to non-delivery of faster chips by Moto.

  • Well, now really it is.

    It's as silly as the debate when AtheOS came out - is this going to compete with Linux? Is it a threat?

    No, it's an option. When it comes to computers, options are good.

    (honestly, it looks like a newspaper headline - that's a bad bad bad thing.)

  • A lot of good all that processing power would be when it comes to a grinding halt by the user pressing the mouse button, eh?

    a lot of macs are single purpose : for fast photoshop. Multitasking not required.

    So far, it is vapor

    -bzzzzt-!! try again ! mac os x server shipped over a year ago (not to mention the release of macosxbeta)

    on a side note i wonder how old you are ? 15? 16? your post certainly doesnt read like it was written by a grown up.

  • I know MOSR is not the most reliable of all sources (they were right about the Cube, though), but a few months ago they put up a rumor about Apple and IBM trying to get the Power4 to work in Apple's UMA-2. The rumor also had a pretty good link to some info on the Power4 chip. The link gives you a PDF that covers some of the same material in the article of this story, but it is still worth a look.

    The rumor is now archived at:

    http://macosrumors.com/?view=archive/8-00

    If you don't feel like going to MOSR, the link in the rumor to the Power4 info is:

    http://www.austin.ibm.com/resource/features/1999/power4.html
  • Maybe this:

    Apple didn't support the PReP or later CHRP efforts made by IBM, Motorola and other hardware companies around the PowerPC, and instead preferred to stay on its little Mac island with Apple crap.

    Imagine that: You might have been able to buy your PowerPC based PC (with working OpenFirmware!) from your local dealer at low-low prices and install your favourite OS as you do with IA32 PCs today...
  • Probably more accurately: the dominant 64 bit processor will be the one that runs your $K's worth of legacy 32 bit software reliably, and faster than its 32 bit predecessor. The other architectures will have to be amazingly faster, or amazingly cheaper than whatever intel offers, in order to outperform that huge advantage.
  • Linux DOES NOT handle 24 processors just now... we need to fix this!

    There is (almost) nothing to fix. The Linux kernel does indeed handle more than 24 processors. Check some recent posts on linux-kernel (I can remember one about a 128GB-equipped Alpha).

    There is no explicit limit to the # of CPUs the Linux kernel can handle. It is just that it is not known or proven that Linux scales well on this number of CPUs. But the work on improving scalability is in progress (NUMA memory allocators, etc). Stay tuned :-)


    -Yenya
    --

  • by Anonymous Coward
    So many many years ago, HP had the idea
    It wasn't HP's idea. Look at the history of the people at HP and Intel: Multiflow, one of the first VLIW mini-supers in the 1980s. Then go and look at where some of the other people ended up... Equator [equator.com]. They have a quite spectacular chip: a TRACE VLIW machine on a single chip with a whole swag of special stuff. Check it out. Oh, and they do have the compilers.
  • There is no explicit limit to the # of CPUs Linux kernel can handle.

    Actually, yes there is. Linux currently uses a bitmask to specify certain CPU operations, so the number of CPUs is limited to the word length. In other words, Linux supports up to 32 CPUs on 32-bit platforms, and up to 64 CPUs on real machines (Sparc64, Alpha, Itanium (and MIPS64?)). Of course the fact that it supports that many CPUs doesn't mean that it scales linearly, but it looks like the 2.4 kernel will be good for at least 16 CPUs before performance starts dropping off. Various people (Dave Miller, Ralf Baechle and others) are working to remove the bitmask and allow more CPUs than the word length. SGI in particular need Linux to be able to support more than 64 CPUs for some of their machines.

  • One of the reasons for CISC in the first place (other than the non-existence of compilers that knew how to use registers effectively) was memory bandwidth. CISC was motivated by the extremely slow nature of memory several decades ago. Once DRAM became both cheap and fast, the CPU was the major bottleneck and RISC became more popular.

    Now we've gone full circle and memory is the bottleneck again. CISC could provide a performance advantage again.

  • Also, I hope not.

    Hey, this chipset is for some serious computing. Serious, serious. The range of boring, mundane software that would get a big boost from these fat, fat pipes into these fast, fast cores is limited.

    Quake]|[ would absolutely drip with 3D VR gore. (I get ill just thinking about it. Gibs everywhere!)

    But would you really need that kind of horsepower to run Word or an Excel spreadsheet of even the maximal complexity that Excel can handle? I thought not. (Excel plays fast and loose with some math functions, Newton's approximations, etc. I just implemented algorithms which don't. Banks can't use Excel for real-world amounts.)

    Face it, M$ can't use it. Even GHz x86 chipsets are a waste for the desktop.

    The server market is better served by Unix solutions that run multi-user (NT is not) and multi-threaded, across a range of big iron that's growing steadily bigger.

    M$ support this? I hope the [expletive deleted] not!
  • Yeah, my weather man said a cold front is coming in, and the gust of cold wind is threatening to blow all the vapor away. Intel had best batten down the hatches for this one.
  • Calling it a "supercomputer" is more a description of the specialized functionality of the processor than of the total speed. It used to be that vector processing was reserved only for the "big iron" machines from Cray and others. The introduction of AltiVec in PowerPC chips meant it was possible to get vector functions on a desktop machine for the first time. It really was a big step, and not just marketing bluster.
  • Whatever the hype from Sun and HP no unix server can currently compete with even a middle of the road OS/390 machine for heavy server/transaction/database type workloads.

    The OS/390 was architected by Gene Amdahl in the late 60s. The main problem was to get through the very large workloads required for NASA etc. using quite slow electronics. This was done by using the now well-established techniques of pipelining and parallel processing. What they also did (which is not common) is offload all the IO processing to separate processors. On a modern OS/390 the IO is handled by up to 1024 of these processors, called "channels".

    You can ask a channel for things like "read all the blocks in cylinders 12, 14 and 24 into this list of buffers" and then go do something else; when the buffers have been filled, an interrupt will be posted.

    As for innovative software: nearly everything relating to transactions was done first on an OS/390. Databases in general, relational databases, messaging and queueing software, etc. are all areas where the initial and continuing innovation took place on OS/390s (with an honorable mention for VAX VMS). For all the hype about UN*X, very little innovation has taken place on those machines (the great exception being web servers).

    In fact the most hyped recent advances, in the areas of clustering and high availability, have been around on OS/390 (in weird and wonderful ways) and VMS (perfectly formed and complete in the first release of VMS) for well over 10 years.

  • The problem is of course that CISC means variable-length instructions, which makes decode a pain.

    An ISA designed with this in mind might be a win (say the first few bits determine the instruction length, for example), but you still end up with the decode of an instruction depending on finding out where it begins, which depends on the previous instruction's length, which depends on the one before it, which depends on... This makes decoding a lot of instructions at the same time hard, which means having a fat pipe is harder.

    Like anything, it's a tradeoff.
  • One of the big complaints rolling around is, why Apple doesn't get faster mhz.

    IBM is the primary source for all Apple G3 processors. Moto is the sole source for the G4, because IBM has, up until now, opted not to produce a chip with AltiVec.

    I have said for over a year now that if IBM fabbed the G4 for Moto, the high-speed yields would come up, and that if IBM produced the G4, the speed ratings would increase.

    IBM just dropped 5 billion on new fabrication plants. IF IBM wanted to *own* the OEM contract for all of Apple's processors, they'd only have to produce their flavor of a PPC7400.

    I predict they will within a year's time, and at speeds comparable to Power4.

    A host is a host from coast to coast, but no one uses a host that's close
  • Well, once hardware like this is readily available, that'll be the next step: making the underlying OS more multithreaded. That should solve most of the performance bottlenecks right there.

    BeOS should have no problems, and Linux should do better now with glibc...

    I think a model like this would be better served with processes rather than threads; in all of these systems, will there be unified access to memory? I know the POWER4 will have it, since this is just a beast of a CPU grafted onto a traditional computer, but I can see problems in any NUMA system, where the memory for one thread might be closer to a separate processor. I guess they'll have to take that into consideration as well, for systems like that...
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Exactly. RISC isn't inherently faster. What makes the RISC chipsets faster is that the cache management is easier and the big reason that's needed is that modern PCs don't have anywhere near enough fast RAM. Originally, PCs had 100% of their address space available to the processor in zero wait state memory (either DRAM or SRAM). Now, we treat the L1 and L2 cache the way we used to use RAM and system RAM the way we used to use the disk - as slow backing store.

    What we need is a new architecture that supports direct 1 clock access to RAM. With that, we don't need complex caching algorithms and can actually do interesting things with the instruction sets like the self-optimizing microcode that Zilog was working on for the Z-80000 or actually having instruction sets optimized for programming rather than ease of cache design.

  • a lot of macs are single purpose : for fast photoshop. Multitasking not required.

    ...and that is about all that they are good for! Just keep your fingers off the mouse button, and Photoshop in the foreground so that all of your CPU time isn't given to the mouse click or the Finder. :P

    -bzzzzt-!! try again ! mac os x server shipped over a year ago (not to mention the release of macosxbeta)

    MacOS X Server has absolutely nothing to do with MacOS X. It is a totally different operating system. MacOS X Server is based on NeXT, not BSD like MacOS X. MacOS X Public Beta is NOT A RELEASE. RELEASE means that it is a FINAL PRODUCT.

    on a side note i wonder how old you are ? 15? 16? your post certainly doesnt read like it was written by a grown up.

    On the contrary, your post is the one that reads like something written by a 15 year old. If you want to get particular, the only word that you capitalized correctly is multitasking - something that your precious Macs can't even perform!

    FYI, I am 29 years old. My first computer was a TRS-80 Model 1. I got a Color Computer when I was in the 6th grade. Since then, I've personally moved through an Amiga 1000, an Amiga 2000, a Quadra 840av, a 7200/75, a Power Computing PowerCenter Pro and various Intel boxes. I've used Macs almost every day for the last seven years, and I know the operating system inside and out. Believe me when I say that MacOS is shit.

    I used to like Macs because at one time, they actually performed better than PCs. Sure, the G4's AltiVec crap is wicked fast, but the OS does not take advantage of it. Advantages are found only in applications written specifically for the use of it. The only thing that it is good for is so that Apple can take benchmarks performed on AltiVec (which have no relevance to 99.9% of the computer's real performance) and use the results in a misleading ad campaign about G4 Macs being "supercomputers." The only thing that I have found that actually uses AltiVec is the Distributed RC5 client. It is fast as hell, but to what gain? When I'm not cracking RC5 keys, my 18 month old PIII/500 Linux box kicks the shit out of the G4/400 that I am writing this on 99% of the time - and for FAR less money.

    So, let me say it again: Give me a break! :P

    Only on Slashdot can your post be moderated down for stating facts.

    You Mac zealots just can't stand the fact that I'm right! I know, I used to be one of you. There is help available to you all! [linuxppc.com]

    (Score: -1 Troll)

  • Apple and Motorola repeatedly turned down attempts by IBM to share the burden of producing these chips. The Power4 chips, which are really just 64-bit enhanced PowerPCs, are produced for IBM's enterprise server division. For political reasons, I seriously doubt IBM will sell these processors in anything other than a complete system; they have to date been totally unwilling to share these chips with anyone who wouldn't kick in the cash to sell them. On a side note, I have worked for IBM in the past, and know those who work for them still, and this chip has gotten an enormous amount of resources--to the point that it has become one of the largest projects IBM has ever executed. The processor team is rather large, so I doubt anyone at IBM would consider this chip ho-hum by any measure. While the industry and others are moving towards SMT architectures rapidly, for once in recent history, IBM seems to hold the current lead.
  • Well, considering that IBM researcher John Cocke invented RISC... (scroll down to 1980). Also do a keyword search on 'Ted Codd'. Guess where he worked?

    I have one word for you: ROMP. Well, I suppose it's an acronym.

    I used to have several IBM RT-PC model 135 machines; 16mb of ram, very classy. The model 135 was the most advanced RT developed, and it wasn't very exciting by the time I got them - of course, when I got them they were very far behind the curve. Even so, that machine supposedly cranked 5.6 MIPS (Meaningless Indicator of Processor Speed) and had an 80ns cycle time, so it's a bit faster than a 33mhz 486 as far as MIPS ratings go, yes? You could run AOS 4.3 (IE, IBM's port of BSD-4.3-lite) on them, normal BSD 4.3-lite, and then someone ported BSD 4.4-lite to them somewhat later, by which time the only RT-PC systems were legacy, and mostly running AIX 2.2.

    Apparently, someone is trying to organize a port [daemonz.org] of OpenBSD to ROMP. There's an article on the ROMP architecture (which included a full MMU) in IBM Systems Journal [ncu.edu.tw], Volume 26, Number 4, 1987. Unfortunately, you have to pay IBM $30 for back journals, and the text isn't available via the web. There's also a RT PC FAQ [jmas.co.jp]. It has the following interesting tidbit:

    The RT PC Advanced System Processor has a 32-bit Reduced Instruction Set Computer (RISC) architecture developed by IBM and implemented in a 1-micron CMOS technology. It has sixteen 32-bit general purpose registers and uses 32-bit addresses and data paths. The microprocessor is controlled by 118 simple 2- and 4-byte instructions. An IBM-developed advanced memory management chip provides virtual memory address translation functions and memory control. It provides a 40-bit virtual address structure capable of addressing one terabyte of virtual memory. Internal processor organization enables the CPU to execute most register-to-register instructions in a single cycle. The model 115/125 RT PC with their FAST ECC memory, is capable of providing the processor with a 32-bit word of data plus ECC each 100 nsec cycle. This memory consists of 40 1-megabit IBM RAM chips. These chips are the same megabit technology used in the IBM 3090.

    Ahh, gotta love 'dat internet. There's also a RT Hardware FAQ [jmas.co.jp] which has the following to say:

    The architectural work started in late spring of 1977, as a spin-off of the T.J. Watson Research 801 work (hence the "Research" in the acronym). Most of the architectural changes were for "cost reductions," such as adding 16-bit instructions for "byte-efficiency"--a main concern at IBM at the time.

    That's fairly entertaining.

  • No, it's an option. When it comes to computers, options are good. (honestly, it looks like a newspaper headline - that's a bad bad bad thing.)
    It's not a fact or an opinion, it's a question. Note the terminating punctuation.
  • "Update Mach" and that's all, huh? I would think that they would need to update the Cocoa and probably the Carbon layers also.

    Anyway, with Intel and AMD coming, "64-bit" is going to be a key marketing requirement for desktop systems in the next few years. (If it sells game consoles, it will sell computers.) Apple, however, has to hold off for practical reasons until they get their base moved to 32-bit OSX.
  • IBM just dropped 5 billion on new fabrication plants. IF IBM wanted to *own* the OEM contract for all of Apple's processors, they'd only have to produce their flavor of a PPC7400.

    I predict they will within a years time, and at speeds comparable to Power4

    Of course, they don't really have a use for it personally, inside of IBM, because they'd have to develop a new RS6k architecture to support it. And do you really think that someone with the savvy (recently reacquired savvy, but savvy nonetheless) that IBM contains wants to get involved with someone as comfortable screwing their industry associates as Apple? I kind of doubt it. IBM has no problem providing the G3 chips because they use them themselves, so if Apple suddenly said "Today we decided to stop buying these chips from you and get them all from motorola when motorola can actually get their shit together and produce them" then IBM would just say "That's cool, fuck you too and good luck buying anything from us at a reasonable price ever again."

    Meanwhile, it's curious that someone with the silicon ability of motorola can't meet Apple's demand for the less-used G3 chips at this time. I tend to wonder if that's because they've geared up for G4, or if it's because they don't want to depend too much on Apple.

  • Microsoft made an Alpha version of Windows NT a while ago. It sucked, but they did do it.

    They did a PPC version too, then dropped out of that particular rat race when they discovered that people didn't get behind PPC like IBM/Moto had hoped, and there was a distinct lack of both CHRP and PREP platforms out there. There were two standards, and even if they were put together there wouldn't be enough hardware to bother supporting NT on.

  • Their Windows based database for web content seems a bit lacking in power. I just received this error from the site. I haven't even been able to finish the article yet. fscking Winblows

    The error you received

    [Microsoft][ODBC Microsoft Access Driver] Could not update; currently locked by user 'admin' on machine 'CFX1'.

    indicates that someone was doing maintenance on the table in which the data is stored, probably updating a typo or something similar in the story.

    The only crime committed here is by a stupid programmer who doesn't know how to redirect you to a less-lame error page than the default.


  • by NortonDC ( 211601 ) on Tuesday October 17, 2000 @07:37PM (#697913) Homepage
    The article is very strong, but it would have been enhanced if it also touched on AMD's upcoming 64-bit offerings, the Hammer family of chips.

    Hammer does not have a track record in the marketplace, but neither does Itanium, and it's odd to ignore an architecture that in all likelihood will sell in much greater volume than several of the chips profiled here. Even if AMD's 64-bit implementation turns out less than ideal, it will probably outsell the Power, Alpha and Sparc offerings by virtue of the vastly larger market it targets.

    A simulator for a Hammer chip has been released. A comparison, or at least an acknowledgement, would have made the article more valuable.
  • The trouble with Apple switching to IBM's Power chips is that they would have to drop support for AltiVec [apple.com], which is actually a very cool technology. 128-bit vectors that allow all sorts of nifty SIMD operations. They aren't kidding when they call it a "supercomputer". I would hate for Apple to have to give up on AltiVec.

    That said, I don't know what Motorola's plans for a G5 are, if any. It may turn out that Apple has no choice but to go with IBM's chips after a while.

  • What do you think they compile Mac OS X with? The changes aren't yet merged into the official GCC tree, but they are out there.
  • by Detritus ( 11846 ) on Tuesday October 17, 2000 @07:41PM (#697916) Homepage
    If you want a "real processor", then you better be prepared to cough up "real money".

    PPC chips are optimized for cost, POWER chips are optimized for performance, screw the cost.

  • Microsoft's OS dominance is limited to low-end desktop systems. In the server arena they are but one player among many.

    Lee Reynolds
  • by Anonymous Coward
    Alphas are not dead. A few days ago, Compaq signed a major deal with Ericsson to supply them with Alpha processors for Ericsson's upcoming AXE switchboards.

    Imagine, embedded Alphas!

    As this is a major deal, Compaq will have an outlet for years to come, and the Alphas seem far from dead, or even threatened.
  • Not if you do the decode in the overclocked microprogram. The ROM "knows" how long an instruction is, and thus where the next instruction after it starts.
  • One of the biggest reasons that killed PReP and CHRP was that IBM said that having a parallel printer port was mandatory and Apple said that not having a parallel printer port was mandatory.

    Isn't it amazing what entrenched bureaucracies can do to each other (with the users as innocent bystanders)?

  • It didn't run so extremely well. x86-win32 binaries on alpha-nt ran slower than on the equivalent x86 hardware, stability also suffered, and not everything worked. On top of that was the fact that native alpha-nt software didn't start appearing, and NT had to pretend your alpha was a 32 bit machine to run. In other words, there wasn't really a good reason to choose NT.
  • Sadly, I have to go offline for a while. Not ignoring anybody, just not here. Sorry.
  • It's true the Power4 isn't designed for graphic artists or desktop users. The cost is also not in line with that market. However, Apple _could_ use the discards. Power4 chips where one of the CPUs is non-functional. It will still be fast. It will still be expensive. However, it won't be nearly as expensive, and IBM would have tossed those chips anyways. This way IBM recoups more $ per wafer. As long as the non-functional CPU can be disabled and not use any power, I think a 1 way Power4 would be a great personal workstation CPU.

    Another approach is to just port Darwin to RS/6000's.
  • I see mebbe Embedded Linux, 'Official' (Linus) Linux, and Big Iron Linux as 3 forks that could happily live with each other

    But marketing would have to think of a name for the embedded kernel and the big iron kernel; as Linux® is taken by 'Official' (Linus) Linux.

    Linux is a registered trademark of Linus Torvalds.
  • IBM certainly does have some interesting hardware; the Power4 CPU looks likely to be pretty competitive with its Alpha and SPARC rivals... And IBM certainly has some clue on how to construct high end servers, with the attendant I/O processors and such.

    On The Other Hand, Apple's direction has lately been toward convection cooling, not the water cooling that would be necessary for the number of watts the Power4 dissipates.

  • I have one word for you: ROMP. Well, I suppose it's an acronym.

    Hehe, that's a smidge before my time on IBM RISC hardware.. I started with a 7012-340 workstation at about the time the 390 was released.. What a wonderful frankenbeast that was (ripped a sabine adapter out of one system and used it until AIX v4, various 8mm, QIC, HDDs chained off, 16mbps token ring..)..

    Though I did work with a dude who was on a team that developed a mainframe-on-a-card for PCs. Not terminal emulators (like Wang cards), but an actual System 370 system that fit into an AT slot used for software development.. PC/370 IIRC..

    Your Working Boy,
  • SMT sounds good, but I think HP/Intel's EPIC will ultimately turn out to be the most elegant solution, when they finally get it right. While IBM and Compaq are throwing more cores at the problem, HP/Intel are trying to improve the core itself. You can always put multiple EPIC cores on an MCM and do the SMT thing too.

    The problem with compilers and EPIC is that current popular languages like C, C++ and Java are not designed for parallel machines. There are extensions that add explicit parallel keywords to these languages, but they're all non-standard. Anyway, explicitly stating which statements can run in parallel increases the application programmer's burden. And extracting parallelism from an imperative language puts a helluva burden on the compiler writer.

    Now, languages like Lisp or Scheme should lend themselves well to the EPIC architecture. It's easier to extract implied parallelism from a functional language because more of the scheduling decisions are left up to the compiler. A Scheme-like low-level language should produce code that screams on EPIC.

    I think HP/Intel are in unfamiliar territory -- both on architecture and languages. Once they gain that experience, later generation EPICs should be phenomenal.
  • It WILL be great when Apple can start buying CPUs from IBM again. Moto has dropped the ball in a big, big way by failing to raise the G4 and G3 clock rates significantly over the last year. Apple is getting screwed, and they have no options for an alternate vendor at this point. I can certainly understand Jobs's frustration, but you would think the guy would have grown up by now. It's really sad seeing a so-called businessman acting in such a childish manner.
  • Actually, due to some architecture issues (read the last few months of comp.arch) with the 68K series, it is doubtful that Motorola could have scaled the 68K the way Intel has the x86. From what I understand, it has to do with the 68K having too many indirect addressing modes that depend on the value of a memory fetch to find the real data. This problem complicates the pipeline by forcing multiple dependent load stages.

    I agree, though, that Apple may have screwed up going to the PPC, but any Apple customer should have been aware that Apple has a track record of dumping its existing base of users as soon as a great new technology comes along.

    It's sort of interesting to note, though, that Transmeta is actually swinging the 'CISC is better than RISC' argument around again. Even though everyone claims that the P6 is a 'RISC' arch, it's really not true in the real sense of the word. Original CISC machines had 'translation' units that translated the ISA to an internal micro-op execution engine. This is actually what is happening on nearly all processors now. Most of the 'RISC' processors acquired a number of 'CISC'-like instructions in the mid-'90s, which are turning out to be incredibly difficult to completely implement with hardwired SI and still maintain high clock rates. So instead they break these instructions up into more basic operations and feed them through the pipelines. Originally, 'RISC' was designed to do LESS work per cycle but have a higher clock rate and therefore get more work done.
  • by Anonymous Coward
    Then why not use Power4 for servers and PowerPC for gamers/home use.

    How many servers need AltiVec? None. I'm sure an 8-CPU Power4 box would outdo any AltiVec desktop.

    gee... what an easy solution, now send me the $10,000 for it.

  • Heh! If only Apple would use these, the new iMacs wouldn't exactly be quite able to hit their price points. Paul (the author of the article) and some others were involved in a thread over on the tech forum at Ace's [aceshardware.com] about (amongst other things) the expected cost of one of these puppies.

    To quote Paul's response [aceshardware.com]:

    Maybe another way of looking at it is perhaps the price of four POWER4 known good die and the ceramic substrate and metal carrier totals $3000 (although I suspect that a tested and 100% functional ceramic substrate itself might approach or exceed $3000 in cost).

    The real question is the cost of a fully assembled and tested, 100% functional, POWER4 8-way module? After all what are the chances one of these can be reworked if even just one of the 20,000+ solder ball joints was bad?


    So for one of these 8-way on a chip jobs (unsure if they'll be offering 4-way configurations too or if those were just a prototype) it's looking like upwards of $10,000 just for IBM to fab, package, and test the darn things. Add in a system capable of feeding it the tremendous bandwidth it requires to run up to its full potential--8 GB/s to DRAM and a phenomenal 84 (!) GB/s I/O--and...ok, so I know Hemos was just joking when he made that comment about Apple, but you get the idea. These are MPUs you use to fold proteins and run gigantic dynamic-content websites, not surf the web and edit the home video of your kid's elementary school graduation.

    On a related note, man these things oughtta show Intel a thing or two about how to marry clever instruction scheduling to brute-force functional units--forget about Itanium; it's gonna take a several-way McKinley system to even take a swing at these. And it oughtta show Sun a thing or two about the dangers of resting on the laurels of your marketing success when designing new chips. And, as Paul notes in the article, it really oughtta make Alpha engineers worry that for the first time, having the most elegant design may not guarantee the best performance. Compaq has an 8-way SMT Alpha core on the way as well (EV8); too bad the Alpha group's customary position in the world--stepped on and neglected by their corporate masters--means they haven't got the money or manpower to bring it to market until well after POWER4.

"No, no, I don't mind being called the smartest man in the world. I just wish it wasn't this one." -- Adrian Veidt/Ozymandias, WATCHMEN

Working...