
Will PPC Become the Preferred Linux Platform? 276

grunkhead writes "Stephan Somogyi, aka the Darwinist, at MacWeek has an interesting take on Linux on the PowerPC in the wake of IBM's release of a free motherboard design for the G3/750, suggesting the PPC could be the preferred Linux platform of the future. "
  • by Anonymous Coward

    RISC : An advertising term concocted by a group of researchers at Stanford in an attempt to positively differentiate their new processor design ideas from the complex, powerful designs popular in commercial machinery.

    CISC : An advertising term concocted by a group of researchers at Stanford in an attempt to negatively differentiate popular, more powerful commercial processor designs from those of their new, small instruction set designs.

    Comment: Eventually RISC processors exceeded the power of their so-called "CISC" counterparts; however, it took time and significant research $$$ to make up for the head starts that the then-popular "CISC" ISAs enjoyed. Both terms now retain any real meaning only in advertising departments and processor design cults.
  • by Anonymous Coward
    The PPC may become more of a factor in the desktop Linux market, but Alpha and sun4u aren't going to disappear. No, not by a longshot. Compaq has ported its Fortran90/95, C, and C++ compilers and debugging environment to Linux. MetroLink and XiG have ported hardware-accelerated OpenGL servers to AlphaLinux. id has hinted that AlphaLinux will be a supported platform for its 3D shooter games. Add to that the fact that the most powerful Beowulf clusters (cf. the Top500 supercomputer list) are built on the Alpha platform, and you can see the Alpha is not going away any time soon. Rumor has it that Alpha Processor Inc. will be introducing a complete 533-600MHz Alpha system for the sub-$1000 market. SUB-$1000. If you want to talk about affordable powerful Beowulf clusters, don't think PPC. Just think Alpha. See for more information.
  • I must admit that I'm not too familiar with PPC at all... What's the performance/cost ratio like? What sort of compatibility issues are there with hardware and the like? Will a PPC box use my PCI video card?

    How does it compare to say SPARC or something?
  • You're forgetting the IA64 architecture here, and the M68k (like the Dragonball in Palms). There are probably other chips (vapor or existing) that aren't marketed as RISC, as well.
  • I just bought an ADS USB card and a Mouseman Wheel to replace my aging Mouseman (original version, USB). Works great on my 9500. Try one; you'll like it.

    (As always, #include <std-disclaimer.h>. Moof.)

  • s/USB/ADB/ for the first occurrence thereof.
  • You sure told him. He won't be spouting off with his falsities anymore! You go!


    "One World, One Web, One Program" - Microsoft Promotional Ad

  • (Offtopic):

    The RHWM's reasoning is flawed. They calculate the percentage of RHAT's market value created by "the community" from the percentage of the Red Hat distribution's code created by that "community." This is obviously a ludicrous measurement, as the majority of Red Hat's value is due to their management and marketing, not their code. If it were just the code, everybody would be buying Slackware and Debian GNU/Linux, and Red Hat would have no value.
  • No x86 is RISC. It may have a RISC core, but that's irrelevant. You program for the x86 instruction set, not the processor's core. The x86 instruction set is decidedly CISC.
  • You say you compile them "straight off of," but then you say that you have to edit one of the source files. Which is it?

    A program that won't compile without the end-user manually editing its source-code is a broken program.
  • You're confusing Mac users with PC users. Mac users typically really like their OS. I happen to love my Mac and its OS. Technical superiority is irrelevant, since Linux's "technical superiority" doesn't offer *me* any advantages. In fact, Linux's technical superiority is somewhat overrated, since many of these high-tech goodies rarely come into play in ordinary use.

    It's not like I'm going to program for the OS or anything; I use it. The MacOS is by far the most *useful* and *usable* OS out there. I mean what can I say, you just gotta try it.

    As for the hardware: it's good, very good, but that's not the reason I use a Mac. In fact this good hardware is a bit of a pain, since it typically requires a greater capital investment.

    A PPC-based Linux box is not in the cards for me, because Linux is generally not very useful. My guess is a G3-based PPC box wouldn't be much of a threat to Apple's Mac sales, since it couldn't run the MacOS (as it currently stands).
  • Not without the Apple ROMs and ASICs you're not.

    No way in hell.

    Even if you did have Apple ROMs and ASICs, you're still doomed, since the MacOS isn't hardware-abstracted, so any small change in hardware requires a new version of the MacOS.

    Why do you think the computers that are introduced after the latest version of the MacOS have enablers? The reason? Apple hasn't included support for that model in the MacOS yet.

    fun, eh?
  • If we gain the victory I'm anticipating, competition is going to be fiercer among chip makers than ever before. I'm currently running Linux on x86 hardware, but that's solely because it currently gives me the best bangs per buck for what I'm doing and what I want to spend. If that changes, I'm entirely happy to shift with it. All my data, and all my skills will come with me. All my network protocols will stay the same, so I can still interoperate with everyone else. In the end, the instruction set of your processor may come to matter little more than your brand of hard drive.
  • Nothing is stopping anyone from making "Linus-sux-ix" or something. If you're unhappy with Linus' work then do your own work and make things better....just make sure you GPL it. :)
  • I thought that stupid argument died out years ago. There are few "true" RISC systems still out there. The PowerPC surely isn't one.

    The "breakthrough" of the RISC design wasn't coming up with simpler instructions; it was breaking the CPU into independent subsystems that could work in parallel. Doing an integer operation at the same time as a floating point one, while loading data from the bus/memory, means three things are going on at once. A "simple" instruction set makes it more obvious where the pieces are, but is fundamentally irrelevant.

    When programming in assembly (the only time RISC/CISC is visible to a programmer), you want it to be as CISCy as possible; it makes your life easier. Think about

    C = A + B

    In an old assembler that is one instruction. Simple, easy to read (for assembly). In a load-and-store system that is

    load A to r1
    load B to r2
    add r1 to r2
    store r2 to C

    4 vs 1. You tell me which one you want to hand code in.

    Also, RISC pushes more of the work to the software, which is fundamentally slower than hardware. Let's do as much stuff as low as possible so our systems run faster. Adding layers sucks.

    Imagine a future with CISC instructions with pipelined cores. How one gets translated to the other is meaningless to programmers although an interesting research topic for the hardware folks. Think of it as a library. You just care what the interface is (the opcodes), let the library designer (EEs) handle the details of getting that interface to work well. Maybe some of the ease of the monster CISC stuff needs to go away to help out the core (trade-offs are part of an engineer's life), but as a rule CISC is better for programmers.

    Just don't let RISC vs. CISC be forgotten. We need bits and pieces of each one.
  • For starters, let me agree that fewer and fewer coders actually touch assembly as we let the compiler writers worry about such things. But I was following up a CISC vs RISC statement, so we're talking about that small minority. Most folks just use a higher level language and forget about the details (as well they should).

    You are absolutely right that if a compiler can break down an instruction into smaller pieces, the hardware has less work to do. This is pushing some of the work from run time to compile time. In general, this is a good thing.

    I just don't think that the savings are all that important, and there are things that cannot be known until run time. The compiler cannot do everything.

    Most modern microprocessors (and I guess the bigger ones too) have fetchers that read in instructions, break them down, and feed the pipelines. All the complexity of CISC goes away at this point. Instead of one instruction being pushed into a pipeline, two or more instructions get pushed (hopefully into different pipelines). This is the wonder of the superscalar concept.

    The fetcher gets a bit more hairy, as does some of the speculative branch handling (more things to invalidate), but my goal wasn't to simplify the hardware. Anyway, the hardware already has some of this, so it isn't like we're adding anything new.

    Conceptually, I prefer the idea of pushing as much of the info down as possible (CISC) so that the lower layers have a larger view of what is going on. Think of "peephole optimization" in a compiler: the larger the peephole, the better the optimization (within reason).

    Compilers are handicapped in that they often compile to the lowest common denominator. In the IA32 world, the instruction set hasn't changed much in years, so many people forget that not all CPUs in the same family are identical. Think about the older PowerPCs: 601, 603, and 604 (I'm showing my age, eh?). The 601 was a hybrid, so it had some odd opcodes. The compiler had options to generate code for one of the CPU types, or to target the lowest common denominator. Guess which one most applications used? Even if you stuck to the 603 and 604, life wasn't much better. I think they had identical instruction sets (it has been a long time), but I'm sure they had different pipeline geometries. It is not possible for a compiler to generate code that is optimal for both. That is why code that is optimal for a 386 is not optimal for a PII, even though the instruction set is the same. Details matter. The compiler can't know them all.

    As long as I'm rambling, this is the main reason I've gone anti-VLIW. If a Merced comes out with 4 actions per instruction, and a McKinley has 6 actions, how is the poor compiler going to optimize for both?

    Let the compiler parse, hack, and optimize all it can, but there are some things that cannot be known before run time, and we need to let the CPU handle them. I think that CISC helps this out.

    Have I made my argument any clearer this time?

    - doug

    PS: I cut my teeth back on the old M68000 family and it is still my favorite instruction set. I worked with PowerPC 403s for a while, which is where my coding RISC assembler background lies.

    PS2: To be honest, as long as the CPU isn't little-endian, I'm not going to get worked up over it. This is all just quibbling over details.
  • I'm not sure about "common" definitions, but for me the RISC idea is one instruction per cycle. That isn't explicit in the name, but that is what was pushed as RISC in the early days. When RISC became a buzzword, everything became RISC. Most computers today have multiple pipelines, out-of-order execution, speculative branching, and so on. This isn't what I consider true RISC. If my definition is showing my age, I'm sorry.

    I am fully aware that newer IA32 CPUs (PII, K7, etc) have "RISC cores", and I think that this is the way to go (although I dislike the IA32 instruction set). RISC makes no software person's life easier, and CISC does. I don't care so much about the EEs doing the hardware, as I'm not one (yes, I'm callous).

    You are right that "RISC is supposed to make the hardware solve the same problem in a smaller amount of time than a CISC design in the same process with the same constraints (cost, power..) and the same amount of development money." The gain in RISC, though, wasn't in the instruction set per se; it was in letting the different parts of the CPU work in parallel. That can be done in CISC too, although some of the most complicated instructions may need to go away. C'est la vie.

    Newer CPUs require that all instructions be the same length. This is required for RISC, but not for older CISC machines. Most modern CPUs (hybrids, the whole lot) have this restriction to simplify the fetcher's job. It's a fair tradeoff: some funky instructions go away, but it's worth it.

    I like the observation that "Some say it's because pre-RISC CPUs were designed for assembler programmers and RISC CPUs are designed for compilers." It's more that most programmers would go crazy coding big stuff (whole applications) in RISC. Compiler folks are already crazy, so it's no loss. Fortunately, most of us code in C or something else, so this is a moot point.

    As for your argument "The flaw with this example is that you assume that we won't use C for some time", that's true in that case. Let's try a different example: think about a stack pointer. It is a common tactic to

    move value => (--stackpointer)

    There are three primitive instructions here: 1) change the stack pointer register, 2) get the value from memory, and 3) store the value in memory. It is quite possible that "value" will not be used again in the near future, so saving it in a register is useless. Obviously, as registers become more common, there is less motivation to conserve them, so maybe this isn't important.

    - doug
  • by smartin ( 942 ) on Tuesday August 17, 1999 @08:20AM (#1742328)
    This raises the question: how much do you care what kind of processor you are running? The answer has to do with whether Linux does an adequate job of hiding platform differences, so that porting a piece of software to a different machine is just a matter of a recompile. If it is easy to build an application on any machine, and most applications are distributed in source form, most people will probably not care what kind of machine they run on, and PPC machines will be much more popular. On the other hand, if most applications are only available as binaries, and it takes a great deal of effort to port the code and QA it, then alternative machines don't have much of a chance.
  • Neither of which are really RISC anyhow -- if you want RISC, take a look at ARM (the Alpha dies and the PPC dies are way too complicated to pass as RISC...)
  • So who remembers the 601, 604, 620 progression that was the plan when PPC's were introduced? The 620 was supposed to be 64-bit, does anyone know if a 64-bit PPC chip is in the works? -o
  • Since Be and Apple have not been getting along so well recently, this gives Be an Apple-free platform to run on. Anyone know if they're on it?
  • I will personally beat the shit out of Jean-Louis Gassée for lying through his fucking teeth for so long while cashing his checks from Intel.
  • I disagree, RISC is significantly different than CISC and it has proven itself to be so much better that x86 is copying it.

    The main difference now is that RISC chips generally won't let you do things like add from a location in memory without loading the value at that location first and x86 will still let you do that. If you want to write fast x86 code you will write it like you would write RISC code. The philosophies are very different still. Intel has just been good at adopting the ideas that IBM, Mips and DEC put out first.

    What RISC and CISC don't mean is a way to measure performance, that's why the marketeers use it but most users and probably even a lot of programmers don't know and don't need to know the differences.

  • I think this is just more good news. For the time being, neither Alpha nor PowerPC is going to be your average geek's machine, let alone your average user's, but the competition is good.

    IBM and Motorola are in a curious position: they have developed a good, modern, high-end processor, but because of the cost factors associated with PCs today they are having trouble pushing as many as they'd like. Likewise, both have invested enough in, and depend enough on, the architecture that they can't kill it. Free specs and cheap mobos only bode well.

    Look at the netdriver: very cool, very sexy, very expensive for what you get. If they could cut a few hundred dollars off the price, you'd have a top-notch internet appliance, a serious iMac competitor. I think the rationale from IBM could be one of two things. It could be goodwill: they had something they didn't need, so they went public with it. Or it could be that they think that if PPC mobos drop enough in cost, they can compete with Intel and AMD on a manufacturing-cost basis, and as PCs continue to drop in cost, the freeness of Linux will begin to play a huge factor.

    If you're building Linux-based internet appliances, hardware cost is your only problem. They are already committed to making more and better processors, and will be for some time to come. It's a good move on their part, and I think the community will benefit too. If I could buy a PowerPC chip and mobo for just a little more than an Intel, I'd probably do it.

  • It's not like the code's locked up or anything. Just follow the current kernel tree and make the patches to the PPC arch. If Linus feels like adding it to the official tree, it will happen. If you don't like it, you have the source, don't you? I'm not advocating a fork here, mind you, but patches seem to work just fine.
  • It all depends on how well written the code is. If you actually use htons() and you are careful about endianness issues, the port from ppc to x86 or vice versa is trivial (assuming the libraries are equal). I have found porting to ppc much easier than porting to alpha, due to the fact that they are both 32-bit processors.

    The real question is how well written the original software is. Odds are if you can port it from x86 to sun4, you can port software to ppc with a simple recompile.
  • If you want it portable, then C is not always the best language. (Yes, I have to admit that, despite being mainly a C programmer.)

    Perl is a good choice. So is tcl/tk, or any other cross-platform scripting language.

    If you want cross-platform binaries (for those closed-source addicts), then use Java.

  • If AMD invested $1 million in Stampede, and $1 million in GCC/PGCC, had them optimized for AMD's chips (like, say, .k6.slp, k6-3d.slp, and k7.slp packages for specific tuning), and assigned a single reasonably intelligent employee to handle documentation releases to the Linux community for all their other spiffy processor instructions...

    Then, maybe AMD would really blow the doors off of Intel ;-) And for a cost much less than $200 million.

  • by BadlandZ ( 1725 ) on Tuesday August 17, 1999 @08:58AM (#1742340) Journal
    The popularity of any CPU for use with Linux will probably be largely determined by how well the company pushing that processor supports the gcc project.

    Intel's own compiler for the Pentiums is very good, but GCC is also great for x86, so it's popular. The commercial DEC (err... Compaq) compiler really rocks on Alphas, but gcc isn't nearly as good for Alphas as the commercial compilers. So Linux/Alpha isn't nearly as popular as you would expect it to be (given that the sheer performance of the CPU is masked by the results of the compilers).

    I have no doubt in my mind Linux will run on almost any platform, the Linux community is very very active in getting the OS ported to new hardware.

    I have doubts that PPC will become popular. If Motorola or IBM puts some money, work, and support into GCC, then the G3s will really rock under Linux. If they don't, it'll just be "another" platform that Linux runs on, but nobody really uses (much like Alpha is now). Before you consider this a flame, check benchmarks of commercial C and Fortran compilers for Alphas against benchmarks for gcc on Alpha. Then notice that there are a lot of people who would consider Linux, but end up buying a commercial OS and compiler for their Alpha instead.

  • Merced isn't RISC either.


  • I've been using Linux on PPC for quite a while now. And let me tell you something: as long as Linus uses x86, and doesn't mind breaking the kernel for all other platforms except his own, this won't happen.

    The so-called stable 2.2 kernel that claims to support PPC won't even compile on PPC! And that's not because there are no patches; it's because Linus refuses to include the patches before releasing a new kernel. He even intentionally broke support for some platforms, as happened in the 2.2.3 kernel! If you want a kernel that actually compiles, you'll have to find the (undocumented) directory on vger and check out the tree with an (undocumented) CVS tag. Just forget about going to, it won't work.

    Here's a hint: CVS _does_ work. Delegating work to other people _does_ work too. Do it for the main kernel. Now.

    Linux on PPC has a great future, but not as long as some bonehead on the top is blocking it.
  • Well, I didn't say it's impossible. I said that it doesn't compile out of the box, and that the fixes (which I've posted to linux-kernel, which is the official way to do so) don't make it into the kernel, even when they are _so_ trivial that they are obviously correct.
  • Well, that's my theory at least. I expect that as more people buy PPC machines, they will consider a Macintosh because it is also a PPC machine. After all, that logic applies to why people buy Intel PCs instead of Macs, or why they buy Intel PCs instead of AMD PCs. Consider the following not-so-likely-but-good-enough scenario:

    A customer buys 100 PPC boxes to run 100 web servers. Now he needs a client desktop. He'll consider a Mac more than before, because it's also PPC: "Just in case those Macs don't work out so well, I can turn them into Linux boxes like the ones I already have."
    Timur Tabi
    Remove "nospam_" from email address

  • Did Digital not do _Exactly_ the same thing with the Alpha, but hey, I could be wrong...
  • If Apple hadn't killed the Mac clones, the IBM CHRP LongTrail, including a 604e at 225 MHz, would have cost 450 USD (in quantities of 1000) in September 1997. I paid 800 USD for my prototype board, which was damned cheap compared to a comparable Pentium II, and of course I run Linux only. The LongTrail used off-the-shelf components, and I guess the new reference design is a further evolution, using a G3.
  • How much do you care what kind of processor you are running? The answer has to do with whether Linux does an adequate job of hiding platform differences so that porting a piece of software to a different machine is just a matter of a recompile.

    Linux already does that, and does it very well. I can compile virtually everything on my Sparc Linux box just as easily as I can on my Intel ones. The only exceptions are the few dolts that assume Linux == x86, and do things like include x86 assembler for a few routines ("for performance"). That's all well and good, but it makes your app gratuitously non-portable, when it needn't be. autoconf should be able to detect the platforms for which you can substitute fast hand-crafted assembler for slower but functionally identical C routines. That gives you proper portability with performance benefits on certain platforms. Either way, 99% of apps that use autoconf just compile straight out of the box on all my Linux platforms.

  • I'm currently running Linux on x86 hardware, but that's solely because it currently gives me the best bangs per buck

    Yep, couldn't agree more. Virtually everything else out there is superior in terms of design, build quality, etc., but when it comes down to it, market pressures have forced PC prices down so much that everything else is just not good enough value. I love my Sparc to bits; PCs don't even come close to the simplicity and elegance of its design (why, oh why, haven't SCA drives become commonplace in the PC world?). However, your average punter isn't going to spend money on a decent RISC machine to get the same performance as a PC costing half as much, no matter how good the build quality. At the high end, pricing is closer to parity, but that's mostly due to Intel's extortionate pricing of Xeons so they match equivalent Sparc / Alpha / MIPS offerings.

    If there was a cheap PPC option, I'd almost certainly go for it. That said, I'd still have to keep my x86 boxen to run those binary apps that don't yet have an open source equivalent of sufficient quality.

  • 'That said, graphics performance is critical to games. Unfortunately, Linux support for contemporary Apple graphics hardware, which is based on ATI's Rage Pro and Rage 128 chips, is nonexistent. So I was hugely pleased to hear from ATI last week that it is working with select external developers to create accelerated drivers for PowerPC-based Linux.

    Being the paranoid sort, I asked whether ATI would permit the resulting drivers, whose development would be based on detailed -- and presumably NDA'd -- information, to be open source. The answer was a definite "yes." '

    I don't know about you all, but it's finally happening- all of the 3D vendors are getting clues by the bushel load and they're making drivers happen.
  • How can choice make it worse???

    Choice seldom makes it worse for those doing the choosing (Linux folk in this case). Choice does make it worse for those being chosen. The PPC would be in much better shape if it were the only game in town.

    In this case I think it is hard to not choose the alpha if you want maximum single CPU speed, or the x86 for minimum price.

    I'm not sure what gets you the best bang for the buck. I expect the ARM gets the best bang per milliwatt (even the new 600MHz Intel ARMs run fairly cool).

    I honestly don't have a clue what the PPC is best at, other than running PPC code!

    You're starting to sound like those Windows guys that espouse the superiority of MS because there are more games and it's a standard.

    I don't see why you would think that. My original post didn't espouse anything (except maybe cheap SS7 motherboards). This post has a bit more espousing in it. However, you should note I never espoused anything merely because it was popular. I did say PCs were cheap, and it would be hard to beat them on price. However, that is (a) true, and (b) not a popularity argument.

    You bring up a good point regarding portables though...

    Thank you. I may, of course, be over-excited about the iBook merely because of my fondness for 802.11.

  • by stripes ( 3681 ) on Tuesday August 17, 1999 @08:31AM (#1742351) Homepage Journal

    Frankly, I don't see why a cheap PPC motherboard is going to make a huge difference. PC motherboards are quite cheap, under $100 for a Super7 motherboard. So if the PPC is going to compete on price, it has a long row to hoe. A free design doesn't mean free motherboards; in fact, the free design might not be as cheap to make as some of the more mature PC motherboards!

    The fact that Linux is more or less processor-agnostic just makes it worse. After all, why go for a PPC rather than an Alpha just because there are vague rumors that the Alpha will gasp its last any year now? I mean, if switching to a new CPU is so easy, why not use the Alpha until it actually gasps its last? (Assuming, of course, that the Alpha is faster, which the SPECmarks seem to say, and cheap enough.)

    The only real argument I could see for using the PPC is if it (the PPC) actually made it into nice cheap machines, like maybe the portables (they seem relatively inexpensive for what you get, but I haven't looked at PC portable prices, so I may be in for a shock). Actually, that isn't the only argument. It would be interesting to see Linux on one of the big PowerAS machines... nicer still to see it hosted under VM on a 390 (but that's not a PowerPC).

  • Sorry. Not woo-hooing about the article; I'm just still excited about the IBM announcement.

    I hope this blossoms, and we have a REAL price and performance war between x86 and PowerPC, so we'll all benefit from better execution, not just cheap, hot-running Intel processors.

    I really don't think Compaq will pull it off with Alpha Linux... their leaders needed vision on this a LONG time ago, and there's too much internal bickering and backstabbing. SGI was SMART when they ditched their NT division. When you make and sell an OS or OSes, who wants departments with loyalties divided with the competition?

    Some folks I know did an "R.I.P." on SGI when they cut off their NT division, but I think this was smart.

    You can say you think I'm smoking crack, but I think Jobs has already laid down some groundwork for Apple to become a Linux company whenever it becomes necessary. (If you doubt this is possible, think about how difficult it would be for OS X Server/Consumer... not at all, and it would be one giant fsck-you to Bill Gates in the history books... :)

    Anyways, more CPU support in Linux is better. I agree Motorola and IBM better commit some resources to GCC if they want to be taken seriously - it's a relatively small problem to solve.

    I'm still completely blown away that Loki's supporting all the Linux games on PowerPC. This is something I hoped for and banked on happening about 1 or 2 years after Linux became viable for commercial games... not MONTHS, as it has turned out. Linux is looking more and more unstoppable.
  • You are apparently not aware of the fact that people do build complete cars themselves. It's a hobby. If you had actually read the original post in your hurry to be a smart ass you would see that it said nothing about price. The poster wanted to build one for fun (presumably just because it's possible), just like people build cars for fun.
  • As Linux grows in popularity, the commercial vendors are bringing their "binary only" ways to the platform. Thus, suddenly that "quick recompile" stops being an option.
  • Thus, the developer chooses the platforms for which he releases binaries. If he doesn't have access to, say, a Sparc, he will probably not release his software for Sparc/Linux, since he can't test his binaries. That's cheap, but that's the closed-source way.
  • I love my linuxppc box, fast little bugger. They need to fold the PPC code into the 'official' kernel.
  • The hardware architecture Linux is running on does matter. I've been running Linux on AXP and SPARC for quite some time now. They both have a smaller user (and developer) base than the Intel platforms, and it shows! The network code is far from being as stable as on Intel. A number of applications won't compile on AXP at all because they aren't 64-bit clean. Others will compile and even run, but crash after a short while. (Thank goodness, that isn't true for most of the KDE apps.) If PPC gains a really large user base, it will mean that Linux/PPC becomes more stable and reliable. If only that were true for AXP!

  • I don't see why MacWorld would want to talk about it.

    IBM's move isn't going to help APPLE any, is it?

    I mean, seriously, aesthetic considerations aside, people tend to like Apple hardware but hate the (technically speaking, now...) OS. If this does spawn a clone war, Apple could be screwed.

    What if individual companies try to sell G3 boxes with more features and better price points than Apple? Apple can't very well revoke licensing or buy out the competition THIS time...

    Who here would buy a G3, considering the architecture and processor power, if you didn't have to subsidize Apple's OS, which you're not going to use anyway?

    I know I would.

  • BeOS hasn't been able to run on G3s, to this day. But with this "open" setup here, can Be make BeOS work on better PPC hardware? I think the question isn't so much whether Linux will become more prominent on PPC than x86, but whether Be will become more dominant on x86 due to information apple wasn't giving them.

    Be's explanation of why BeOS doesn't run on G3s. []

  • Re: The need to fold PowerPC code into the 'official' kernel.

    Linus tries to keep the Linux kernel as fair to all platforms as possible -- but unfortunately he is a very busy man, and sometimes he loses or messes up PowerPC patches, just like he sometimes messes up Alpha or Sparc ports (although he typically doesn't ship stable broken i386 versions ;-).

    Personally, I don't find it acceptable to ship stable production kernels that are broken -- stable, to me, means it works as promised (if you want a broken kernel, get 2.3.x)!

    2.2.0 had support for the PowerPC -- but recently Linus and the PowerPC kernel developers, whose patches didn't get in on time, had some issues. That is unacceptable for a stable kernel -- but I guess Linus doesn't think it's important enough to make sure a kernel is 100% stable before shipping it marked stable.

    One more thing to note: Linus's tree might be good for some -- but it's highly recommended that you get your platform's specific stable kernel (such as vger-ppc 2.2.x or vger-alpha 2.2.x).

    So for the last time, shipping defective / broken code in a 'stable' product is just unacceptable.
  • Yes, the PowerPC 750 (which Apple calls the G3 since it 'sounds cool') is designed to be the successor of the 603ev -- not really fast, but good enough for a desktop system. That's why most of IBM's low-end RS/6000 systems still use the 604e; it beats the hell out of the G3 at the same clock speeds, especially at FP.

    Apple dumped the 604e from their line because Apple wanted to make the PowerMacintosh line cheap, simple and easy. So they all use the same processor (a consumer one), logic boards that are very similar, and the cheap PC RAM that's standard on all current machines they sell.

    Apple (likely) won't go G4 until they discontinue all of their G3 systems -- and that may be a while.

    This helped them reduce inventory and become lean and mean -- no extra baggage.

    Of course this pissed off high-end PowerMac customers -- the machines are either too slow or lack too many PCI slots to be useful. But it made the iMac possible, and cheap for Apple -- though it came at a cost.

    PowerPC would be useful in the portable market -- except for one big problem: no CHRP portables were ever made -- they are all big desktop machines. Maybe somebody can design a portable machine....
  • Forget about desktops - what customer base large enough to make it economically viable would that appeal to? But low-end PPC boxes like the RS/6000 43p-260 or an F40 running Linux instead of AIX would make an awful lot of sense to someone who's used to managing these boxes and paying for the licences. Yeah, sure, the FPU isn't up for quantum chromodynamics or floating body problems, but throw one of these 2-way or 4-way boxes at some corporate application like Domino
  • Apple's new iBook, running on a new copper PPC 750 (aka G3), has no need for a cooling fan. When running, it runs only slightly warmer than room temperature. This is due in part to Apple's new product-wide UMA (Unified Motherboard Architecture), a.k.a. "Open World" design, which uses only one primary controller for USB, FireWire, PCI, AGP, IDE/66, etc.

    Ahhhh, LinuxPPC would smoke on this box... k.shtml
  • Even if someone finds a buffer overflow in a package that you have, finding an exploit becomes a lot harder. :-)

  • Alphas are fast but expensive; I haven't seen any figures, but I bet for the same amount of money, the detested Pentiums give more performance.

    That's the reason to be interested in PPCs. Low prices come from volume production, and Alphas just ain't got it. PPCs have a start in that direction with Macs, and it's possible that adding Linux would boost production enough to keep the price low. Since PPCs (like Alphas) make better use of silicon die space, they have an inherent advantage over any x86. It just takes volume production to realize that advantage.

  • Actually the current x86's are RISC (mostly). Most of the old x86 instructions get decoded into RISC ops, then processed in a RISC fashion.
  • The only significant drawback to choosing an Apple Powerbook G3 (series 3, Lombard, bronze, whatever) as my next Linux laptop is the delivery wait that I'm currently enduring. I waited long enough to see that LinuxPPC was running on it, and then ordered one (the 400MHz model). I then thought better of a big download and ordered a LinuxPPC CD. The Linux CD arrived long ago, the laptop has yet to arrive.

    I'm not too concerned about the lack of a three button mouse. I think that the Linux environment I'll be using is configurable enough to make up for that. Pasting text is one of the few things I'd need to adapt to. Can't say I use the right and middle buttons on this Thinkpad 770 for much more. I might as well be pressing a keyboard modifier for my speed with any other operation.

    I am perhaps fortunate in that I don't have an investment in architecture-dependent binary software on Linux. Since I don't plan to run either office-typical suites or big relational databases on my laptop, there's no problem there. I'm happy to see that there's both an x86 and PPC Linux version of Xing's MP3 encoder, so I'll be able to build up my MP3 library as I use LinuxPPC, using modern, licensed codecs.

    Hardware-wise, the G3 Powerbook kit is ahead of the game WRT most x86 laptops. Anything that comes close is at least as expensive. The real plus is that when it comes to running mainstream proprietary applications, the MacOS is a pleasure to use, and Windows is a chore. If, like me, your use for a "toy OS" is to run media-creation software rather than play games, the MacOS and hence PowerPC is a better place to be.

    Finally, it's *way* cooler to be running a hacked-together LinuxPPC on a G3 'book than a stock RedHat on a Thinkpad.
  • by MAXOMENOS ( 9802 ) <maxomai@ g m> on Tuesday August 17, 1999 @08:25AM (#1742368) Homepage
    Just a note for LinuxPPC users: the G3 Mac doesn't load properly unless you remove USB support from the machine before install. This caused us some problems before one of our managers figured this out :)

    This having been said, my only problem with the PPC architecture is that so many darn PPC machines still use one-button mice...
  • ..if to take advantage of specific SIMD functions you HAVE to write it in assembler. (Or are there any good cross-platform numerical libraries - C wrappers for SIMD (MMX, 3DNow, G4) - available for me to use?)
  • Code Warrior? Umm, I use gcc/Code Fusion for development. I thought about some open MMX/AltiVec cross-platform libraries (ideally compatible with STL containers) to write my numerical code. Are any existing? (A web search has yielded nothing so far.)
    My dream would be something like the GNU Scientific Library, but in C++ (it is very counterproductive to write scientific analysis software in C, sorry all C fans..) and with MMX/AltiVec enhancements available...
  • You don't seem to know much about AltiVec and MMX

    That's true. And I do not write assembly either. But I do write a lot of numerical analysis code for my research and would like to know if any generic libraries using SIMD are available. Especially for C++ STL - so I can write getChecksum(vector<int>& Data) and have it use MMX2/AltiVec or whatever to calculate that faster than the built-in algorithms or my hand code.

    Thank you for pointing at my ignorance - but you did not answer my question :) Still searching web..
  • It ate my "bracket" int "bracket" !!
  • ...the platform most widely available.

    Right now, computers based on Intel and x86 compatible CPUs outnumber everything else. And thus, they are the preferred platform for Linux.

    As soon as another platform emerges, I'm sure that some folks will port the compiler tools for that hardware, others will work on the kernel, yet another group of hackers will port XFree and in the end, it will be fully supported -- if only enough people feel the urge to use this other platform with Linux.

    And because of this, the question "which is the preferred platform for Linux?" is pointless: It's the platform you already own.

  • Well, it might mean that gcc has better optimization on x86/Merced than it does on PPC.


  • Well someone mentioned above that Apple has been folding PPC changes into gcc.

  • by Outland Traveller ( 12138 ) on Tuesday August 17, 1999 @10:35AM (#1742377)
    I just "acquired" a new Apple 333Mhz G3 with 512K L2 cache, a 14" active matrix screen, 64MB RAM, and a 4G HD. It comes with an IrDA port, 10/100 ethernet BUILT IN (not via a fragile card clip-thing or cable), 2 usb ports, a built in modem (which I believe is a REAL v.90 modem, and not a software modem, but I might be wrong), a CDROM drive that can be replaced with a DVD drive without any other special upgrades, an ATI video chip with 8MB SDRAM, VGA-out, S-video output, sound in/out jacks, builtin mic and speakers, a cardbus slot, and a SCSI connector.

    This cost $2300 retail, and for the hardware it appears to be an absolutely great deal. I have a few other new PC laptops here from Compaq and Toshiba. In the same price range they don't have built-in ethernet, they don't have SCSI, and they don't have as much video RAM. They are also Celeron or AMD K6-x chips, not PII/IIIs. In my subjective opinion the G3 laptop is cooler looking as well.

    So.. before you go and bash apple powerbooks, check out the specs, pricing, and use one for a week. All my other computers here are PCs but you have to give credit where credit is due. Apple's G3 powerbooks are real contenders, even leaving out the OS.

    PS: Compaq prices their consumer laptops very low, but who would want a "retail" button with a shopping cart icon right next to the trackpad, even if the rest of the specs rock? GRRR.

    Outland Traveller - new and laptop enabled!
  • [x86] will make things much easier for the newbies (which, in a few years, will greatly outnumber us).

    The newbies have always outnumbered us. Exponential growth is like that.

    What might be different is that we're getting users less interested in learning how things work.
  • I've been trying to find the schematics for this since last friday's article []. Unfortunately, it doesn't seem to be on the website [], and no one's returned my email so far. Perhaps they're thinking similar things to what's been expressed in this thread.

    I did find schematics for the earlier reference designs that have been pointed out as counter-hype:

    1995 December dual 604 design []

    more recent 'spruce' 6xx/7xx reference board [] (uniprocessor, based on the CPC700 hostbridge)

    There's no license on either of these, though, so it may be the 'free' part that's new.

    To counterflame:

    Hardware designs can benefit from open development in all the same ways that software can -- faster development, better designs through the pooling of resources and peer review. Our community can benefit in the same way as well -- no one can take control of your computer away from you. How dare you flame someone for wanting to hack!?

    Yes, prototyping hardware is expensive, but a large part of that is because fabs are designed for mass production runs. There's a fixed cost associated with setting up a particular design, whether you're making a couple of prototypes or 10,000 units. I bet we could design a fab for low-volume production that would be the other way around. And no, it's not cheaper, but it's definitely possible to build motherboards in your garage.

    What really bothers me here is the offhand dismissal of a call for openness. Are you trolling? Remember, amateurs can't write production quality software. Something as serious as an application suite (I won't even speak of an operating system) can only be designed by a qualified team of professionals, and their work costs $100 per line of code. What are you going to do, hire some college kid to do it?
  • Running one OS on a variety of platforms and processors is very useful: it ferrets out platform dependencies and bugs, it encourages hardware competition, and it discourages vendors from shipping binary-only versions.

    All the processor dependencies that have crept into Windows and its software architecture (ActiveX, drivers, etc.) are one of the biggest problems Windows is facing, and it is good if Linux can avoid falling into the same trap.

  • Alpha seems like a much better processor design to me than Merced and its successors.

    But Intel understands one thing that DEC doesn't seem to: providing a free, open source C/C++ compiler is essential for success with free software. In effect, that approach recoups the cost of compiler backend development from the people who buy the chip; charging for the compiler puts the burden on the software developers. But since developers of free software already donate their time, charging them for the compiler doesn't make a lot of sense.

    In different words, at least in the open source world, the GNU C/C++ compiler is essentially part of the processor itself, and if GNU C/C++ doesn't perform well, then it doesn't matter much how fast the processor is with some proprietary compiler.

    Unless DEC sponsors work for improving the gcc backend for Alpha to be competitive with their own proprietary compiler, my guess is that the 64-bit Intel chips are going to win.

  • windowmaker compiles just fine on my powermac 6500. I'm running MkLinux DR3, so we're talking about the linux kernel sitting on top of MACH 3.0. Many Red Hat utils are standard with this release too (netcfg for one). I'd say the biggest reasons to use PPC for linux are small voltage requirements, Copper process (some people are still using aluminum), RISC (not just a crunchy RISC outside and a chewy CISC x86 center), 32 registers vs 8 (better SIMD down the road), and a smaller sized processor that brings down price.

    Or you can still use Intel. If LinuxPPC is as close to a Linux distro as you need, I don't see any reason NOT to be using a PPC, especially since Apple is on the brink of releasing G4 boxes.
  • I can see it now.. XFree86... now with Perl Drivers!!

    That's the big deal anyhow.. I have an Alpha. My biggest deal is not linux at all(stable as hell). My problem is getting drivers (3D hardware acceleration comes to mind.)

    I've noticed that a lot of the cross-platformism is starting to dwindle a bit. One of the reasons that Linux became so popular was the joy of hackers digging into hardware. (At least for me.)

  • We can all brag and swish our feathers around, but we 2nd tier platforms have to UNITE.

    Hardware Peripheral manufacturers (Certain 3D card makers like 3dfx) refuse to support the alpha. ATI hasn't been that great (Maybe changing soon?). There are just loads of peripherals.

    Mr Alpha and Ms PPC had better let everyone know that we need specs. As companies come in and dump their binaries for x86 out there, we have to let them know that that's not good enough. You either support LINUX or you don't. I'm not saying that companies have to give out their family jewels, but they should be willing to allow people to use whatever platform they want.

    They don't realize that a good programmer can build cross-platform code. Bad ones don't. Look at LinuxPPC and AlphaLinux. 99.9% of programs compile right out of the rpm. (The only exceptions being ones that are very hardware dependent and include lots of x86 asm "speedup code".)

    We should have a platform-independent stance. As more people and newbies move into Linux, they aren't going to care what platform Linux runs on. This is a good thing.

    By keeping Linux platform independent, we aren't tied to the death of Intel by Transmeta.. or Elbrus, or whatever.

  • Yeah, OpenFirmware is supposed to support processor-neutral card firmware. It has a built-in Forth interpreter. In theory, card manufacturers write their firmware in Forth, which then works on any OF-supporting hardware. Of course, since Apple is the only hardware maker (that I know of) that uses OF, most card makers probably just use PPC assembly. I could be way off base here, perhaps OF forces the use of a higher-level language, I simply don't know enough to say.
  • If the IBM motherboard is CHRP compliant, then whether the PPC chip is from IBM or Motorola shouldn't matter. It would be similar to a Socket 7 motherboard in the Wintel world: you can put in an Intel, AMD, or Cyrix chip as long as it's a Socket 7 chip...

    If my understanding of AltiVec is correct, it (AltiVec) by itself won't do much. Unless Linux (or MacOS, or DarwinOS, etc) takes advantage of it.
  • by bwz ( 13374 )
    Well, the S70 AIX box is very similar to the largest AS/400 PowerAS. It uses the same CPU too. And it ought to be able to support Linux (don't think anyone has done that yet though). I don't know if the current AS/400 hardware is capable of running in 'tag-less' mode (necessary for running any UNIX-like system). On the VM front I've seen some work being done and I even tried to boot a kernel but it didn't work :-/ (I'm no '390 guru) see Linux on the IBM ESA/390 Mainframe Architecture [] for some info..


    Has it ever occurred to you that God might be a committee?
  • ThinkPad PowerSeries 8x0 (850 and 820 that I know of; was there an 830 too?), a CHRP portable! You can't buy them anymore though :-( I think the CPU was a PPC601 or somesuch..


  • I thought that stupid argument died out years ago. There are few "true" RISC systems still out there. The PowerPC surely isn't one.

    Claim: The last general-purpose-designed-for-performance CPU that was internally RISC-like was the DEC/CPQ Alpha 21164; today the core of every such CPU is very much like dataflow!

    The "breakthrough" of the RISC design wasn't coming up with simpler instructions; it was breaking the CPU into independent subsystems that could work in parallel. Doing an integer operation at the same time as a floating point one, while loading data from the bus/memory, means 3 things are going on at once. A "simple" instruction set makes it more obvious where the pieces are, but is fundamentally irrelevant.

    A few things to consider:

    Not everybody agrees on why RISC is better than CISC, or why it came when it came.

    Some say that it's all in the title of a book "Computer Architecture A Quantitative Approach". i.e. that pre-RISC designers had a flawed design philosophy.

    Some say that it was because you could finally get enough transistors on a single die to make a pipelined non microcoded general purpose CPU in a single die.

    Some say it's because pre-RISC cpus were designed for assembler programmers and RISC CPUs are designed for compilers.

    And: What makes RISC RISC?

    Is it the load-store architecture (must load data into registers before manipulating it)?

    Is it the uniform insn length?

    Is it the design philosophy?

    IIRC the first RISCs (the first CPUs that were called RISC) didn't do multi-issue of insns in the same cycle and they most definitely were not out-of-order..

    Oh, well - mostly pointless for anyone but historians and marketers :-)

    When programming in assembly (the only time RISC/CISC is visible to a programmer), you want it to be as CISCy as possible; it makes your life easier. Think about

    C = A + B

    In an old assembler that is one instruction. Simple, easy to read (for assembly). In a load-and-store system that is

    load A to r1
    load B to r2
    add r1 to r2
    store r2 to C

    4 vs 1. You tell me which one you want to hand code in.

    The flaw with this example is that you assume we won't use C for some time - usually C will get used again (maybe the only reason it was ever in main memory was that the CISC the asm was written for was register starved). But yes - RISC systems generally do execute more insns than CISC systems - just not four times as many :-) :-)

    Also, RISC pushes more of the work to the software, which is fundamentally slower than hardware. Let's do as much stuff as low as possible so our systems run faster. Adding layers sucks.

    I assume this is irony? Software in-and-of itself has no 'speed' - only when executed on hardware does the software have 'speed' - and RISC is supposed to make the hardware solve the same problem in a smaller amount of time than a CISC design in the same process with the same constraints (cost, power..) and the same amount of development money...

    Imagine a future with CISC instructions on pipelined cores. How one gets translated to the other is meaningless to programmers, although an interesting research topic for the hardware folks. Think of it as a library: you just care what the interface is (the opcodes); let the library designers (EEs) handle the details of getting that interface to work well. Maybe some of the ease of the monster CISC stuff needs to go away to help out the core (trade-offs are part of an engineer's life), but as a rule CISC is better for programmers. Just let the RISC vs. CISC war be forgotten. We need bits and pieces of each one.

    Must be heavy irony, because that's what we have today - the P6 (used in the PPro, PII and PIII), K6 and K7 cores are multi-issue super-pipelined ones; they happen to be implemented as translators to 'internal RISC' that's really a kind of dataflow (just like all current RISC CPUs)..


  • Room temp 1GHz 21164!!?? Maybe it didn't have cryotech cooling but it must have had some (Alphas dissipate more heat than anyone else)

    The IBM PPC demo really was room-temp, but it was not a complete CPU (no FPU etc.) and was just a 'technology demo'; it also used an older process technology (it was around when IBM announced copper interconnects, but the 1GHz demo was in aluminium).


  • The 21264 dissipates 72 W at 667 MHz in the .35 micron technology, and to make it run at 1GHz I bet they had to increase the core voltage. So I suspect either a serious fan or liquid cooling. Not that liquid cooling is 'cheating'; 'once upon a time' the 'distributed heat' (for warming apartments) was in part from IBM's datacenter (1-2 km from where I live). That's real heat dissipation :-) :-) :-)

  • i clearly remembered something about the PPC architecture having 64-bit support, but couldn't find the details, so i did a search on Google.

    apparently there is a 64-bit PPC. IBM uses it in some of their workstations. lookit. tar.html
    "IBM's NorthStar superscalar RISC microprocessor integrates high bandwidth and short pipe depth with low latency and zero-cycle branch mispredict penalty into a fully scalable 64-bit PowerPC-compatible symmetric multiprocessor (SMP) implementation. Based on PowerPC architecture, the first in the Star Series of microprocessors, the NorthStar processor contains the fundamental design features used in the newly available RS/6000 and AS/400 server systems targeted at leading edge performance in commercial applications as characterized by such industry standard benchmarks as TPC-C, SAP, Lotus Notes and SpecWeb."

    i'd imagine that you ought to be able to easily cobble together something based on this chip. it might not be able to run MacOS, but linux is the point of this thread anyway.

    - mcc-baka
    who needs sheepshaver?
  • This is interesting. One of my first exposures to Linux was a talk given at a local LUG by Jon 'Maddog' Hall. He gave a very insightful history of Linux in the early days and Linus' difficulties getting equipment to do development and testing. My recollection is that Linus was about to start the main development using some of the first Apple PPC's that were hitting the market, as they were going to be made available through some benefactor. In steps Maddog, aka FormerDigitalUnixAlphaGuy (now with VA?), and gets some Alpha machines shipped over to Finland gratis. PPC gets left in the dust......quite a shame.
    I myself run both flavors of machines, and prefer the LinuxPPC R4 (gotta get me the 1999 issue soon!)on my Apple 6500 PPC 250 over RH, Caldera or S.U.S.E. on my Intel boxes. IBMs latest action will only make the PPC boxes more mainstream, and therefore better supported. I can't wait till next year.
  • I must admit that I'm not too familiar with PPC at all... What's the performance/cost ratio like?

    So far, it has been so-so, mainly because Apple is the only company that sells PPC computers for desktop users, and Apple stuff is overpriced. On the other hand, a few years ago (before Apple backstabbed the Mac cloners) the PPC perf/cost was pretty good -- better than x86. The hope provided by IBM's latest announcement is that the cost of making PPC systems will go down. This would increase the perf/cost ratio again, competitive with or exceeding x86.

    What sort of compatibility issues are there with hardware and the like? Will a PPC box use my PCI video card?

    Yes, assuming someone writes a driver for that card and the OS that you're going to run.

    One "issue" (just off the top of my head) is that some cards have some software burned into ROMs, and that software is probably written for the 8086 and makes calls to an IBM PC BIOS (and probably extends that BIOS as well). While someone will write drivers for your cards, you might not be able to press Ctrl-A during boot up to reconfigure your Adaptec SCSI card's settings, for example. You'd probably be able to run a program like that after booting, but that sounds like it could get chicken-and-egg-ish.

    I think there was supposed to be some kind of fix for this issue back in the CHRP days, which involved putting processor-neutral code on PCI cards. I dunno much about it, though. (Is this what "Open Firmware" was?)

  • BeOS hasn't been able to run on G3s, to this day.

    Sorry to pick nits, but this is deceptive. BeOS' compatibility problems are with recent Macs (which just happen to be the only machines running G3s), not the G3s themselves. Building a G3-based machine that works with BeOS shouldn't be a problem.

  • And is Intel going to attach a string to this money, saying that Red Hat must withhold source code? Are they going to encourage people to code in assembly language? Not likely. All those Intel dollars being spent on Open Source will be very useful to the PPC users who don't mind typing "make".

  • "It looks like PowerPC could well become the preferred RISC architecture for Linux."

    x86 is not RISC. The point I get from this is not that PowerPC Linux boxen will overtake x86, just that the Merced will be all too expensive, opening the door for a more affordable alternative -- which Alpha has proven not to be. The point is that PowerPC, with the help of IBM's proposal, will make powerful RISC systems that run Linux more affordable than the Merced alternative. If the architecture bound to it proves superior, it's all up to the programmers after that.

    Long Live PowerPC.
  • Not with Intel investing over $200 million in Red Hat. /grin/

    See the Red Hat Wealth Monitor []
  • But it still only has one mouse button.

    I'd buy a PowerBook and never look back if they would make one with three buttons like the ThinkPads.

    Anyone know if MacOS X is going to continue the one-button tradition?
  • people use what they have. Period. I have a dozen Intel boxen around that I use because I don't have to spend money on them -- I *have* spent it.

    Of course nobody's going to throw away their old boxes; what's your point?

    It doesn't matter that the market may suddenly swell a little with new PPCs that are less expensive than Apple boxes; it still entails SPENDING money instead of using what we already have.

    Well, duh. New x86 computers cost money too.

    The whole point is that if you're building a new Linux box you can go with an affordable RISC platform instead of just another Intel machine.

    If you're talking about recycling old hardware, you should be able to use old hard drives without problems, and maybe your RAM and some PCI cards.
  • Yeah, but back then you had one choice for an OS on those machines: Apple's, since the plug was pulled on the PPC version of NT and the OS/2 port didn't go anywhere.
  • ...the platform most widely available.

    Uh, no, that would be the one with the most market share.

    Right now, computers based on Intel and x86 compatible CPUs outnumber everything else. And thus, they are the preferred platform for Linux.

    Right, like Windows is the world's most "preferred" operating system, just because it's running on more machines than anything else.

    And because of this, the question "which is the preferred platform for Linux?" is pointless: It's the platform you already own.

    Just because you're running Linux on a 486 doesn't mean that the 486 is your preferred processor. I'm running a dual Celeron but would prefer to have quad K7's.
  • Because both Moto and IBM don't see themselves in the desktop market.

    It's entirely possible, because both companies sub-licensed the MacOS to other companies, and Motorola even built their own line of SuperMac clones. Understandably, this pissed off some execs (mainly Moto's CEO) and is one of the reasons why PPC development is lagging behind.

    If anything, Moto and IBM will push for these boards to get back at Jobs for stabbing them in the back. Hope they do it.
  • First of all, the processors available today are not even getting any use. Build a better bus; find a modern alternative to IRQs. Give me a backplane system that handles gigabits of data so my processor actually has something to do. Give me a drive system that pumps out gigs a second rather than 10-15 megs at a time. Give me something I can run visual interpretations on, exploration systems.

    Well, the correct answer is the usual one: it depends. The location of the bottleneck is highly dependent on what exactly you are running. Some of the processes I run are I/O-limited and having faster hard drives would speed them up. Others are bandwidth-limited (yes, on a LAN) and gigabit Ethernet would help. But most of my stuff (guaranteed to be untypical) is actually CPU-limited - and I am running on a dual Sun Ultra 60.

    So, yes, I understand the importance of the bus, and DMA, and AGP and all the other TLAs. But for me, at least, processor speed is more important right now.

    P.S. For example, in the FPS (Quake, etc.) gaming community there is a very well understood distinction between being CPU-limited and fillrate-limited. Depending upon your specific hardware, any of these can be your problem.

  • The G3 and G4 chips are best used in portables. The fact is that Apple makes the best portables. The reason is the fact that the G3 can run at fast speeds without overheating. If a Linux Laptop vendor would use the G3 or possibly G4 chip in a laptop it would beat any Intel based laptop on the market. Also without all the proprietary Apple stuff the laptop would be somewhat inexpensive.
  • A lot of arguments on here have all hinged on the assumption that we need real 64-bit processors. I don't think that it's as important as people are making it out to be.

    The PPC design has 64-bit extensions that would be more than adequate for your 64-bit needs (file-systems or whatever). Let's face it, the majority of your integer work is perfectly happy in 32-bit, and when you do need real double-precision floating point values, the PPC is happy to oblige you with native 64-bit floating point arithmetic.

    More importantly, the amount of power the average user needs is not increasing quite so rapidly anymore, especially for those not dependent on Micros~1 products (which have allegedly been deliberately bloated to force users to buy newer hardware). Does Uncle Joe need a GHz machine to browse the web? Hell, at the speed he moves the mouse we could have gotten him a 486 and he wouldn't have noticed the difference.

    And while it may not be the most powerful architecture on the block, it certainly kicks some ass in the 32-bit world (and I've studied the architecture). Given the alternatives, I think that it definitely deserves a "best in its class" type award.

    The more important question to ask is (and people have been asking this), will it be affordable? If it is, it will succeed and everybody will be happy (myself included). I think that it's a beautiful architecture, and would love to be able to buy an affordable computer based on it.
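    The 32-bit-integers-plus-64-bit-doubles point above is easy to demonstrate: an IEEE 754 double is 64 bits wide regardless of the processor's integer word size. A quick Python check (nothing here is PPC-specific):

    ```python
    import struct

    # A 32-bit signed integer covers -2**31 .. 2**31 - 1: plenty of range
    # for loop counters, array indices, and most everyday integer work.
    assert len(struct.pack("<i", 2**31 - 1)) == 4   # fits in 4 bytes

    # Doubles are 64 bits (8 bytes) with a 53-bit significand, i.e. about
    # 15-16 decimal digits of precision -- independent of the integer width.
    assert len(struct.pack("<d", 1.0 / 3.0)) == 8

    # Past 53 bits, doubles can no longer represent every integer exactly:
    assert 2.0**53 + 1 == 2.0**53
    ```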
  • by James Lanfear ( 34124 ) on Tuesday August 17, 1999 @09:07AM (#1742440)
    But the 64-bit Alpha processors are expensive, and Alpha's future in general is uncertain. Systems based on Intel's Merced are still a year or so away, and they're going to be quite expensive. PowerPC, on the other hand, has excellent floating-point performance, today, for cheap.

    I think they missed an important point here. The PPC is a wonderful arch., but it won't be long before the industry starts the Big Move to 64-bit (Merced will probably be the catalyst, right before it bombs[0]). Unless IBM is planning a G5 based on the PPC620, this will leave them behind.[1]

    Of course, if the price/performance favors the G4/G5 enough (say, dual 800MHz G4s for the price of a 1GHz Alpha) then it may get ahead, but otherwise any victory will be short-lived.

    [0]: I still think that IA-64 is a plot by HP to kill Intel. Instead of trying to compete with them, HP offered to help design the new arch., then came up with something so horrible that there is no way it can succeed.

    [1]: Before anyone flames me for implying that 64-bit is always superior, think about this: once the industry begins the move to 64-bit, the Alpha, SPARC, etc. will all be there waiting. If Merced bombs, the chip most likely to replace it is the Alpha, which blows away a PPC. Demand == lower prices.
  • Macworld made the decision about a year ago to start covering *nix-type issues because MacOS X will be running on top of NetBSD (or did they decide to use FreeBSD instead?).

    In addition, many Mac users feel like kindred spirits to Linux users. The operating systems may be vastly different, but the rejection of Windows and belief that the OS does matter makes the Mac community pull for the Linux community.

    Finally, if Linux became popular on PPC, prices for PPC hardware components would decrease as the market size increases (good old economies of scale).
  • No, pushing work to software isn't slower! That's the whole point of RISC: optimize the hell out of a few simple instructions to the point where the four instructions execute faster, cheaper, and cooler than the one CISC instruction they replace. The more granular the instructions, the easier they are to schedule and pipeline.

    My understanding of Merced is that much of its speed comes from the compiler scheduling the parallelism explicitly and building hints into the code, rather than the chip reordering instructions at runtime. In which case 1) assembler by hand will be harder anyway, and 2) old software will need to be recompiled to get the most out of the chip.

    The first point doesn't bother me. I did a bunch of MIPS assembler in school. That was enough. Compilers these days do a pretty good job. Let them!

    The second point doesn't really matter for Linux because we've got the source for damn near everything. It's not so easy for Windows, where there's tons of assembler everywhere and updates will cost the end user...
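    The load/operate/store decomposition described above can be sketched in a few lines - a toy model, not real machine code, with "memory" as a Python list and "registers" as locals:

    ```python
    def cisc_add(mem, dst, src):
        """One CISC-style instruction: add straight from memory to memory."""
        mem[dst] = mem[dst] + mem[src]

    def risc_add(mem, dst, src):
        """The same work as simple RISC-style steps: load, load, register
        add, store. Each step is tiny, so it is easy to pipeline, easy to
        schedule around stalls, and easy to clock fast."""
        r1 = mem[dst]      # load word into register
        r2 = mem[src]      # load word into register
        r1 = r1 + r2       # add registers
        mem[dst] = r1      # store result back to memory

    a = [0, 7, 5]
    b = [0, 7, 5]
    cisc_add(a, 1, 2)
    risc_add(b, 1, 2)
    assert a == b == [0, 12, 5]   # same architectural result either way
    ```

    The architectural result is identical; the RISC bet is that the four small steps, interleaved with other instructions in the pipeline, finish sooner than the single complex one.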
  • "PowerPC, on the other hand, has excellent floating-point performance, today..."

    It's well known this isn't the G3's strength.

    Not today, but soon. The G4 has great FP *and* AltiVec to boot.

    Also, as a previous poster pointed out, AltiVec is Motorola's baby, so I have to wonder if it'll be supported by IBM's spec.

    IBM has agreed to support Altivec.

    Hopefully the PPC will come of age in the portable market where power consumption matters...
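    For readers unfamiliar with AltiVec: it adds 128-bit vector registers holding, e.g., four 32-bit floats, and instructions such as vaddfp operate on all four lanes at once. A rough Python model of that behavior (the four-lane width and the vaddfp mnemonic come from the AltiVec spec; the rest is purely illustrative):

    ```python
    def vaddfp(a, b):
        """Model of AltiVec's vaddfp: a single instruction adds four
        32-bit floats packed into one 128-bit vector register, lane by
        lane, instead of looping over them one at a time."""
        assert len(a) == len(b) == 4
        return [x + y for x, y in zip(a, b)]

    # Four values processed in what the hardware treats as one step:
    print(vaddfp([1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]))
    ```

    That four-at-a-time throughput, on top of decent scalar FP, is why the G4 is expected to close the floating-point gap the quoted poster mentions.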
