Technology

A Look Into The Cell Architecture

ball-lightning writes "This article attempts to decipher the patent filed by the STI group (IBM, Sony, and Toshiba) on their upcoming Cell technology (most notably going to be used in the PS3). If it's as good as this article claims, the Cell chip could eventually take over the PC market."
This discussion has been archived. No new comments can be posted.

A Look Into The Cell Architecture

Comments Filter:
  • Dupe! (Score:4, Insightful)

    by Lostie ( 772712 ) on Saturday January 22, 2005 @11:41PM (#11445651)
    Posted only a couple of days ago too. [slashdot.org]
    Timothy do you actually read Slashdot?
    • by Anonymous Coward on Saturday January 22, 2005 @11:47PM (#11445693)
      "Timothy do you actually read Slashdot?"

      Here's a better question. If he will not, why should we?
    • /. editors actually reading the articles? You must be new here.
    • Re:Dupe! (Score:5, Insightful)

      by Ohreally_factor ( 593551 ) on Sunday January 23, 2005 @12:24AM (#11445857) Journal
      Timothy do you actually read Slashdot?

      Wouldn't that be like eating from the toilet?
    • Re:Dupe! (Score:3, Insightful)

      by gl4ss ( 559668 )
      yes it's a dupe. and the article is STILL FULL OF CRAP.

he's buying the sony propaganda on full throttle, probably wasn't around a couple of years ago when they did the EXACT same thing with the ps2 - overhyping it to the max.

it's not some revolutionary chip that will give you a desktop with 4x the power for cheapo cheap..
    • rhetorical question: I wonder, how hard would it be to write a bit of code to check if a story in the past 2 weeks or so has pointed to the same URL?
  • by Anonymous Coward
  • x86 (Score:4, Insightful)

    by mboverload ( 657893 ) on Saturday January 22, 2005 @11:42PM (#11445655) Journal
    Only if it complies with x86. Seriously, x86 will be around for a century.
    • Re:x86 (Score:2, Insightful)

      I can see x86 disappearing only if console-style computers become much more popular than they are now. If, for example, HDTV set-top boxes supported email, Word, and spreadsheets, it'd happen pretty quickly.
      • by tepples ( 727027 )

        If, for example, HDTV set-top boxes supported email, Word, and spreadsheets, it'd happen pretty quickly.

        I'm not buying a console-style computer until it supports GCC out of the box. I want the freedom to compile my own software for a given machine and distribute it without having to go through a console maker that refuses to even talk to individual developers and smaller firms.

        • by that request i take it that your os of choice is either linux or some bsd variant. this puts you outside of the user group a console style computer would be aimed for (and at the same time would put you under monitoring by mpaa and riaa as a suspected pirate)...
    • Except for games the next XBox, PS3, and Nintendo Revolution are on PowerPC (The cell is PPC). If the Mac gets cell, the article claims it could be a massive turnaround, as the cell benchmarks seem to be well and beyond anything Intel could offer in the near future.
      • Except for games the next XBox, PS3, and Nintendo Revolution are on PowerPC (The cell is PPC). If the Mac gets cell, the article claims it could be a massive turnaround, as the cell benchmarks seem to be well and beyond anything Intel could offer in the near future.

No kidding. That's exactly what I was thinking. x86 has been completely supplanted in the console market, with no share whatsoever. This pushes game developers toward the PowerPC platform.
        PS3 is Cell (PowerPC based), XBox is PowerPC 970 b
        • This would cause a massive migration to Macs for the high-end market. Add that to the fact that:
          1) Game developers would already be using PowerPC systems to develop their games for all three consoles
          2) The Macs would be far more powerful than their x86 counterparts, allowing for much higher-powered software and games

          3) Linux runs very well on powerpc...
    • Re:x86 (Score:3, Funny)

      by DarkMantle ( 784415 )
x86 won't be around for 100 years.... No way. That would be too limiting. With recent advances in optical storage (I don't mean Blu-ray, I mean optical "chips" that resemble Star Trek's isolinear technology) x86 won't be able to keep up.

      Hell, if Intel's processors get any warmer, I'm going to get the gas cut off and let the computer warm the house.

      We need to advance to 64, or 128 bit technology to be able to keep up with other technologies. Cell seems like a logical next step after reading this post a few
    • Re:x86 (Score:5, Interesting)

      by Screaming Lunatic ( 526975 ) on Sunday January 23, 2005 @03:33AM (#11446535) Homepage
      Only if it complies with x86. Seriously, x86 will be around for a century.

      That's ridiculous. x86 is dead. The overheating and power consumption confirms it.

      CISC hardware is horrible in mobile devices because of battery life and power consumption. Your camera, iPod, cell phone, and PDA do not use x86 hardware.

All next generation consoles will use RISC hardware. Hence, economies of scale to get the price down.

      x86 is dead and mobile devices wrote the eulogy.

      • Re:x86 (Score:3, Funny)

        by NanoGator ( 522640 )
        "x86 is dead and mobile devices wrote the eulogy."

        Until my mobile devices can play Wing Commander, you're full of shit.
    • Seriously, x86 will be around for a century.

      No, that would be C86. X is 10.
    • by ceeam ( 39911 )
Ja, und das Reich ist fur 1000 jahren! ("Yes, and the Reich will last for 1000 years!") // I don't know German :)
x86 will die sooner, it'll just take a lot of guts. AMD and Intel both want to can it, but can't because it makes them look like dead-beat dads. If these two companies would talk to each other a bit more and develop a mutual new ISA, we could just move on from this mess. Too bad so many companies are using code older than me, or worse, binaries older than me which no longer have source code. That said, why can't x86 be legacy?
  • Its a dupe (Score:3, Insightful)

    by mnmn ( 145599 ) on Saturday January 22, 2005 @11:42PM (#11445658) Homepage
The article was interesting, but we don't have to read it twice.

Maybe slashcode should have a link repository: if someone adds a new story with a link, they get a warning that another story pointing to the same link was posted 18 hours ago...

    We've even seen triple-dupes.
That's a very good idea. Obviously, something has to be modified in the script used to add stories to the system that compares the stories to previously submitted articles. A link comparison is probably the best way of doing this. Of course, sometimes links can be duplicated, especially if used as a follow-up to a previous article.
    • We've even seen triple-dupes.
      "Tripes," you mean? Yeah, we've seen a lot of tripe here at Slashdot, all right.

    • Those are called tripe.
    • You have got to be on freaking crack.

      Are you serious? I have been reading slashdot since 1997 or some time around there, and I can tell you that any good suggestions ever made, such as the one in your post, will *never* get implemented.

Slashdot started with a very very good seed of an idea about a quasi-community news amalgamation site - but since its inception it has proved, beyond a shadow of a doubt, how lazy the founders actually are.

      They have had opportunity upon opportunity to build upon this site
  • Reading the article makes it seem like all computers will disappear. I find it so hard to believe that the new cell processors will be that advanced. I can believe they are good for specialized uses but not as a general computer.
    • by JQuick ( 411434 ) on Sunday January 23, 2005 @04:46AM (#11446669)
      The author had a good grasp of the high level architecture, but beyond that was clueless. His interpretation of the design is way off the mark.

He seemed astonished by the 1024-bit-wide data paths. The Power family is designed with cache fill lines of 128 bytes. So, for instance, the G5 L2 cache already fetches 128 bytes into cache for each main memory read.

Similarly all the talk about doing away with cache and VM is bullshit. Instead of having each vector unit interfere with a shared cache as is done today, they've simply added smaller per-ALU caches to the design, and complemented it with a device that is a souped-up cache controller/MMU unit (the DMAC). The DMAC apparently will be able to address both memory and other hardware by having a virtual address layer, to enable reference to remote cell units as well as local physical hardware. The 64 MB of high speed Rambus memory may be all that is required for a PS3, but in a workstation implementation that memory is L3 cache.

Altivec currently has 32 vector registers. Each ALU has 128. It is highly likely that the core opcode architecture will remain similar. The most likely change will be to add a few flow control instructions to the existing mix.

Altivec is already powerful but the biggest limiting factor is latency. Altivec can perform 1 instruction per clock on the G5. However, the pipeline is 8 levels deep, thus the overhead involved in fetching data, loading registers, performing a calculation among 1-3 registers, and getting a result is prohibitively expensive. However, if you can arrange to submit 8 calculations (or more) in rapid sequence, you can keep Altivec and the CPU busy and reap great benefits.

The beauty of Cell will be in providing the ALUs with a bit more autonomy (though not much more, they are still basically vector units), and enabling the main CPU to keep doing useful work while a number of ALUs are cranking away. Other novel design features provide for communication and synchronization with other units via remote addressing and timing (that's what those realtime clock signals are all about).

      This will be very fast, and very cheap. However, all the hand waving, and theorizing this guy does about both hardware and software reads like patent bullshit.
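The latency point above can be made concrete with a toy model (hypothetical numbers, not real G5 timing): a fully pipelined unit issues one op per clock, so a chain of dependent operations pays the full pipeline depth per op, while independent operations overlap almost completely.

```python
PIPELINE_DEPTH = 8  # stages, matching the 8-level Altivec pipeline cited above

def cycles(n_ops, dependent):
    """Toy cycle count for a fully pipelined unit issuing one op per clock.

    Dependent ops each wait for the previous result to clear the pipeline;
    independent ops fill the pipeline once and then retire one per clock.
    """
    if dependent:
        return n_ops * PIPELINE_DEPTH
    return PIPELINE_DEPTH + (n_ops - 1)

print(cycles(8, dependent=True))   # 64: each op waits out the whole pipeline
print(cycles(8, dependent=False))  # 15: overlapped, ~1 op/clock after warm-up
```

That 4x-plus gap is why "submit 8 calculations in rapid sequence" pays off so handsomely on a deep pipeline.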
  • ...when you don't read your own news site. :/

    As someone posted above, it seems like it would be fairly trivial to at least make a "dupe check" program that tells you whether you have linked to the same URL before...
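As a sketch of how small that dupe-check could be (hypothetical names, in-memory only; the real slashcode would check its story database):

```python
from datetime import datetime, timedelta

class DupeChecker:
    """Toy version of the duplicate-story check proposed above:
    warn if a submitted URL already appeared within the lookback window."""

    def __init__(self, lookback_days=14):
        self.lookback = timedelta(days=lookback_days)
        self.seen = {}  # url -> time it was last posted

    def is_dupe(self, url, now):
        last = self.seen.get(url)
        self.seen[url] = now
        return last is not None and now - last <= self.lookback

checker = DupeChecker()
t0 = datetime(2005, 1, 21)
print(checker.is_dupe("http://example.com/cell", t0))                        # False
print(checker.is_dupe("http://example.com/cell", t0 + timedelta(hours=18)))  # True
```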
  • Dataflow squared (Score:5, Interesting)

    by Space cowboy ( 13680 ) * on Saturday January 22, 2005 @11:53PM (#11445725) Journal

    The original PS2 design was for a dataflow architecture - the Cell is a continuation (and significant evolution) of the theme. Interestingly enough, if this *does* take off it may be that the best programmers of tomorrow turn out to be the PS2 low-level guys, who've already written the algorithms that are about to be important.

    In the PS2, the MIPS chip was there mainly to do the simple stuff, all the heavy lifting was done on the 2 vector processors, and they were designed to have programs uploaded into them and data streamed through them using a very flexible (chainable) DMA engine. Sounds similar (if in a limited sense) to the Cell chip itself.

    Simon.
    • by Rares Marian ( 83629 ) <hshdsgdsgfdsgfdr ... tdkiytdiytdc.org> on Sunday January 23, 2005 @12:07AM (#11445778) Homepage
      A measly 68k CPU with hardware that was autonomous.

      A measly MIPS with hardware that is autonomous.

      The only thing they need is to sync to the TV set.
    • Re:Dataflow squared (Score:3, Interesting)

      by Syre ( 234917 )
Here's an article [eetimes.com] that goes into some detail on the Cell architecture and why it may not actually be as fast in practice as it is in the glowing predictions made by Sony executives.

      The essential quote:

      UNC's Zimmons has his doubts. "I believe that while theoretically having a large number of transistors enables teraflops-class performance, the PS3 [Playstation 3] will not be able to deliver this kind of power to the consumer," he wrote in response to an e-mail query from EE Times. "The PS3 memory is rumored to

That's exactly right. Despite all the hype, this is basically a new generation of the PS2 architecture. There's a conventional CPU and a number of dataflow vector units. The dataflow units have a small amount of fast local memory and access to main memory. Just like the PS2. This time around, everything is bigger and better, and there's more of everything, but it's the same idea.

      The PS2 was revolutionary, in that it was the first successful non von Neumann machine. There have been many exotic architec

  • Transmeta (Score:4, Insightful)

    by jfonseca ( 203760 ) on Saturday January 22, 2005 @11:54PM (#11445731)
The last time I read about a revolutionary chip that would forever change the world (and the company was so great they even had the Linux creator as a board member), it turned out to be not much more than a loud fart in the wind. (Enter Transmeta)

This is a distributed-processing-capable chip. They're moving software into the chip, doing what software can do in a more compact and probably more efficient way. There's nothing revolutionary here and, besides being a dupe story, it's way overrated. The only attraction here is the fact that the PS3 will use it instead of embedding something open, like Mosix.

    And no it won't "eventually take over the PC market."
    • Re:Transmeta (Score:3, Interesting)

      by kai.chan ( 795863 )
      There's nothing revolutionary here

There are a _lot_ of revolutionary ideas behind the Cell processor. As shown in the write-up, the Cell takes a drastic departure from the conventional arithmetic-unit/cache setup. Additionally, the way the Cell can pipeline parallelizable problems amongst the 8 processing units within itself is a revolution of chip design already. Take, for example, the video encoding/decoding example shown in the write-up, whereas an Intel chip will require processing of each procedure
    • Re:Transmeta (Score:2, Insightful)

      by eobanb ( 823187 )

The only attraction here is the fact that the PS3 will use it instead of embedding something open, like Mosix

      I'm not sure if you're praising or knocking Mosix (or more accurately, OpenMosix), but the method by which OpenMosix migrates processes bears very little resemblance to Cell. OpenMosix's redeeming quality is binary compatibility with most, if not all, existing software written for whatever architecture the cluster is running on. Cell resembles MPI more than Mosix, by far, in that software will have to be

    • Re:Transmeta (Score:3, Insightful)

      by Jeff DeMaagd ( 2015 )
      Transmeta was influential, if nothing else, but for pushing Intel to develop the Pentium-M chips. The Pentium-M pretty much squashed the mainstream market for Transmeta, particularly after the delays in getting faster designs out.
      • Re:Transmeta (Score:3, Interesting)

        by rossifer ( 581396 )
Interestingly (to me), the Pentium-M looks well on its way to squashing the Pentium-4 market.

        Pentium-4 was an architectural mistake conceived with the goal of pushing the MHz numbers up (since the mass market appeared to trust MHz over "MHz-equivalent" labels). AMD astonished them by finally making their alternate naming scheme credible and the plan behind the P4 went straight down the crapper.

        New x86 development at Intel is largely derivative of the P3 core (the family that includes the P-M) and has la
        • Amen to that. If I were in the market for an x86 desktop these days, I would go one of two routes: DP PIIIs or a P-M. I'd prefer the latter because of low power consumption and heat dissipation, despite performance that rivals P4s and Athlons.
    • One of the main differences is that while Transmeta's chip had no market to speak of, there is a market waiting in the wings for this technology -- and it is mega-hella-huge: HDTV.

Within a flexible timeframe, there is an inflexible truth -- there will be no popular analog TV broadcasts in the US. This cannot happen without the technologies in place for digital to replace analog TV. The market is, what, a couple of hundred million sets in the US, and a billion or so world-wide, eventually?

      Do you think the
  • by auzy ( 680819 ) on Saturday January 22, 2005 @11:56PM (#11445739)
    its very rare for a system to be able to be completely parallelised.

There will always be "critical sections", data which can only be used by 1 thread at a time, which limits how much it can be split up.. Then you have programs which can't be.. I mean, you can split up a game, for instance, into sound, video, and keyboard threads easily. To really utilise parallel processing takes a massive amount of code, which, with current languages, seems to make it a bit implausible to get a massive increase.

It should also be remembered that the G5s and G4s already have Altivec, and even though this is on a much grander scale, there will always be bottlenecks that slow it down, preventing 99% of commonly used apps from getting a significantly large increase..
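The limit being described here is Amdahl's law: if a fraction of the work is inherently serial (those "critical sections"), no number of parallel units can buy more than 1/serial_fraction speedup. A quick sketch, using 8 units to match the Cell's reported APU count:

```python
def amdahl_speedup(serial_fraction, n_units):
    """Best-case speedup when serial_fraction of the work
    cannot be spread across n_units (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# Even with 8 APUs, a mere 10% of unavoidably-serial code
# caps the overall speedup well under 5x.
print(round(amdahl_speedup(0.10, 8), 2))  # 4.71
```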

    • by Space cowboy ( 13680 ) * on Sunday January 23, 2005 @12:31AM (#11445893) Journal

      All the programs that run on PC architectures expect certain things to be in place - they expect a single fast central CPU. They expect that good cache usage is important for performance. They expect to have access to gobs of RAM. Etc. Etc. The PS2 (and by extension the cell) is completely different.

      Consider a different architecture. You have a job that consists of multiple things to do. Some of these can be easily parallelised, others are mainly sequential. Divide it up so the parallel ones are coded separately, maybe with some IPC to synchronise to some clock.

      For a sequential part (say rendering the object list of a scene back to front to gain occlusion) the approach that worked for me on the PS2 (which is logically similar, if significantly less powerful) was to divide the job into tasks. Each task (say, one per object in the above) gets its own bit of code and knows about the data that it needs to perform its task.

      The key thing is that the Harvard separation of code and data just isn't, on a PS2. You set up a DMA chain that loads the program into the processor, then streams the data through the program on the processor, lather, rinse, repeat. Make the chain self-submitting and you can effectively forget about that chunk of code now, it'll just happen.

      This is still doing things sequentially (but we've agreed that this is a sequential task, right?) - the point is that it's being done highly efficiently within the architectural constraints. You have a dataflow architecture and even sequential code can hit the performance limits if you code to the architecture.

The Cell looks even more powerful, in that you can chain execution modules together, so you can load code into APUs 1,2,3,4 and stream the data through 1,2,3,4 automatically before it's considered 'done'. This was possible on the PS2, but ... awkward. It'll keep the effective instructions/clock up because you're effectively pipelining your software... Nice idea.

      Simon
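A rough software analogue of that chained streaming, with Python generators standing in for APUs (purely illustrative; real APU programs are vector code uploaded over DMA, not closures):

```python
def stage(program, stream):
    """One 'APU': apply an uploaded program to data streamed through it."""
    for item in stream:
        yield program(item)

# Chain four stages the way the parent describes streaming through APUs 1-4.
pipeline = range(4)
for program in (lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, abs):
    pipeline = stage(program, pipeline)

# Each item flows through all four programs before it's considered 'done'.
print(list(pipeline))  # [1, 1, 3, 5]
```

The chaining is lazy: nothing computes until the downstream end pulls, much like a self-submitting DMA chain that just happens.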
      • I have enough knowledge to be dangerous - would a message passing architecture microkernel be able to take advantage of this sort of architecture more than a macro-kernel would? I was thinking specifically of DragonFly BSD, and the modules that make up OS X.
        • Hmm - short answer: Don't know :-)

I *think* the programming model will be sort-of-like CORBA, with 'messages' being sent from a central despatcher (the G5 probably, though it could be another APU). I think the messages will be self-contained program+data though - they've even called them APUlets. The OS then schedules them to be executed on the first available APU.

          The message is the data, but the code will be bundled along with it, and when it's finished, it'll send another message back to the despatcher
          • Firstly, the apulets are unlikely to be both code and data. The APUs are vector processors designed to process streams of data, making it far more likely that you upload the code and the stream data to it (either directly or from another APU).

            Secondly, Darwin will not need porting to the Cell. It will almost certainly run with no modification on the PU. Things like QuickTime, Quartz and CoreVideo/Audio are likely to benefit by having components run on an APU, as might things like the network stack, bu

    • its very rare for a system to be able to be completely parallelised.

Not really. Current gaming computers are usually bogged down while trying to display a graphics-intense game. Home electronics are composed of video and audio. Much of 2D and 3D visualization and audio are "embarrassingly parallel" problems. Take the video encoding/decoding example from the article: you don't need to parallelize a video frame in terms of individual pixel elements, instead, one opts to parallelize each video encoding process t
  • by Leto-II ( 1509 ) on Saturday January 22, 2005 @11:57PM (#11445743)
    Okay, who was down for Timothy on Saturday night for the /. Dupe Pool?
  • Some Thoughts (Score:5, Insightful)

    by logicnazi ( 169418 ) <gerdesNO@SPAMinvariant.org> on Sunday January 23, 2005 @12:11AM (#11445793) Homepage
    Well, I think we all recognized that article was a little over enthusiastic but it does suggest some interesting possibilities.

First of all I want to say I think it is completely possible to make a processor with 8 APUs and so forth. For starters PowerPC chips already have several separate execution units on them, and I think they use fewer transistors than Intel chips. Moreover, a huge chunk of the transistor budget goes to doing things like cache consistency or complicated instruction prediction which is probably not used on the much simpler APUs.

Of course it seems like this is primarily of interest to game systems or signal processing applications (note that a 4-threaded 32-stream processor is just another way of saying 4 Cell processors, each with a PPC core and 8 APUs). However, I would not be so quick to dismiss this for the PC market. While it may be true that many individual applications may not easily multi-thread, it seems we are approaching a point where the biggest complaint is not the maximum processing rate in one application but the ability to run multiple applications at once. On my computers I'm rarely if ever frustrated at the rate some program is running at, but rather by slowdown in other programs when I run a processor-intensive job or turn on a video. So while drawing a webpage may not be sped up by this processor, drawing several webpages at the same time will be, and that is the sort of thing which makes a big difference for the end user.

Also, a processor like this offers great possibilities for JIT and VM code. The main thread can dispatch instructions and threads to the APUs dynamically based on what is happening in the system. Also I find it interesting that IBM is going the same way as Intel in pushing all the complexity onto the compiler. It makes one wonder if Itanium is really as dead as everyone thinks. Perhaps in 4 years when AMD can't squeeze anything more out of x86, Intel will be ready to jump in having worked out all the bugs in their new chip.
A 1.3 GHz machine with 1 gig of RAM is enough to do everything but video editing; throw a decent 8x AGP card in and you can play any game out there at good quality. This Cell technology looks like a neat toy, but it will be useless for consumers. What matters now is cutting power consumption and heat loss; making things quieter and cooler will sell a lot more than making things faster.
    • Overly enthusiastic is putting it mildly.

      I think that the author "knocks" current CPU architecture entirely too much (both PPC and x86) with the comment that the vector units on these chips aren't dedicated enough. While somewhat true - it's also misleading. Typical application code isn't terribly suited to vector processing. Pushing pixels, and decompressing and compressing video and audio - sure. Word processing code, not so much so.

      Of more possible interest than pushing the complexity to the compil
    • Re:Some Thoughts (Score:2, Informative)

      by David Greene ( 463 )

First of all I want to say I think it is completely possible to make a processor with 8 APUs and so forth.

      Check.

For starters PowerPC chips already have several separate execution units on them, and I think they use fewer transistors than Intel chips.

      Multiple function units on a chip is not the same thing as the 8 APUs of the Cell. First off, there's no indication whatsoever that this is a single-chip architecture. Even if it is a single chip solution, the coupling of a superscalar's function units

      • I understand that functional units are not the same thing as an APU. However, the fact that the G4 (and I think G5) have multiple functional units which can handle the same type of operation means there is silicon to spare in some sense.

As for the question of eliminating the cache, I thought this was true in name only. Isn't that 8K or whatever each APU is claimed to have access to supposed to be on-chip?

    • "This system isn't just going to rock, it's going to play German heavy metal!" - From 'Part 2 - Again Inside the Cell'

      Ahahahahaha!

  • by zymano ( 581466 ) on Sunday January 23, 2005 @12:31AM (#11445888)
    Dally's Merrimac processor. [weblogs.com]

    It's so similar that you wonder if they lifted it from him. The only difference is that Prof. Dally's chip has a big cache.
  • My bad, I thought this was going to an article about cubicles in the modern work environment.
  • by mcc ( 14761 ) <amcclure@purdue.edu> on Sunday January 23, 2005 @01:27AM (#11446097) Homepage
    I've had for a very long time the suspicion that the XBox was basically just a big blindside at Sony. The XBox loses a huge amount of money, and looks as if it will continue to lose a huge amount of money right into the XBox 2 line; Microsoft must be doing this for some reason. My personal theory for awhile has been that at least one of Microsoft's motivations in spending all this money is because they see the Playstation as a potential future threat; i.e., they feared and fear that at some point the Playstation 2 or 3 or 4 will become so close in power and functionality to a PC that it will begin to supplant the PC for common tasks. This would be disastrous for Microsoft; their lockdown on the PC market is complete, but this doesn't protect them from the PC market itself being slowly eaten away at from the bottom by consumer electronics like the ones Sony makes. So to stave off this threat, Microsoft begins to instead grow the PC market it monopolizes downward, so that the PC (as it becomes the "Windows Media Center") begins to slowly suck up the consumer electronics market, competing directly with the Playstation, bringing the fight to Sony's door instead of Microsoft's. Since consumers wouldn't on their own be interested in a PC that supplants consumer electronics, Microsoft instead basically bribes them into being interested with subsidized hardware; they make a big money blackhole out of the XBox to undercut Sony's ability to maneuver with the Playstation, the way the money blackhole that was MSIE undercut Netscape's ability to maneuver.

    This is, of course, all just conjecture.

    But when I begin to see people seriously talking about the chip from the Playstation 3 eventually potentially being used in PC hardware, I begin to wonder if it's maybe reasonable conjecture...
    • by Anonymous Coward
      IBM -> PowerPC
      Apple -> PowerPC

      Cell -> PowerPC
      IBM, Sony -> Cell

      IBM, Apple -> Linux, BSD (Unix)

      Doesn't take a genius to come up with:
      IBM->Cell->Apple+Sony

Sony makes the best computers, Sony makes one of the best gaming consoles.

      Although I'd rather see Apple join forces with Nintendo since these two companies are more alike than any other (quality over quantity).
    • What you say MS fears is called WebTV, and it failed, and anything like it will continue to fail over the short to medium term (five years, at least), for a variety of reasons, including the problems of the input device and the fact that not many people want to use their computer as their TV monitor. Not many people have HD capable TVs, and until they come down drastically in price, not many people will.

      What MS really worries about, and what you got at least somewhat right, is the "Media Center" idea. Even

  • 3 architectures (Score:5, Interesting)

    by SunFan ( 845761 ) on Sunday January 23, 2005 @01:34AM (#11446133)

    It's been said before, but mature industries tend towards three of something, such as GM-Ford-Chrysler. For CPUs, it has to be AMD64/ia32e, PowerPC, and SPARC. They're the only ones with any high-volume prospects. SPARC will certainly be in third place, with AMD64/ia32e and PowerPC duking it out for one and two. The fact of the matter is that Itanium won't be a mainstream processor, and PA-RISC, Alpha, and MIPS are all more-or-less EOL.

For operating systems it will still be Windows, Linux, and UNIX (predominantly Mac OS and Solaris). Okay, that's four, but the other historical major players are all becoming niche legacy platforms.

    For office suites, it'll be MS Office, StarOffice/OpenOffice.org, and iWork. The others are all niche players.

    For browsers it'll be IE, Firefox, and Safari.

At least this will tend to simplify some things, because the non-Microsoft platforms will be fewer, making supporting them easier. This is a good thing, IMO.

    • by Monthenor ( 42511 ) <monthenor@NOSPam.gogeek.org> on Sunday January 23, 2005 @01:41AM (#11446163) Homepage
      Did you intend this post to clash humorously with your sig? Because it does.
      • Re:3 architectures (Score:4, Interesting)

        by SunFan ( 845761 ) on Sunday January 23, 2005 @01:53AM (#11446210)

        I don't think it does. Microsoft will be around for a while, unfortunately. In my sig, I expect Solaris, Mac OS, and Linux to be the top three of the UNIX side (not necessarily in that order). The BSDs are there for completeness, as they are good systems but are niche players. The main point behind my sig is that all the options listed are either cheaper/freer than Microsoft's options or just flat out better than Microsoft's options (or both). Microsoft really is in a precarious situation, where they have only inertia carrying them at the moment (granted, it's a lot of inertia but it's definitely finite).
A lot of inertia, and agreed, it is finite.

It is finite of course, but it isn't fixed: it increases each time someone creates a Word document, an IE-only webpage, a HW device which works only with Windows, etc..
And it decreases each time someone uses open standards or uses Mac OS X..
    • It's been said before, but mature industries tend towards three of something, such as GM-Ford-Chrysler.

      That example is a little antiquated. So which are the three car makers now?

      It seems to me that the number of players varies for every industry. After all, wasn't "one" the number of major players in the PC OS business?

      -a

    • It's been said before, but mature industries tend towards three of something, such as GM-Ford-Chrysler.

      And what about Toyota, Hyundai and VW? You have a very US-centric view here.

      For CPUs, it has to be AMD64/ia32e, PowerPC, and SPARC. They're the only ones with any high-volume prospects.

      I don't have any links to prove it, but I am fairly certain that in the last few years, more ARM-based CPUs have been sold than those three architectures combined.

      I think you oversimplify things a bit with t
  • So predictable. The "techie" nerd crowd never fails to nod when someone explains to them the pitfalls of Digital Rights Management, software patents, infinite copyrights and so-called "Intellectual Property" in general. And then ... some BigMegaCorp introduces a shiny new string of beads.. and they all kill each other rushing over to say their "Ooohs" and "Aaaahs" while reaching for their wallets to eagerly pay for yet another link in a chain being forged to enslave them and all the future generations to co
  • by i41Overlord ( 829913 ) on Sunday January 23, 2005 @03:28AM (#11446519)
    "If it's as good as this article claims, the Cell chip could eventually take over the PC market."

    And if I had 4 legs, I could outrun a dog.

    But I don't, so I can't. And this chip won't be as good as the (overenthusiastic) article claims. It won't take over the PC market.

    This chip will take over the PC market the same way that BitBoys took over the graphics card market; the same way that Transmeta took over the mobile CPU market; the same way that the Elbrus 2k took over the desktop CPU market. That way is: deliver endless hype that you can't possibly back up. By the time it hits the market, the hype will be so built up that people won't be able to help but to feel let down by the chip. Then they'll lose interest in the product.

    This chip might be fast for the money, and enable them to put 4 cores in a consumer device like the Playstation, but it's not going to outperform (or even match) a CPU like the P4 or Athlon 64.

    When will people learn to stop falling for the same tricks?
  • Cells have another, older ancestor besides the Cray. Jobs's NeXT cubes had an integrated DSP/vector unit. And, lest we forget, Steve Jobs produced the Mach operating system for his NeXT cubes. And Mach is the spiritual godfather of OS X.

    He also sold tens of thousands of these boxes to a government agency whose name is Not Said Aloud. Seems their early APU-like design was very good at some important things.

    Cells are the Next big thing. PS3 will indeed kick ass - real time virtual video - and so will fut

    • And, lest we forget, Steve Jobs produced the Mach operating system for his Next Cubes.

      Wrong. Jobs hired the guy who produced the Mach operating system at Carnegie Mellon, Avie Tevanian [apple.com].

      Tevanian started his professional career at Carnegie Mellon University, where he was a principal designer and engineer of the Mach operating system upon which NEXTSTEP is based.

      Mach is the spiritual godfather of OS X

      Not only that, it's the kernel!

      I'm not sure what this has to do with anything, though. Are MKs especi

  • From TFA:

    while a PS3 sits in the background churning through a SETI@home [SETI] unit every 5 minutes.

    The Cell is designed to fit into everything from PDAs...

    Let's ignore the obviously ridiculous claim that supercomputer-scale computing power is coming to my home in the next year or two and think about power consumption. How is this uber-CPU going to get enough battery power in a PDA?
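    The parent's power objection holds up to a quick sanity check. Here is a back-of-the-envelope sketch in Python; the 85 W per-Cell figure comes from the article's claims elsewhere in this thread, and the battery capacity is an assumed, typical 2005-era PDA battery, not a real spec:

```python
# How long would a typical PDA battery last feeding one 85 W Cell?
# All numbers are illustrative assumptions, not confirmed specs.
battery_voltage = 3.7         # volts, typical single Li-ion cell
battery_capacity_mah = 1200   # mAh, generous for a 2005 PDA
battery_wh = battery_voltage * battery_capacity_mah / 1000.0  # ~4.44 Wh

cell_power_w = 85             # claimed per-Cell dissipation from the article
runtime_minutes = battery_wh / cell_power_w * 60
print(round(runtime_minutes, 1))  # ~3.1 minutes
```

    In other words, at the claimed desktop-class dissipation, a PDA battery would be flat in about three minutes, so any PDA part would have to be a drastically scaled-down variant.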
  • by (outer-limits) ( 309835 ) on Sunday January 23, 2005 @04:30AM (#11446641)
    Which is what this seems to resemble to me. http://vl.fmnet.info/transputer/
  • Why don't they just do a Cray 4 [mscd.edu] on a chip?
  • 4 Cells? (Score:2, Insightful)

    by Jozer99 ( 693146 )
    In the article, they say the PS3 will have 4 Cells, each running at 1.6v with 85W heat dissipation!!! If that is true, they are not only going to need at least a 500W power supply (maybe significantly more), but also to get rid of 340W of heat! How is this going to fit under my TV?
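    Taking the parent's figures at face value, the arithmetic can be sketched like this; the PSU efficiency and the extra load for GPU, drives, and memory are assumptions for illustration, not published specs:

```python
# Back-of-the-envelope check of the parent's numbers (all figures
# taken from the article/comment or assumed, not confirmed PS3 specs).
cells = 4
watts_per_cell = 85           # claimed heat dissipation per Cell
total_heat = cells * watts_per_cell
print(total_heat)             # 340 W of heat to get rid of

# A PSU is not 100% efficient; assume ~70%, plus some headroom
# for GPU, drives, and memory (say another 100 W of load).
psu_efficiency = 0.70
other_load = 100
wall_power = (total_heat + other_load) / psu_efficiency
print(round(wall_power))      # ~629 W at the wall
```

    So under those assumptions the parent's 500W estimate is, if anything, optimistic.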
  • by Glock27 ( 446276 ) on Sunday January 23, 2005 @12:49PM (#11448185)
    This article [semireporter.com] has some interesting and somewhat current information.

    Looks like pilot production should begin soon on a 90 nm. process similar to that used for current Athlon 64s and Opterons. No word in this article on initial clock speeds and power dissipation.

    Anyone have additional info?

    BTW, another article I hadn't seen linked [com.com] claims that Cell will be relatively easy to program...seems that Sony learned from some of its PS2 mistakes. That contradicts a lot of the threads responding to the original article and this dupe.
