A Glimpse Inside the Cell Processor

XenoPhage writes "Gamasutra has up an article by Jim Turley about the design of the Cell processor, the main processor of the upcoming PlayStation 3. It gives a decent overview of the structure of the Cell processor itself, including the CBE, PPE, and SPE units." From the article: "Remember your first time? Programming a processor, that is. It must have seemed both exciting and challenging. You ain't seen nothing yet. Even garden-variety microprocessors present plenty of challenges to an experienced programmer or development team. Now imagine programming nine different processors all at once, from a single source-code stream, and making them all cooperate. When it works, it works amazingly well. But making it work is the trick."
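For context, the "nine processors" are the Cell Broadband Engine's one PowerPC-based PPE plus its eight SPEs. The sketch below is only a rough analogy in plain C and POSIX threads, not the Cell SDK's actual SPE interface: a control thread plays the PPE and hands fixed slices of an array to eight worker threads standing in for the SPEs. The names (worker_main, NUM_WORKERS, the dummy kernel) are made up for illustration.

/* Rough analogy only: a "PPE" main thread hands fixed slices of an
 * array to eight "SPE" worker threads.  On a real Cell the workers
 * would be SPE contexts with their own local store, fed by DMA,
 * not pthreads sharing memory. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 8          /* one per SPE on the Cell BE */
#define N           (1 << 20)  /* total elements to process  */

static float data[N];

struct slice { int start, count; };

static void *worker_main(void *arg)    /* hypothetical worker entry */
{
    struct slice *s = arg;
    for (int i = s->start; i < s->start + s->count; i++)
        data[i] = data[i] * 2.0f + 1.0f;   /* stand-in "kernel" */
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_WORKERS];
    struct slice job[NUM_WORKERS];
    int per = N / NUM_WORKERS;

    for (int i = 0; i < N; i++)
        data[i] = (float)i;

    /* "PPE" role: carve up the work and start the "SPEs". */
    for (int w = 0; w < NUM_WORKERS; w++) {
        job[w].start = w * per;
        job[w].count = per;
        pthread_create(&tid[w], NULL, worker_main, &job[w]);
    }
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(tid[w], NULL);

    printf("data[123] = %f\n", data[123]);
    return 0;
}

Compile with -pthread; the point is only the shape of the control-core-plus-workers pattern the article describes, not Cell-specific DMA or local-store management.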
Comments Filter:
  • Re:eh (Score:4, Insightful)

    by Zediker ( 885207 ) on Friday July 14, 2006 @02:05PM (#15720481)
    "Console gamers get consoles because they can't deal with installing video card drivers."

    Nope, console gamers buy consoles because they offer games that don't appear on the PC and/or because they don't have the money for a PC gaming rig. $1200+ (I'm talking building from the ground up with reliable, decent parts) just to start putting a decent gaming computer together usually isn't as justifiable as spending $100 (GC), $130 (DS), $150 (PS2/Xbox), $200 (PSP), or $400 (360) on a console of some sort.
  • Except, of course, that ray tracing is not easily parallelizable, as you need to get a significant amount of data to each of those postage-stamp-size pieces

    The mesh is common to all the processors and not that big; it can be broadcast. Textures are the big chunk, but most pieces will only need high-resolution versions of the textures in their direct view... unless a processor is looking at an optically interesting surface (for reflections or refractions), it can get by with mesh-resolution approximations to the textures outside its part of the scene.

    This requires new technology, yes. You need mesh caches shared among not-too-many processors, techniques to broadcast the mesh to those caches efficiently, and a front end to apportion the space to the processors and parcel out textures, maybe even going to a finer subdivision for "interesting" areas. But ray tracing is practically the poster boy for "embarrassingly parallelizable" applications (a minimal tile-partition sketch follows this comment).

    Adding cache and cores is also, to some degree, the solution when you are out of ideas.

    Not to that great a degree, and we've really only scratched the surface of what we'll be doing with multi-core. DEC laid down a long-term plan for the Alpha in the early '90s, and multi-core was planned for the early '00s right from the start. Compaqtion (the Compaq takeover) and Intel pulling a fast one on HP weren't in their plans, but 4 or 8 cores and enough cache to keep them fed is just the next step.

    Another thing we're going to see, particularly for laptops, is super-integrated chipsets. Freescale's e600 would have been the next step for Apple if they'd been faster getting it to market (or if Apple had been less reluctant to break the G4 bus compatibility and they'd gotten started sooner), and it seems to me that adding the GPU in as well makes a lot of sense. Expect to see Intel CPUs with GMAxxx (or their descendants) on-chip, and AMD cutting deals with nVidia and ATI.
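As a concrete illustration of the "postage stamp" partitioning discussed in the comment above, here is a minimal, hypothetical C sketch: the frame is carved into rectangular tiles and each worker thread traces only its own tile's rays. trace_pixel() is a dummy stand-in for a real ray/scene intersection, and on a Cell the workers would be SPE contexts fed by DMA rather than pthreads sharing memory.

/* Sketch of embarrassingly parallel ray tracing by screen tiles.
 * trace_pixel() is a placeholder for a real ray/scene intersection;
 * in the scheme discussed above each worker would also hold the
 * shared mesh plus high-resolution textures only for its own tile. */
#include <pthread.h>
#include <stdio.h>

#define WIDTH   640
#define HEIGHT  480
#define TILES_X 4
#define TILES_Y 2              /* 8 tiles -> 8 workers */

static float image[HEIGHT][WIDTH];

static float trace_pixel(int x, int y)   /* fake "ray tracer" */
{
    return (float)(x ^ y) / 255.0f;      /* dummy shading value */
}

struct tile { int x0, y0, x1, y1; };

static void *render_tile(void *arg)
{
    struct tile *t = arg;
    for (int y = t->y0; y < t->y1; y++)
        for (int x = t->x0; x < t->x1; x++)
            image[y][x] = trace_pixel(x, y);
    return NULL;
}

int main(void)
{
    pthread_t tid[TILES_X * TILES_Y];
    struct tile tiles[TILES_X * TILES_Y];
    int tw = WIDTH / TILES_X, th = HEIGHT / TILES_Y;

    /* Carve the frame into postage-stamp tiles, one worker each.
     * No worker ever touches another worker's pixels, which is why
     * this parallelizes so cleanly. */
    for (int j = 0; j < TILES_Y; j++)
        for (int i = 0; i < TILES_X; i++) {
            struct tile *t = &tiles[j * TILES_X + i];
            t->x0 = i * tw;  t->x1 = t->x0 + tw;
            t->y0 = j * th;  t->y1 = t->y0 + th;
            pthread_create(&tid[j * TILES_X + i], NULL, render_tile, t);
        }
    for (int k = 0; k < TILES_X * TILES_Y; k++)
        pthread_join(tid[k], NULL);

    printf("center pixel = %f\n", image[HEIGHT / 2][WIDTH / 2]);
    return 0;
}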
  • I think the article's point was that as you put more and more transistors on a chip, it becomes very difficult to keep the design from overheating all the time or using insane amounts of power, not to mention it just becomes extremely complex, like today's x86 cores.

    I wasn't talking so much about the article as a whole as about the insane level of hyperbole in the particular paragraph I quoted: "We're capable of putting more transistors on a chip than we can think of things to do with." That's not even vaguely true.

    More transistors mean more power, all else being equal, because it's all those junctions flipping state so quickly that uses the power (the usual first-order relation is spelled out after this comment).

    As for the insanity of Intel's processors... that seems to be a perversion particular to Intel. In the past three decades that I've been following the industry, Intel has only managed to produce *one* sane CPU design, the i960, and they promptly caponised it by removing the MMU and relegating it to embedded controllers lest it outcompete their cash cow.

    The rest... from the 4004 through the 8080, the 8086 and its many descendants, the iAPX 432, the i860, and Itanium... have been consistently outperformed by chips with smaller transistor budgets, built by companies with far fewer resources. They only occasionally broke past the midrange of the RISC chips and were usually trailing back with the anemic SPARC. Where they have excelled is in marketing and in the breadth of their support... both hardware and business. IBM went with the 8088 because they could get it in quantity and they could get good, cheap support chips for it; if you went with Motorola or Zilog or Western Digital or National Semiconductor, you pretty much had to go back to Intel to build the rest of your computer anyway.
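On the "more transistors mean more power" point above, the standard first-order CMOS dynamic-power relation makes the dependence explicit (a rough rule of thumb, ignoring leakage):

P_{\mathrm{dyn}} \approx \alpha \cdot C \cdot V_{dd}^{2} \cdot f

where \alpha is the fraction of nodes switching per cycle, C the total switched capacitance (which grows with transistor count), V_{dd} the supply voltage, and f the clock frequency.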
