A Look Into The Cell Architecture 318
ball-lightning writes "This article attempts to decipher the patent filed by the STI group (IBM, Sony, and Toshiba) on their upcoming Cell technology (most notably going to be used in the PS3). If it's as good as this article claims, the Cell chip could eventually take over the PC market."
Dataflow squared (Score:5, Interesting)
The original PS2 design was for a dataflow architecture - the Cell is a continuation (and significant evolution) of the theme. Interestingly enough, if this *does* take off it may be that the best programmers of tomorrow turn out to be the PS2 low-level guys, who've already written the algorithms that are about to be important.
In the PS2, the MIPS chip was there mainly to do the simple stuff, all the heavy lifting was done on the 2 vector processors, and they were designed to have programs uploaded into them and data streamed through them using a very flexible (chainable) DMA engine. Sounds similar (if in a limited sense) to the Cell chip itself.
Simon.
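The chainable DMA idea can be sketched in plain C. This is a hedged, simplified model (the `dma_tag` struct and `dma_run_chain` are made-up names; real PS2 DMAtags are packed 128-bit quadwords), but it shows the shape of the mechanism: a linked list of transfer descriptors that the engine walks, streaming each buffer in turn.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified model of a chainable DMA descriptor list:
 * each tag names a source buffer and its length, plus the next tag.
 * (Illustrative only; real PS2 DMAtags pack this into 128 bits.) */
struct dma_tag {
    const unsigned char  *src;  /* data to stream */
    size_t                len;  /* bytes in this transfer */
    const struct dma_tag *next; /* next link in the chain, or NULL */
};

/* Walk the chain, appending each transfer to dst; returns bytes moved. */
size_t dma_run_chain(const struct dma_tag *tag, unsigned char *dst)
{
    size_t total = 0;
    for (; tag != NULL; tag = tag->next) {
        memcpy(dst + total, tag->src, tag->len);
        total += tag->len;
    }
    return total;
}
```

The point of chaining is that the CPU builds the descriptor list once and the DMA engine streams everything with no further CPU involvement.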
There are always critical sections (Score:5, Interesting)
There will always be "critical sections", data that can only be used by one thread at a time, which limits how much a program can be split up. Then you have programs which can't be split at all. You can easily split a game, for instance, into sound, video, and keyboard threads, but really utilising parallel processing takes a massive amount of code, which, with current languages, makes a massive increase seem a bit implausible.
It should also be remembered that the G5s and G4s already have AltiVec, and even though this is on a much grander scale, there will always be bottlenecks that slow it down, preventing 99% of commonly used apps from getting a significantly large increase.
They reinvented The Amiga! (Score:5, Interesting)
A measly MIPS with hardware that is autonomous.
The only thing they need is to sync to the TV set.
Re:Transmeta (Score:3, Interesting)
There are a _lot_ of revolutionary ideas behind the Cell processor. As shown in the write-up, the Cell makes a drastic departure from the conventional arithmetic-unit/cache setup. Additionally, the way the Cell can pipeline parallelizable problems amongst the 8 processing units within itself is already a revolution in chip design. Take, for example, the video encoding/decoding example shown in the write-up: where an Intel chip must process each procedure in sequence, the Cell can separate the procedures, pipeline the process, and produce results in a fraction of the time it takes an Intel chip. Since much of our processing power in home electronics goes into video, audio, and 3D visualization (all of which are highly parallelizable), being able to separate tasks onto separate processing units dramatically increases the speed of computation.
Add to that the fact that you can also pipeline processes amongst Cells within one piece of electronics, or spread the problem across a multitude of other home electronics, and the design becomes a very different type of processor from the everyday Intel and AMD. The way you "upgrade" the Cell is also revolutionary, as buying another piece of electronics will increase the processing power of your household.
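A toy illustration of the pipelining idea, with made-up stage names; on real hardware each stage would be resident on its own processing unit with frames streamed between them, but running the stages in sequence in one thread is enough to show the data flow:

```c
#include <assert.h>
#include <stddef.h>

/* A hedged sketch of the pipelining idea: each "processing unit" owns
 * one stage of a decode pipeline. The stage names and their arithmetic
 * are invented purely for illustration; on a Cell each stage would run
 * concurrently on its own unit. */
typedef int (*stage_fn)(int frame);

static int entropy_decode(int frame) { return frame + 1; }
static int inverse_dct(int frame)    { return frame * 2; }
static int motion_comp(int frame)    { return frame + 3; }

/* Push one frame through every stage in order. */
int run_pipeline(const stage_fn *stages, size_t nstages, int frame)
{
    for (size_t i = 0; i < nstages; i++)
        frame = stages[i](frame);
    return frame;
}
```

With one frame in flight per stage, all units stay busy at once, which is where the claimed speedup comes from.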
Re:It's a dupe (Score:3, Interesting)
What I can't help but think (Score:5, Interesting)
This is, of course, all just conjecture.
But when I begin to see people seriously talking about the chip from the Playstation 3 eventually potentially being used in PC hardware, I begin to wonder if it's maybe reasonable conjecture...
3 architectures (Score:5, Interesting)
It's been said before, but mature industries tend towards three of something, such as GM-Ford-Chrysler. For CPUs, it has to be AMD64/ia32e, PowerPC, and SPARC. They're the only ones with any high-volume prospects. SPARC will certainly be in third place, with AMD64/ia32e and PowerPC duking it out for one and two. The fact of the matter is that Itanium won't be a mainstream processor, and PA-RISC, Alpha, and MIPS are all more-or-less EOL.
For operating systems it will still be Windows, Linux, and UNIX (predominately Mac OS and Solaris). Okay, that's four, but the other historical major players are all becoming niche legacy platforms.
For office suites, it'll be MS Office, StarOffice/OpenOffice.org, and iWork. The others are all niche players.
For browsers it'll be IE, Firefox, and Safari.
At least this will tend to simplify some things, because the non-Microsoft platforms will be fewer making supporting them easier. This is a good thing, IMO.
Re:Consider a different approach (Score:3, Interesting)
I *think* the programming model will be sort-of-like CORBA, with 'messages' being sent from a central despatcher (the G5 probably, though it could be another APU). I think the messages will be self-contained program+data though - they've even called them APUlets. The OS then schedules them to be executed on the first available APU.
The message is the data, but the code will be bundled along with it, and when it's finished, it'll send another message back to the despatcher (or 'return' some value, depending on how you view these things). In a traditional messaging system, the code is fixed. In this paradigm you get to change the code as well as the data - could be a nightmare to debug, but the flexibility is staggering.
So, yes, I think messaging systems will be the way this pans out. I wonder if Apple R&D are at this moment chained to a Cell, porting Darwin...
Simon
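The parent's APUlet speculation can be sketched as a message that carries its own code along with its data. Everything here (`struct apulet`, `dispatch`, `square`) is a hypothetical miniature to show the shape of the idea, not anything from the patent:

```c
#include <assert.h>

/* A guess at what an "APUlet" might look like in miniature: a message
 * that bundles its own code (here a plain function pointer) together
 * with its data, and produces a reply value for the despatcher. All
 * names are hypothetical. */
struct apulet {
    int (*code)(int); /* the program bundled with the message */
    int   data;       /* its payload */
};

/* The despatcher runs the bundled code on the bundled data; in a real
 * system this would be scheduled onto the first free APU. */
int dispatch(const struct apulet *m)
{
    return m->code(m->data);
}

static int square(int x) { return x * x; }
```

The debugging nightmare the parent mentions falls out naturally: the "program" being run is whatever arrived in the message, not a fixed handler.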
Re:What I can't help but think (Score:2, Interesting)
Apple -> PowerPC
Cell -> PowerPC
IBM, Sony -> Cell
IBM, Apple -> Linux, BSD (Unix)
Doesn't take a genius to come up with:
IBM->Cell->Apple+Sony
Sony makes the best computers; Sony makes one of the best gaming consoles.
Although I'd rather see Apple join forces with Nintendo since these two companies are more alike than any other (quality over quantity).
Re:3 architectures (Score:4, Interesting)
I don't think it does. Microsoft will be around for a while, unfortunately. In my sig, I expect Solaris, Mac OS, and Linux to be the top three of the UNIX side (not necessarily in that order). The BSDs are there for completeness, as they are good systems but are niche players. The main point behind my sig is that all the options listed are either cheaper/freer than Microsoft's options or just flat out better than Microsoft's options (or both). Microsoft really is in a precarious situation, where they have only inertia carrying them at the moment (granted, it's a lot of inertia but it's definitely finite).
Re:x86 (Score:5, Interesting)
That's ridiculous. x86 is dead. The overheating and power consumption confirm it.
CISC hardware is horrible in mobile devices because of battery life and power consumption. Your camera, iPod, cell phone, and PDA do not use x86 hardware.
All next generation consoles will use RISC hardware. Hence, economies of scale will get the price down.
x86 is dead and mobile devices wrote the eulogy.
No one has mentioned the Transputer (Score:3, Interesting)
Re:Transmeta (Score:3, Interesting)
Pentium-4 was an architectural mistake conceived with the goal of pushing the MHz numbers up (since the mass market appeared to trust MHz over "MHz-equivalent" labels). AMD astonished them by finally making their alternate naming scheme credible and the plan behind the P4 went straight down the crapper.
New x86 development at Intel is largely derivative of the P3 core (the family that includes the P-M) and has largely deprecated the overheating/underperforming P4 core.
Regards,
Ross
Re:Looks like we need to throw all computers out (Score:5, Interesting)
He seemed astonished by the 1024-bit-wide data paths. The POWER family is designed with cache lines of 128 bytes, so, for instance, the G5's L2 cache already fetches 128 bytes for each main memory read.
Similarly, all the talk about doing away with cache and VM is bullshit. Instead of having each vector unit contend for a shared cache as is done today, they've simply added smaller per-ALU caches to the design, and complemented them with a device that is a souped-up cache controller/MMU (the DMAC). The DMAC apparently will be able to address both memory and other hardware through a virtual address layer, enabling references to remote Cell units as well as local physical hardware. The 64 MB of high-speed Rambus memory may be all that is required for a PS3, but in a workstation implementation that memory is L3 cache.
AltiVec currently has 32 vector registers; each ALU has 128. It is highly likely that the core opcode architecture will remain similar. The most likely addition will be a few flow control instructions to the existing mix.
AltiVec is already powerful, but the biggest limiting factor is latency. AltiVec can perform one instruction per clock on the G5; however, the pipeline is 8 stages deep, so the overhead involved in fetching data, loading registers, performing a calculation among 1-3 registers, and getting a result back is prohibitively expensive. But if you can arrange to submit 8 or more calculations in rapid sequence, you can keep AltiVec and the CPU busy and reap great benefits.
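The latency-hiding point can be illustrated even in scalar C rather than AltiVec intrinsics: a single accumulator serializes every add behind the previous one, while several independent accumulators give a deep pipeline independent work each cycle. A minimal sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Four independent accumulator chains stand in for keeping a deep
 * (notionally 8-stage) pipeline fed. The arithmetic result is the same
 * as a single-accumulator loop; only the dependency structure changes,
 * so successive adds need not wait on each other. */
float sum_unrolled(const float *v, size_t n)
{
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) { /* four independent chains */
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < n; i++)           /* leftover elements */
        s0 += v[i];
    return (s0 + s1) + (s2 + s3);
}
```

The same trick, scaled up to 8 or more in-flight vector operations, is what the parent means by submitting calculations "in rapid sequence".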
The beauty of Cell will be in providing the ALUs with a bit more autonomy (though not much more; they are still basically vector units), and enabling the main CPU to keep doing useful work while a number of ALUs are cranking away. Other novel design features provide for communication and synchronization with other units via remote addressing and timing (that's what those realtime clock signals are all about).
This will be very fast, and very cheap. However, all the hand waving, and theorizing this guy does about both hardware and software reads like patent bullshit.
Cell IS POWER (Score:4, Interesting)
Re:x86 (Score:1, Interesting)
"AMD is in a much better position to go against Cell than them. There is a reason why Intel is out of the next gen game".
Do you have ANY idea how many resources Intel has? Not just money either, but production capacity as well.
AMD is an annoying insect to Intel, albeit in recent times one with a bit of a sting.
BTW, I'm an AMD user and supporter.
Re:Consider a different approach (Score:3, Interesting)
Secondly, Darwin will not need porting to the Cell. It will almost certainly run with no modification on the PU. Things like QuickTime, Quartz and CoreVideo/Audio are likely to benefit by having components run on an APU, as might things like the network stack, but this is likely to be done over time rather than all for the initial release.
Re:x86 (Score:4, Interesting)
I've seen this naive opinion just too often to let another utterance of it escape unchallenged.
64-bit does indeed offer more address space, which is an advantage to those needing more now or soon. But it has more important advantages: with a large, mostly empty address space you can encode permissions, types, and other info in the pointers themselves. You can pack or aggregate instructions and data. You can more easily share a single address space, with every participant getting a large portion, or support novel, faster memory layouts by dividing the space into areas with different access permissions while keeping memory access strides reasonable. 32-bit constraints make such techniques less generally useful or excessively constrained, but at 64 bits (and above) they could become much more effective. Think of the ways people are proposing to use IPv6 addresses [though there are a few more orders of magnitude difference there] versus the ways people currently use IPv4 - an increase in address space can be used for more than just more addresses.
It may require some imagination to exploit it well, but it could have a much larger impact than you (and many others) think.
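One concrete instance of the parent's point, as a hedged sketch: with 64-bit pointers there is headroom to pack a small type tag into the otherwise-unused top bits. The 56-bit split below is an assumption that user pointers fit in the low bits, which holds on today's common 64-bit platforms but is not guaranteed by any standard:

```c
#include <assert.h>
#include <stdint.h>

/* Steal the top 8 bits of a 64-bit pointer for a type tag. This
 * assumes user-space pointers fit in the low 56 bits (true on common
 * 64-bit platforms today, but an assumption, not a guarantee); the
 * tag must always be masked off before dereferencing. */
#define TAG_SHIFT 56

static inline uintptr_t tag_ptr(void *p, uint8_t tag)
{
    return (uintptr_t)p | ((uintptr_t)tag << TAG_SHIFT);
}

static inline uint8_t ptr_tag(uintptr_t t)
{
    return (uint8_t)(t >> TAG_SHIFT);
}

static inline void *untag_ptr(uintptr_t t)
{
    return (void *)(t & (((uintptr_t)1 << TAG_SHIFT) - 1));
}
```

Dynamic language runtimes use exactly this to carry type info without a separate word per object, which is one of the "more than just more addresses" wins.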
Re:Dataflow squared (Score:3, Interesting)
The essential quote: I hope for great things, but will believe them when I see 'em.
You shouldn't do most of this. (Score:3, Interesting)
Semiconductor Reporter article... (Score:4, Interesting)
Looks like pilot production should begin soon on a 90 nm process similar to that used for current Athlon 64s and Opterons. No word in this article on initial clock speeds or power dissipation.
Anyone have additional info?
BTW, another article I hadn't seen linked [com.com] claims that Cell will be relatively easy to program...seems that Sony learned from some of its PS2 mistakes. That contradicts a lot of the threads responding to the original article and this dupe.
Yes, it's basically an improved PS2 (Score:3, Interesting)
The PS2 was revolutionary in that it was the first commercially successful non-von Neumann machine. There have been many exotic architectures along these lines, from the Illiac IV to the Transputer to the nCube to the Connection Machine, but they've all been failures in the marketplace. The PS2 sold in volume and made money. That was enough to get people to develop techniques for programming dataflow machines, which aren't fun to program. Working out those problems delayed games for the PS2 by a year or two, but now it's been figured out.
Now that the techniques have been worked out, at least within the game development community, a new generation of the same approach makes sense. Especially for graphics, which parallelizes well. You can keep throwing hardware at graphics until you get to one processor per pixel per triangle, and still get performance improvements.
Note the limitations. Each vector processor has only 128K (not MB) of local memory. This is like DSP programming; you don't have much local storage. There's access to main memory, but it will stall the vector processor, so you can't overdo it. Bashing your problem into chunks that fit that constraint is a major hassle.
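The chunking constraint described above, in miniature: the working set has to be staged through a small local store, so a large buffer gets processed in tiles no bigger than the store. The `dma_in`/`dma_out` helpers and the 256-element tile are stand-ins for real transfers and a real 128K local memory:

```c
#include <assert.h>
#include <stddef.h>

/* TILE plays the role of the 128K local store; the copies below stand
 * in for the DMA transfers a vector processor would issue. */
#define TILE 256

static void dma_in(float *local, const float *main_mem, size_t n)
{
    for (size_t i = 0; i < n; i++) local[i] = main_mem[i];
}

static void dma_out(float *main_mem, const float *local, size_t n)
{
    for (size_t i = 0; i < n; i++) main_mem[i] = local[i];
}

/* Scale every element of a large buffer, one tile at a time: stage a
 * tile in, compute on it locally, stage it back out. */
void scale_tiled(float *buf, size_t n, float k)
{
    float local[TILE];
    for (size_t off = 0; off < n; off += TILE) {
        size_t len = (n - off < TILE) ? n - off : TILE;
        dma_in(local, buf + off, len);
        for (size_t i = 0; i < len; i++)
            local[i] *= k;
        dma_out(buf + off, local, len);
    }
}
```

The hassle the parent describes is exactly this: every algorithm has to be re-expressed as a loop over tiles that fit, with the transfers overlapped against compute to avoid stalls.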
Re:Dataflow squared (Score:1, Interesting)
The (memory) bottleneck you point out only really matters for streamed data. But I'd bet real money that the PS3 design team hopes to see game engines that "emerge" the world through massive iterative calculations on rather compressed primitives of data (NURBS-like "infinitely accurate" geometry, shader-like procedures mixed and mangled, some kind of low-level matter descriptions used for visuals and physics, et cetera). And games do heavy math on AI, physics, 3D sound generation - these aren't necessarily bandwidth-intensive at all.
Computationally games are (or can be) very unlike video editing or other bandwidth heavy stuff. And Cell/PS3 is all about vector calculations, really. The bandwidth they tout is mostly just an inevitable side effect, not an outstanding strong point.
But as somebody pointed out already, whether PS3 is going to be a giant PITA to program for will depend on the tools that Sony (or IBM) offer for it. Then again, with the multi-year lifespan of a console, it doesn't really hurt if some of the power is only harnessed a couple of years after the first wave of game titles at launch.