Octopiler to Ease Use of Cell Processor
Sean0michael writes "Ars Technica is running a piece about The Octopiler from IBM. The Octopiler is supposed to be a compiler designed to handle the Cell processor (the one inside Sony's PS3). From the article: 'Cell's greatest strength is that there's a lot of hardware on that chip. And Cell's greatest weakness is that there's a lot of hardware on that chip. So Cell has immense performance potential, but if you want to make it programmable by mere mortals then you need a compiler that can ingest code written in a high-level language and produce optimized binaries that fit not just a programming model or a microarchitecture, but an entire multiprocessor system.' The article also has several links to some technical information released by IBM."
Makes you wonder (Score:5, Insightful)
Hello, Itanium... (Score:5, Insightful)
Sadly, not a lotta FPU hardware. (Score:5, Insightful)
'Cell's greatest strength is that there's a lot of hardware on that chip. And Cell's greatest weakness is that there's a lot of hardware on that chip.'
Sadly, there's not much FPU hardware to speak of: 32-bit single-precision floats run at full speed in hardware, while 64-bit double-precision floats run at a small fraction of that speed and bring the chip to its knees [wikipedia.org].
Why can't someone invent a chip for math geeks? With 128-bit quad-precision floats in hardware? Are we really that tiny a proportion of the world's population?
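To see why wide hardware floats matter, here is a minimal C sketch (the loop count and constant are illustrative, not from the comment): summing the same series in float and in double shows how fast 32-bit rounding eats a result.

#include <stdio.h>

int main(void)
{
    float  fsum = 0.0f;
    double dsum = 0.0;

    /* Add 0.1 ten million times; the exact answer is 1,000,000. */
    for (int i = 0; i < 10000000; i++) {
        fsum += 0.1f;  /* once fsum is large, each 0.1f rounds to the nearest representable step */
        dsum += 0.1;
    }

    printf("float:  %f\n", fsum);  /* off in the leading digits */
    printf("double: %f\n", dsum);  /* very close to 1,000,000 */
    return 0;
}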
Anyone having flashbacks? (Score:5, Insightful)
All this meant that as the PS2 aged it could 'keep up' because the coders kept getting better and better.
Mere mortals do not write the latest graphics engines. I think there are a lot more tier-1 people running around than /. seems to think. They are just too busy to comment here.
All that really matters is whether the launch titles will be 'good' enough. Then the full power of the system can be unleashed over its lifespan.
If you're a game company and you're faced with the choice of either making just another engine or spending some money on the kind of people who code for supercomputers and getting an engine that will blow the competition out of the water, then it will be a simple choice.
Just because some guy on a website finds it hard doesn't mean nobody can do it.
Re:Hello, Itanium... (Score:3, Insightful)
From TFA:
"I say "intended to become," because judging from the paper the guys at IBM are still in the early stages of taming this many-headed beast. This is by no means meant to disparage all the IBM researchers who have done yeoman's work in their practically single-handed attempts to move the entire field of computer science forward by a quantum leap. No, the Octopiler paper is full of innovative ideas to be fleshed out at a further date, results that are "promising," avenues to be explored, and overarching approaches that seem likely to bear fruit eventually."
Too early to say for sure, of course, but I'd rather take this guy's word for it than study the papers myself. Would I invest/bet money on it? Yes, I would.
compilers ... (Score:5, Insightful)
Far too complex? (Score:2, Insightful)
Your average C programmer doesn't take the architecture into account, so the source gives no indication of whether a variable can be paged to main memory, when code needs to be fetched, and, crucially, how far in advance data can be pre-loaded into the local store to avoid the SPE stalling on a memory operation.
I'd guess that this new compiler will try to address these issues, as the article suggests.
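For the curious, here is roughly what 'pre-loading into the local store' looks like when done by hand on an SPE: a double-buffered DMA loop using the Cell SDK's MFC intrinsics, so the next chunk streams in while the current one is processed. This is a minimal sketch; CHUNK, stream() and process_chunk() are made-up names, and real code would also validate sizes and alignment.

#include <stdint.h>
#include <spu_mfcio.h>  /* Cell SDK MFC (DMA) intrinsics */

#define CHUNK 4096  /* bytes per DMA; must be a multiple of 16, at most 16 KB */

static char buf[2][CHUNK] __attribute__((aligned(128)));

void process_chunk(char *data, unsigned n);  /* hypothetical worker */

void stream(uint64_t ea, unsigned nchunks)
{
    int cur = 0;

    /* Kick off the first transfer; the buffer index doubles as the DMA tag. */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; i++) {
        int next = cur ^ 1;

        /* Start fetching chunk i+1 into the other buffer before it's needed. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);

        /* Block only on the tag of the buffer we're about to consume. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        process_chunk(buf[cur], CHUNK);
        cur = next;
    }
}

Getting a compiler to emit that schedule automatically, from plain single-threaded C, is exactly the hard part.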
Re:Sadly, not a lotta FPU hardware. (Score:3, Insightful)
Re:Hello, Itanium... (Score:4, Insightful)
Fortunately for IBM and Sony, games are one place where hand-optimizing certain algorithms is still practical. I doubt they will place all their eggs in the Octopiler basket. I can't imagine a compiler will find that much parallelism in code that isn't explicitly written to be parallel. Personally, I think they should instead focus on explicitly parallel libraries for common game algorithms like collision detection.
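As a sketch of the 'explicitly parallel' idea (plain pthreads stands in for SPE job dispatch here, and the AABB type, the four-way split and every name are my own illustration, not anything from IBM or the article): the work is divided into independent jobs with private outputs up front, so no compiler has to discover the parallelism.

#include <pthread.h>
#include <stdio.h>

#define NOBJ     1024
#define NTHREADS 4

typedef struct { float min[3], max[3]; } aabb;

static aabb boxes[NOBJ];
static int  hits[NTHREADS];  /* one output slot per worker: nothing shared */

static int overlap(const aabb *a, const aabb *b)
{
    for (int k = 0; k < 3; k++)
        if (a->max[k] < b->min[k] || b->max[k] < a->min[k])
            return 0;
    return 1;
}

/* Each worker tests a disjoint, interleaved slice of the outer loop. */
static void *worker(void *arg)
{
    long id = (long)arg;
    for (int i = (int)id; i < NOBJ; i += NTHREADS)
        for (int j = i + 1; j < NOBJ; j++)
            hits[id] += overlap(&boxes[i], &boxes[j]);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long id = 0; id < NTHREADS; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);

    int total = 0;
    for (int id = 0; id < NTHREADS; id++) {
        pthread_join(t[id], NULL);
        total += hits[id];
    }
    printf("overlapping pairs: %d\n", total);
    return 0;
}

Interleaving the outer loop (i += NTHREADS) rather than handing each worker a contiguous block keeps the triangular workload roughly balanced.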
Re:Far too complex? (Score:3, Insightful)
Re:A summary of the idea here... (Score:5, Insightful)
Parallel programming and automated parallelization were researched exhaustively throughout the last thirty years of the 20th century. The outcome of all that research is that it is not feasible/tractable to build a compiler capable of recognising parallelism in ordinary sequential code, as you suggest. Compilers that can do this are sometimes called 'heroic' compilers, because the required transformations are so incredibly difficult, and heroic compilers that actually work (well) simply don't exist.
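As a one-function illustration of why 'recognising parallelism' in C is so hard (my own example, not from the parent post): the language allows pointers to alias, so even a trivial loop cannot be safely split across processors until someone proves, or asserts, that they don't.

/* The compiler must assume dst may point into src, making iteration i
   depend on iteration i-1, so it cannot run iterations in parallel. */
void scale(float *dst, const float *src, int n, float k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;  /* legal call: scale(a + 1, a, n - 1, k) */
}

/* C99 'restrict' hands the compiler a fact it could not prove itself;
   discovering such facts automatically is what makes a compiler 'heroic'. */
void scale_r(float *restrict dst, const float *restrict src, int n, float k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;  /* iterations are now independent: safe to split */
}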
special compilers, expert programmer = DOA product (Score:3, Insightful)
Also, the division into "expert programmer" and "regular programmer" is silly. Most coding is done by people who aren't experts in the Cell architecture (or any other architecture). That's not because people are too stupid to do this sort of thing; it's because it's not worth the investment.
If Cell can't deliver top-notch performance with a simple compiler back-end and regular programmers who know how to write decent imperative code, then Cell is going to lose. Hardware designers really need to get over the notion that they can push off all the hard stuff into software. People want hardware that works reliably, predictably, and with a minimum of software complexity.
Maybe CISC wasn't such a bad idea after all--you may get less bang for the buck, but at least you get a predictable bang for the buck.
Re:special compilers, expert programmer = DOA prod (Score:4, Insightful)
Re:special compilers, expert programmer = DOA prod (Score:3, Insightful)
Pretty much all modern CPUs need special compilers to give good performance. Unless you keep track of the number of pipeline stages, the superscalar issue width, and so on, you will get sub-optimal code. The P4, for example, can have well over a hundred instructions in flight at once. Can you keep track of your code over a window that size and make sure there are no hazards? If not, then you're probably better off using a 'special' compiler.
The days when a compiler could just turn each statement into a fixed instruction sequence are long gone.
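As a small hand-written example of the kind of transformation such a compiler performs (illustrative code, not from the post): a naive dot product is one long dependency chain, so splitting it into independent accumulators is what keeps a pipelined FPU busy.

/* Naive: every += waits for the previous one, serializing the FP pipeline. */
float dot_naive(const float *a, const float *b, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Unrolled with four independent chains, so four adds can be in flight. */
float dot_pipelined(const float *a, const float *b, int n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)  /* remainder */
        s0 += a[i] * b[i];
    return (s0 + s1) + (s2 + s3);
}

A good compiler does that unrolling and the accompanying instruction scheduling automatically; doing it by hand for every loop on every microarchitecture is what nobody has time for.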
Maybe CISC wasn't such a bad idea after all--you may get less bang for the buck, but at least you get a predictable bang for the buck.
No, actually, you don't. One of the key features of RISC was that instructions took a uniform, predictable time to execute. On a CISC architecture, instruction timings are far from constant. Some instructions (have you looked at the x86 instruction set? It even has string-manipulation instructions) can take several times longer to execute than others, which makes generating good code very difficult. For example, you might know that a load takes n instructions' worth of time to complete when it goes to memory and m when it hits the cache. To prevent pipeline stalls, you need to place a minimum of m independent instructions (and ideally n) between your load and the first operation that depends on that data. Counting out that window is easy when every instruction takes a known, fixed time, and nearly impossible when it doesn't. Try doing that with a fixed-timing instruction set (RISC), and then with a variable-timing instruction set (CISC), and see which is easier.
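To make the load-latency point concrete, here is that scheduling decision written out by hand in C (an illustration, assuming the load of a[0] takes several cycles): the compiler's job is to find independent instructions to fill the load-use gap.

/* Naive order: r uses x immediately, so the pipeline stalls until the
   load completes; the independent multiply arrives too late to help. */
float use_immediately(const float *a, float y, float z)
{
    float x = a[0];      /* load issues */
    float r = x + 1.0f;  /* first use: stalls on a cache or memory miss */
    float w = y * z;     /* independent work, wasted as a gap filler */
    return r + w;
}

/* Scheduled order: the same independent work now hides part of the load
   latency. With fixed (RISC-like) timings the compiler knows how many
   such instructions the gap needs; with variable (CISC-like) timings it
   can only guess. */
float use_scheduled(const float *a, float y, float z)
{
    float x = a[0];      /* load issues */
    float w = y * z;     /* fills the load-use gap */
    float r = x + 1.0f;  /* the load has (hopefully) completed by now */
    return r + w;
}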
Re:Hello, Itanium... (Score:1, Insightful)
That's the key right there. Cell will only run brand-new software, while Itanium is expected to run a bunch of 30-year-old C code originally written for the VAX.
Re:Time to let C die ? (Score:2, Insightful)
C definitely has a niche. I, for one, vote to let C return to it.
Large parts of the kernel, if not the whole kernel, fall into that niche. I'm less convinced about the network stack. Compilers fall quite far outside it. Graph-based or continuous path-finding, artificial intelligence, concurrent programming, interpreters, web servers, web browsers, VoIP applications... all of that is getting further and further away from that niche.
But, please, whatever you do, everyone, stop considering C a general-purpose language. It has been. It is not anymore. It wastes too many precious hours of everyone's life. Which could be better spent trolling on /.
Re:Check out William Kahan at UC-Berkeley. (Score:2, Insightful)
It's like somebody asking why the move from eight-bit colour to sixteen-bit mattered, and me linking to a 16-bit image versus an 8-bit rendition of that same image. Sure, it isn't all that relevant nowadays, but it still helps to explain the problem.