Intel

More Itanium-Linux Capability

gregus writes "Cnet is reporting that SGI and Red Hat have released their Itanium compilers and will make them open source." Mentions the Trillian kernel porting effort, and other stuff. Kinda a fluff piece: any piece that explains what a compiler is is probably fluff ;)
  • Yankees selling snake oil? Never happen...

    Jpowers

    ------
    Kibo lives in my town
  • Some niceties cc gets that gcc doesn't:
    • Variables that are assigned a value, but never used in an expression/function call
    • Static functions that aren't referenced by any other function (gcc does do this for variables, however)

    GCC gets those two, at least if you use a sufficiently high level of optimization:

    % gcc --version
    2.7.2.3
    % cat foo.c
    static int
    unused(int arg)
    {
        return arg + 17;
    }

    int
    foo(int bar)
    {
        int a = 17;

        return bar;
    }
    % gcc -c -O2 -Wall foo.c
    foo.c: In function `foo':
    foo.c:10: warning: unused variable `a'
    foo.c: At top level:
    foo.c:3: warning: `unused' defined but not used

    It doesn't catch the other two, though (although "function arguments that aren't referenced" are sometimes desirable, if the function is called through a pointer and some of the functions pointed to do use the argument in question; other times it's an indication of an error).
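
    To make the "sometimes desirable" case concrete, here's a minimal sketch (mine, not from the thread; all the names are made up) of a callback whose argument is deliberately ignored, plus the usual (void) cast trick for keeping a picky compiler quiet:

    /* Both handlers must match the callback's prototype, even though
     * only one of them actually uses its argument. */
    #include <stdio.h>

    typedef void (*handler_t)(int code);

    static void log_handler(int code)
    {
        printf("got event %d\n", code);
    }

    static void quiet_handler(int code)
    {
        (void) code;            /* explicitly "use" it to hush the warning */
    }

    static void dispatch(handler_t h, int code)
    {
        h(code);
    }

    int main(void)
    {
        dispatch(log_handler, 42);
        dispatch(quiet_handler, 42);
        return 0;
    }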

    To gcc's credit, it does do some pretty spiffy control-flow analysis with -O9

    Does -O9 do better than, say, -O2?

    (One problem with said flow analysis is that it sometimes gives "false hits", so one occasionally either has to filter out the noise or stick in an unnecessary initialization.)

    Back to gcc, I'll have to try -W along with -Wall... that really turns up the analness?

    I don't think so. The man page says:

    -W Print extra warning messages for these events:

    [list of events elided]

    ...

    -Wall All of the above `-W' options combined. These are all the options which pertain to usage that we recommend avoiding and that we believe is easy to avoid, even in conjunction with macros.

    so -Wall would appear to include -W.

    (At work, we compile the software for our products with -Wall, some other -W options that insist functions have prototype declarations (to increase the chances that a prototype declaration will be available when the function is called), and -Werror, to ensure that if you violate the rules you don't get an image to download to the box....)
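
    (The exact flag set isn't named above, but -Wstrict-prototypes and -Wmissing-prototypes are the standard gcc options for enforcing prototypes, so a sketch of the idea, with made-up code, looks like this:)

    /* proto.c -- try: gcc -c -Wall -Wstrict-prototypes -Wmissing-prototypes -Werror proto.c
     * -Wmissing-prototypes flags any external function defined without a
     * prior prototype; -Werror turns that warning into a build failure. */

    int square(int x);              /* prototype: keeps -Wmissing-prototypes happy */

    int square(int x)
    {
        return x * x;
    }

    int cube(int x)                 /* no prior prototype: warning, hence no image */
    {
        return x * x * x;
    }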

  • Cool. I just whipped up an entry. On my P-II 400MHz it runs in ~44ms with the supplied test vector, and the binary is ~5K bytes (when I compile it w/ the calls to gettimeofday in it).

    A lot of this depends on what speed machine you have, though. What speed machine are you running on?

    --Joe
    --
    Using -Wall with -On, where n >= 2, seems to enable a very large number of warnings. I actually use the following warning set with the more recent GCCs:

    -Wall -W -Wshadow -Wpointer-arith -Wbad-function-cast -Wcast-qual

    That catches a lot of stuff, including statement-not-reached, unused function arguments, 'foo' used before assignment, and mismatches between printf formats and arguments. There are a few things it doesn't catch that TI's compiler does, though. (And there are a couple of things TI's compiler doesn't catch that GNU C does.)
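
    If you want to see a couple of these fire, here's a throwaway file (mine; the names mean nothing) that trips -Wshadow and the -Wall printf-format check:

    /* shadow.c -- try: gcc -c -O2 -Wall -W -Wshadow shadow.c */
    #include <stdio.h>

    int total;                          /* file-scope variable */

    int sum(const int *v, int n)
    {
        int total = 0;                  /* -Wshadow: shadows the global `total' */
        int i;

        for (i = 0; i < n; i++)
            total += v[i];

        printf("%s\n", n);              /* -Wall: %s handed an int, not a char * */
        return total;
    }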

    (Yes, I used TI's C compiler, since I work for TI. Duh. :-)

    --Joe
    --
  • I'm using a K6/300, and I also tested it out on a 300MHz UltraSparc.

    Calls to gettimeofday? Randomizing, are we? :)

    What are you using for development? Linux / gcc, or something completely different? Oh heck I'll just send you e-mail again. :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • ... was that you can't trademark a number. Other manufacturers were bringing out 80x86s (AMD? Cyrix? Not sure if one or the other or both; I wasn't paying attention at the time).

  • I seem to recall seeing something on the 'egcs' site which said -W and -Wall together enabled a few more warnings than -Wall alone.

    In fact, on this page [gnu.org], about 2/3rds of the way down, it says:

    • -Wall no longer implies -W. The new warning flag, -Wsign-compare, included in -Wall, warns about dangerous comparisons of signed and unsigned values. Only the flag is new; it was previously part of -W.

    So there you have it. (Incidentally, these were the release notes from EGCS 1.1).
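
    In case anyone hasn't met -Wsign-compare yet, here's a minimal made-up example of the "dangerous comparison of signed and unsigned values" it warns about:

    /* signcmp.c -- try: gcc -c -Wall -Wsign-compare signcmp.c */
    #include <stddef.h>

    int find(const char *s, size_t len, char c)
    {
        int i;

        /* `i' is signed, `len' is unsigned, so `i' is converted to
         * unsigned for the comparison; a negative `i' would turn into
         * a huge positive value.  gcc flags this line. */
        for (i = 0; i < len; i++)
            if (s[i] == c)
                return i;
        return -1;
    }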

    --Joe
    --
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Wednesday February 16, 2000 @10:32PM (#1266491)
    I think TM should at least document the instruction set for their chips

    You left an "s" out following "instruction set"; Transmeta's technical white paper on Crusoe [transmeta.com] says on pages 7 and 8 that "the native ISA of the model TM5400 is an enhancement (neither forward nor backward compatible) of the model TM3120's ISA and therefore runs a different version of Code Morphing software."

    As others have noted, publishing the native instruction set architecture may trap them into continuing to provide products that implement that ISA (or writing a binary-to-binary translator (he says, avoiding the "CM" phrase) to map that ISA to the new chip's native ISA), and that appears to be one thing they don't want to do - they want to be able to change the internal instruction set from product to product as they think appropriate.

  • -Wall no longer implies -W.

    Well, that's annoying; any idea what the rationale for not including the -W warnings in the list of "all" (not-too-hard-to-eliminate) warnings was?

    Oh, well, time to tweak the recipe files from which Makefiles are built at work, and to tweak Makefile.am for Ethereal....

  • by Mr Z ( 6791 ) on Wednesday February 16, 2000 @10:50PM (#1266493) Homepage Journal

    The Haifa scheduler and other "interesting" pieces in the backend should really help a lot. From what I recall, Haifa includes a software pipeliner as well as some other block-scheduling pieces which will be very necessary to get parallelism out of this beast.

    One thing I wonder is whether they're actually generating bundles, or if they're just issuing a serial code stream. For the uninitiated, a bundle is Intel's term for a group of instructions that have been marked for parallel execution. An early compiler port that's striving for correctness can sidestep bundling entirely by simply issuing bundles which contain a single instruction each. The peephole optimizer might do trivial pairing of instructions after-the-fact, but you really don't get a lot of parallelism that way, trust me.

    The compiler won't truly shine until the full IA-64 pipeline model, complete with instruction latencies, numbers and mixes of functional units, etc. is described in minute detail to the compiler, and the compiler has the infrastructure for stitching together tightly packed bundles. There are many techniques and optimizations that will need to be implemented in order to stitch those bundles together.

    It'll be even more interesting if the compiler can tune for different EPIC iterations, since different chips will have different numbers of functional units. Although the EPIC encoding is scalable, the best performance will be reached if the code provides parallelism which matches the available hardware, rather than exceeding it, since overly parallel code may tie up more registers than is necessary and will trash the instruction cache if it's unrolled too much.

    I'm willing to wager that this early GNU C port is available now because the IA64 offers a protected pipeline. IMHO, the single biggest difference between EPIC and VLIW is that EPIC provides pipeline interlocks, whereas traditional VLIW exposes all delay slots and requires the programmer to get it right. While the protected pipeline allows early compilers to ramp up quickly, it also lowers the performance ceiling for a given transistor count.

    If anyone here wants to see really hairy VLIW code, go check out TI's C6000 benchmarks page. [ti.com] The C6000 can issue 8 instructions every cycle, and has a fully exposed pipeline. (For those of you crazy enough to click the link, the '||' are used to denote parallel instructions, and branches occur 5 cycles after they're issued.) It's an absolute blast to program by hand (it's my day job), but you don't want to program anything larger than a function in scope. You get a very strong appreciation for compiler technology too. :-) Let me tell you, I've seen some of these "interesting" optimizations coming from the C6000 compiler, and they're pretty mind-bending. I wonder how long they'll take to get these into the IA64 compilers...

    --Joe
    --
  • by Mr Z ( 6791 ) on Wednesday February 16, 2000 @11:19PM (#1266494) Homepage Journal

    EPIC adds an awful lot to the VLIW base. It encodes explicit parallelism, much like VLIW does, but it breaks away from some VLIW principles in order to make it easier to get initial compilers targeted to the platform and easier for Intel to change the pipeline later.

    Traditional VLIW machines sport a "fully exposed pipeline", which means that if an instruction takes more than 1 cycle, the program doesn't see the result until it's actually written back, and the machine lets the user read the old value in the meantime. (For those of you who are familiar with the MIPS or SPARC architectures, you might recognize this concept as "delay slots". VLIW takes this to the extreme such that all delay slots are always fully exposed.) The benefit of this is that you eliminate pipeline interlocks, thereby simplifying the hardware greatly. The pipeline always knows it can issue the next instruction and never has to compare notes between packets. Very clean, and quite simple compared to the heavy voodoo modern CPUs currently perform.

    EPIC, in contrast, offers a protected pipeline. From what I've read, it sounds like it's using a simple scoreboard approach to keep track of in-flight values, so it's not nearly as complex as the many register-renaming approaches that are out there; however, it's still quite a bit more complex than the traditional VLIW approach. The protected pipeline makes it easier for Intel to change the pipeline depth later. VLIW doesn't have that luxury for its native code, since changing the pipeline changes the delay slots and breaks all existing code. (Incidentally, that's probably the real reason Transmeta doesn't want anyone targeting its VLIW engine directly. It can't change the pipeline very much if anyone actually does. It's not the instruction set that matters as much as it is the pipeline!)

    Traditional VLIW also encodes the exact functional unit that each instruction will be issued on. It does this either positionally (by having a slot in the VLIW opcode for each functional unit and using a fixed-length opcode), or, in the case of C6000, by assigning each unit a different portion of the opcode space and stringing together independent instructions through some bundling mechanism. The main point here is that traditional VLIW encodes the mix of functional units in the code stream. This makes it difficult to change the number or mix of functional units, but it can greatly simplify dispatch, as the dispatcher only needs to look at the instruction word -- it doesn't need to know if the functional units are busy or whatever.*

    EPIC, on the other hand, relies on superscalar issue techniques to identify functional units that are available and to issue instructions to them. Again, this costs a lot of hardware, but since the parallelism is encoded for the CPU, the hardest part (determining whether two instructions have a dependency) is taken care of. There still needs to be a fair amount of logic in the pipeline, though, for pulling instructions out of bundles and finding units for them.

    That said, there are many ways in which EPIC and VLIW are the same. EPIC features such as predication, speculative loads, rotating register files, and so on are also available in the VLIW world. (Not all VLIWs implement these, though. The C6000, for instance, only implements predication, but arguably that's the feature with the greatest bang/buck ratio.) Explicitly coded parallelism, of course, is a defining feature of both EPIC and VLIW.

    But please, don't confuse the issue by insisting they are the same. A true VLIW core has very spartan decode and dispatch hardware compared to what will be necessary to fully support an EPIC machine. The VLIW will be much more finicky to support, but as long as you have a compiler of some sort in between your codebase and the core (e.g. the Transmeta Code Morphing software), you're safe.

    --Joe

    [*Actually, it does need to know, if the architecture has some instructions that aren't fully pipelined. However, it only needs to know enough so that it doesn't blow up the chip. Code which issues an instruction to a unit that's busy is incorrect code in the VLIW world, and the hardware won't save you. Period.]


    --
  • Well, that's annoying; any idea what the rationale for not including the -W warnings in the list of "all" (not-too-hard-to-eliminate) warnings was?

    Yes, it is annoying, and no, I have no idea what the rationale was. I wish there were a single flag which said "Sock it to me. Give me every possible error and warning you possibly can. Make lint cower in shame at your glitch-finding prowess." Alas, no single flag seems to do that.

    If it were a single flag, perhaps more people would use it. Unfortunately, a lot of the people who need it most are the ones least likely to use it.

    --Joe
    --
  • GnuPro from Cygnus/RedHat is a version of gcc, which is kept synchronised with the official GNU gcc. So basically this is gcc.

    What interests me is the SGI compiler. I am guessing that it is based on SGI's own compiler technology and not on gcc, but it is GPL, so suddenly we have two big GPL compilers. I wonder whether it can compile the kernel, or whether the kernel is still gcc-only. According to Linus it shouldn't be that difficult to make a kernel port that used a different compiler, since most of the gcc-specific stuff is in the architecture dependent area. But I don't think anyone tried it.

    Anyway, if the IA64 Linux ports use gcc for the kernel and the SGI compiler for everything else, that would be fine, too. So does glibc compile with a non-gcc compiler?


  • Does -O9 do better than, say, -O2?


    Definitely. I'm not sure at which -On it happens, but at -O9 (with -Wall) you get the "variable might be used before initialization" warnings, which are very helpful. (There might be a -Wsomething flag for it, but I never looked into those.)

    Seeing some of the other comments posted here, however, it seems -Wall isn't quite all it's cracked up to be. I'll have to dig through the gcc docs a little. (Not to mention cajole the boys at Cygnus to add a -Whit-me-with-everything-youve-got flag or something ;-)
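
    For reference, this is the classic shape that triggers it (a made-up minimal example; gcc only notices once the optimizer builds the flow graph, which is why plain -Wall without -On stays quiet):

    /* maybeinit.c -- try: gcc -c -O2 -Wall maybeinit.c */
    int pick(int flag)
    {
        int x;                  /* assigned on only one path */

        if (flag)
            x = 17;
        /* when flag == 0 we fall through with x unset:
         * "warning: `x' might be used uninitialized in this function" */
        return x;
    }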
  • And you've always known, right?
  • Cmdr Taco: Kinda a fluff piece: any piece that explains what a compiler is is probably fluff

    CNet: Much of the performance gain that's expected from Itanium will only become a reality if compilers can line up instructions in just the right way so that the chip can operate efficiently. And compilers also are an essential tool for getting higher-level software, such as databases or e-commerce software, to work on the new chip.
  • by pb ( 1020 ) on Wednesday February 16, 2000 @07:47PM (#1266501)
    On a chip this weird, we'll need the compiler. The fact that it's open source is awesome. That's just as cool as if Transmeta made their code-morphing software open source... (just so people understand, these are somewhat similar issues) Actually, maybe Transmeta could work on fast x86 translation for running natively on these platforms. I don't know if it'd be faster or better than the emulation or not.

    CISC was made to make the assembly programmer's life easier. RISC was made to make the hardware manufacturer's life easier. VLIW was made to eke out more speed without resorting to different (increasingly weird) hardware techniques. I don't think it makes anyone's life particularly easy, except perhaps the end user's. But I know it will make compiler writers' lives hell. :)

    My take on it is that by executing instructions in parallel by design, you can avoid the bother of reordering so many instructions on the fly, and trust the compiler to do a good job the first time. Therefore, good compilers will be crucial to the speed improvements on this new platform.
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Intel said it will offer only minimal help to Sun because Sun wasn't doing enough to encourage software companies to use Intel chips instead of Sun's own UltraSparc chips. Sun isn't too popular lately, eh? They're charging Java developers and now this... McNealy doesn't seem to care about public opinion - yet.
  • I think the only thing that need be said is that compilers are essential.

    :-)

  • one: according to my cursory search, this is the ninth c|net story posted to /. in the month of february. clue to everyone reading /. - read www.news.com - it's good, and there are no grits.

    clue to those of you who post this stuff: c|net has a daily email digest. you can just procmail it straight into HTML.

    two: On top of the above, isn't it sort of ironic that Taco makes fun of articles about compilers, when /. is written in perl?

    Maybe not.

    --
    blue, burning karma because he can.
  • by Accipiter ( 8228 ) on Wednesday February 16, 2000 @07:32PM (#1266505)
    Intel said it will offer only minimal help to Sun because Sun wasn't doing enough to encourage software companies to use Intel chips instead of Sun's own UltraSparc chips.

    I guess it's Intel's ball: if they don't want to play, they'll take their ball and go home.

    This can backfire though. Okay, so Intel is doing the same thing Sun did, and most likely will have a similar result. So Sun won't encourage users to run Solaris on Intel chips. (It's not going to have a huge impact on Intel, but it's a factor.)

    -- Give him Head? Be a Beacon?

  • Anyone have any idea when AMD's Sledgehammer is going to start being released (prototypes 'n' such for this type of initial development), and/or whether there is any planned Linux port?

    Mike
  • Fast binaries rock!! GO SGI!!!

    (*dances jig*)

    I'm wondering, especially since they went the whole nine yards to release this under GPL-- will this be a new backend to gcc, or is it a whole new animal?

    While I haven't had cause to complain about gcc's compiled-binary performance, I will go gaga if [SGI's compiler] has the same code-analysis capability as the IRIX cc. Having lint practically built right in is soooooo nice for debugging... if you can build it cleanly with -fullwarn, you can build it anywhere. (IMHO, gcc's greatest fault is that it's too damned lenient. }:->)
  • I was quite surprised to read this article and see absolutely no mention of gcc whatsoever... So what's the deal currently? I know Linus just started putting the ia64 arch directory in the 2.3 kernel. Will it compile with GCC for IA32, or does it need one of these? Frankly, I'm not sure why I'm even concerned with this. I estimate that it will be at least another 3 years before I can get my paws on IA64... First they have to come out, then they have to get cheap, and that'll take a while. Willamette is a possibility, though. Or maybe just a souped-up Athlon in a few months. That'll be nice :)
  • Having never used IRIX's C compiler in-depth, I can't really comment on its capabilities (they sound impressive). But GCC is one of the most correct compilers I've used. Do you have any examples of invalid code that GCC will allow through with "-W -Wall -ansi -pedantic"?

    --
  • I've found that

    gcc -W -Wall -ansi -pedantic

    is pretty good. And, if you want it to be really mean, turn warnings into errors with -Werror.

    I know what you mean, though, with liking to have really clean code. I always compile with the above 4 options.
  • Man, C-Net really broke it down this time, didn't they? I guess the saddest part was this little bit:

    And compilers also are an essential tool for getting higher-level software, such as databases or e-commerce software, to work on the new chip

    Uh.. k. If there is backwards compatibility, what does it matter? It might make them work FASTER, but in essence it does nothing "magical" to them. I think C-Net wanted to throw the magical buzzwords "databases" and "e-commerce" in so they could appeal to the 'average' user. I guess it's true that an article that explains exactly what a compiler is is kinda inconsequential. Intel has a new chip, has a beef with Sun because they didn't worship the black monolith like everyone else, and this is the result: an article with no real technological merit that in essence proves nothing that couldn't have been said in a few sentences. The article doesn't focus so much on the compiler as it does on the beef between Intel and Sun and what that 'Linux' thing is. Yay. Congrats, C-Net, you've pleased the masses again. As for the rest of us, we could live our entire lives, never see this article, and die happily. Sigh.

    Obiwan Kenobi
    =============================

    The three rings of marriage:
    Engagement Ring
    Wedding Ring
    Suffering

  • by possible ( 123857 ) on Wednesday February 16, 2000 @07:57PM (#1266512)
    "For this architecture, you really need a great compiler," said HP's David Mosberger in an interview earlier this month. Mosberger has been working on Linux for Intel's upcoming chip families for two years.

    My understanding is that this new Intel chip will be the first commercially available chip to use EPIC (Explicitly Parallel Instruction Computing).

    From what I've read, the philosophy of EPIC is to have the CPU slavishly execute instructions in the exact order and manner prescribed by the compiler, allowing compilers to do intense optimizations without worrying about being second-guessed by the CPU. To quote from an article in this month's issue [computer.org] of IEEE Computer magazine:

    [EPIC and VLIW code] provides an explicit plan for how the processor will execute the program, a plan the compiler creates statically at compile time. The code explicitly specifies when each operation will be executed, which functional units will do the work, and which registers will hold the operands.
    There is a decent overview of EPIC at http://www.linux3d.net/cpu/CPU/epic/ [linux3d.net].

    What I couldn't determine from my reading was whose standard it is, and to what degree the IA-64 chip will implement it.

  • Itanium? Intel really should hire some new people to come up with the names. Does the chairman have an Italian girlfriend or something? What's the new line-up gonna be? Germanium, Englanium, Spanium?
  • We need to be pointed to articles that explain what a compiler is....NOT!

  • by Signal 11 ( 7608 ) on Wednesday February 16, 2000 @08:07PM (#1266515)
    Itanium, Pentium, Xeon..

    Okay, I'm seeing a pattern developing here.. but why not name the chip what it is? I propose a new chip...

    Marketanium

    Marketanium is a revolutionary new 13th generation Inhell(tm)(r)(c) processor capable of over 30 FudFlops per second. It also has the new MNI (Means Nothing) instruction set and boasts a 1.6 BogoHerz speed....

    Bleh. I wish they'd just name them the way they used to: 8088.. 80286.. 386.. 486.. 586... or at least come up with better names for their chips.. like the Sextium!

  • CISC was made to make the assembler programmer's life easier. RISC was made to make the hardware manufacturer's life easier.


    Must take issue with that: as a low-level coder, I gotta say a good RISC instruction set (e.g. ARM) is a pleasure to use. Okay, you have a limited set of instructions, but you can do anything to anything, and it gives you an excellent degree of control. I've programmed CISC too, and I don't really see them making my life any easier; the 80/20 rule says I hardly ever use those fancy CISC instructions anyway!
  • I must admit I've never programmed for ARM. I like x86, but there are still a lot of instructions I don't use (though a few of them can really come in handy, like xchg). I wouldn't mind if the x86 had more registers, but it still does a pretty good job with what it has, anyhow.

    Also, I find x86 code readable, and looking at a relatively clean RISC design, (based on what I know about RISC processors) Sparc assembler for instance looks pretty nasty. It uses three registers per instruction, so it doesn't have a mov: it just or's with a register that's always zero instead. When it branches, it also executes the instruction after the branch. Also, you constantly end up specifying which 16 bits of a 32-bit number you want to look at, and possibly or-ing the darn thing back together.

    Blah blah blah fixed-length instructions blah blah blah retiring in a single cycle... Maybe I'm just not used to it, or I shouldn't read optimized compiler output in my spare time, but that looks like a kludge to me. And performance seems pretty similar. (From the little benchmarking I've done, my K6/300 offers similar performance to a 300MHz UltraSparc, and my computer is a lot cheaper.)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • I would have to suggest -Wdammit for that, case-insensitive. (-WDAMMIT :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Hey, Taco isn't stupid. That's why he's using Perl, instead of reinventing the wheel. :)

    No, but I think the size of their egos might be directly proportional to the popularity of their website. Think about it...

    Hey, I'm not Bruce Perens too! (he's bp -- I'm pb. :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Actually, all the items you list are caught if you use g++ (i.e. compile as C++, not C) with -W -Wall -pedantic -ansi. And you still get inline functions.

    OK, I'm a C++ "bigot", and I don't understand why anyone would program in C when they have a perfectly good C++ compiler handy. The only times I drop back to plain ol' C is when I'm working on a microcontroller or DSP that has no C++.

    And remember: you don't HAVE to use objects/inheritance/polymorphism/RTTI/exceptions/streams/STL just because you threw the C++ switch. You want to pretend you're writing C? Go for it.

    And to forestall the flames: C++, written by somebody with a clue how to use it and compiled with a good compiler, is every bit as efficient as C. The only times I've seen poor size or speed from C++ were when the person writing it had no clue how to design objects, and rather than sticking with plain C they botched the design. No language, not Java, not Ada, not Pascal, not Modula, can save you from an incompetent programmer.

  • It's great to see the Linux community so actively participating in 64-bit Linux development. Or actually is it the big guys who are crunching the code? :)

    Now that the (0.01 alpha) compiler is out, community hackers can get their hands on IA-64 code, if they manage to acquire an IA-64 chip to do testing on. Consult VA Linux on this one... :)

    If Linux is in the game at full throttle, I'd like to know what's going to happen to the Redmond-based OS. I've never heard Windows mentioned when talking or reading about IA-64 technology and chips, only Linux. We'll (soon(?)) be pushing Microsoft deeper underground if Linux beats the Windows family on the IA-64 field.

    The future's interesting. Just you wait. :)

  • I'm itching to get my hands on this Trillian chick.
    The compiler technology is the most interesting of all, imho. Allow me to draw a long and shaky parallel to Crusoe. Transmeta does runtime code morphing cuz they need to be compatible with the standard. Intel is the standard, and while they don't need to do run-time code morphing into another instruction set, the IA-64 sounds like it uses the silicon-to-software theory Transmeta has been yelling about. Here the software optimization is done in the compiler instead of in a runtime code-morphing layer. The compiler sets up the binary code to be ready to do parallel branch execution, instead of the chip doing speculation as in the current Pentium hacks. Data storage speculation and some other tricks are supposedly facilitated by the compiler. So it seems to me like a lot of silicon could be saved by smart compilers, and that's what IA-64 could accomplish.

    Of course, if delivered as promised, the Transmeta solution is way cooler, and more practical. Otherwise we'd have to wait till somebody gets apache to recompile on this beast.

    personal to Intel: i'd love to be the first one to help.. umm... debug.. your 'open source' software. really i would. all you need to do is send me one of them chips. it's open source, after all, right?

    flip - out.
  • Sun actively advertising Intel chips would be like AMD actively advertising them. Why would Coke put Pepsi advertisements in every case of Coca-Cola?
  • Who's Arkady? It rings a bell with the name "Darrell" but I'm not sure why!
  • How good are these new SGI compilers? I use SGI compilers on Origin and Onyx2 systems and am curious to see whether any of the quality enhancements of those proprietary compilers will make their way into the Itanium compilers....


    -- Moondog
  • I guess this is the kind of publicity Linux needs in order to become more mainstream. Though I keep reading that Linux is destined to be the next OS/2 Warp, I don't get how, with this much support, it could compare to OS/2. Developers and techies have seen Linux as the wave of the future for years now, and now everyone else gets to randomly read about it in the papers or rarely hear about it on the news.

    Linux needs people to be born into it, and with corporate support like SGI, Intel, and even Red Hat, I think we can start to see that. The main reason Windows is so popular is that people are used to it, and the question raised is "why switch if it already works?"

  • (Very Long Instruction Word)

    Only Intel calls it "EPIC", probably because it hypes better and makes it look like a genuine Intel innovation.

    Transmeta's CPUs are VLIW, too.

    But letting the compiler do the guessing (instead of the CPU) is the only reasonable choice indeed.
  • In fact, compilers are an essential tool to get any software to work on any chip :)
  • It's not that gcc lets through incorrect code, just that SGI's compiler is very good at pointing out quirks that could be potential problems. All warning-land stuff, of course.

    Some niceties cc gets that gcc doesn't:

    • Variables that are assigned a value, but never used in an expression/function call
    • Static functions that aren't referenced by any other function (gcc does do this for variables, however)
    • Function arguments that aren't referenced (a bit nitpicky, but it's good to name them foo_unused or whatever)
    • Mixing of integer and enumerated types without (int) casts (annoying sometimes, 'specially when building GTK+ code, but hey, it doesn't hurt; there's a small example of this one at the end of this comment)


    There are others, but alas, I don't have a big hairy codebase under development right now to give more examples :-( Most of the time, it's things that make you say "okay, no big deal, but I really should clean up that bit..."

    To gcc's credit, it does do some pretty spiffy control-flow analysis with -O9 ("variable foo may be used before initialization", "statement is unreachable", etc.). But still, I don't consider a program done until I have an excuse for each and every warning given by SGI's compiler.

    (Back to gcc, I'll have to try -W along with -Wall... that really turns up the analness? I'd use -ansi, except that it takes out nice things like inline functions and printf'ing long long ints)
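
    Since the int/enum item is the least obvious one, here's a tiny made-up example of the sort of mixing I mean; gcc (compiling C) lets it slide silently, and an explicit cast is the cleanup cc asks for:

    /* enummix.c -- int arithmetic quietly flowing back into an enum */
    typedef enum { RED, GREEN, BLUE } color_t;

    color_t next_color(color_t c)
    {
        return (c + 1) % 3;     /* int result returned as color_t: cc nitpicks */
        /* the quiet version: return (color_t) ((c + 1) % 3); */
    }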
  • Three-operand instructions are a blessing. You don't kill a value just because it was an operand to an instruction, which is incredibly useful. Also, it makes "virtual" register lifetimes shorter, which makes it easier to shove your algorithm into the register file.

    The Sparc weirdness with sethi, etc. (that constant generation stuff you alluded to) is a necessary evil to keep the opcodes a fixed length. Believe me, having fixed length opcodes is the way to go there. So you end up needing a sethi followed by an or to generate a 32-bit constant. Big deal. It's codesize neutral, and once you understand the idiom, it's not that confusing.
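
    To make the idiom concrete, here's a little C sketch (mine) of the split the assembler does for you: sethi drops a 22-bit immediate into bits 31..10, and the or supplies the low 10:

    /* sethi_or.c -- how a SPARC 32-bit constant is built from two
     * fixed-length instructions:
     *     sethi %hi(0xdeadbeef), %o0        ! upper 22 bits
     *     or    %o0, %lo(0xdeadbeef), %o0   ! lower 10 bits */
    #include <stdio.h>

    int main(void)
    {
        unsigned int value = 0xdeadbeefU;
        unsigned int hi22  = value >> 10;       /* sethi immediate (%hi) */
        unsigned int lo10  = value & 0x3ffU;    /* or immediate    (%lo) */

        /* sethi writes hi22 into bits 31..10 and zeroes the low 10 bits;
         * or-ing in lo10 completes the constant */
        unsigned int rebuilt = (hi22 << 10) | lo10;

        printf("%#x == %#x\n", value, rebuilt); /* prints 0xdeadbeef twice */
        return 0;
    }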

    As for performance between a 300MHz UltraSparc and the K6/300, I doubt your K6 would come close on double-precision floating point. Server applications with a large working set (memory footprint) will also fare better on the UltraSparc. For integer applications that are bottlenecked on display bandwidth, though, you're not going to see much, if any, difference between the two, and in fact the K6 will feel a lot faster, since it probably has highly accelerated display hardware attached to it. (Workstations always seem to have sluggish displays hooked to them, compared to their PC cousins.) Also, if you're running Solaris on the Sparc, remember that it wasn't tuned for interactive responsiveness as much as it was tuned for batch processing. It'll feel slower, but it's cranking numbers faster.

    At any rate, I'd say that Sparc is probably not a good example of clean RISC architecture these days. I don't think any architecture that claims the RISC title actually is. If you want to understand the purity that RISC originally achieved, go grab a MIPS R2000 book.

    --Joe
    --
  • by Richard Wakefield ( 136917 ) on Wednesday February 16, 2000 @09:03PM (#1266535)
    The README [cygnus.com] for the Linux/ia64 Developer's Release on Cygnus' ftp site (which incidentally is what RedHat's site links to), has some very interesting tidbits:

    The entire GNU toolchain has been extended to support IA-64 (this includes binutils, gcc, and gdb).

    The compiler generates working code, but does not generate optimized code for the Itanium processor yet. It has some basic optimizations, but no "interesting" optimizations yet.

    Binutils is mostly functional, with the exception of shared library support and a few other things.

    Gdb has only partial functionality--basic commands work, but most advanced commands are not working.

  • Thanks for the informative reply.

    I figured sethi probably didn't really take up space, since they have to use it all the time--but it still looks ugly. Not as ugly as having variable length instructions, I guess.

    Yeah, I was talking about your average integer stuff, using gcc on x86 and cc on the Sparc, probably bottlenecked on I/O. (for a programming contest, actually.)

    I wouldn't want to use my K6 for hardcore floating-point stuff, no, then I'd want an Athlon (or if I could afford it, an Alpha, but it shows that AMD is using Alpha technology...). :)

    None of the architectures are really 'pure' anymore; instructions that retire in one cycle are getting more complicated these days (a lot of x86 instructions can do that now, but never on the 8086!), and we're using all kinds of weird optimization tricks, internal micro-ops, etc. But since CISC and RISC are pretty much theoretical anyhow, I'd still argue that CISC makes for simpler assemblers and RISC makes for simpler chips / decoding logic. :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • That's entirely my point.

    Sun makes UltraSparc. Intel is complaining because Sun didn't endorse Intel chips. WHY would Sun advertise for Intel?

    So BECAUSE Sun didn't endorse Intel, Intel won't help them port Solaris to Itanium.

    -- Give him Head? Be a Beacon?

  • [ears perk up] Did you say programming contest? Which one?

    ...and on the RISC vs. CISC debate, there isn't much difference between the two anymore. :-) Hence, VLIW and EPIC.

    --Joe
    --
  • Heh heh heh.

    This one. [msoworld.com] Since I told you about it, send me $50 if you win or something. ;)

    Anyhow, I wrote my entry in C on Linux, it's like under 4k and runs in 80ms. I think the last version I submitted actually works correctly too, which really matters more. But it's a pretty simple problem. I just didn't want to change my simple algorithm any more. :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Transmeta doesn't want to open their source for a good reason. If people knew the instruction set of the chip, many would want to program for it natively. Transmeta doesn't want this because they want to be free to modify the instruction set in the future. As it stands, modifying the instruction set just requires them to replace the code morphing software. However, if they released instruction set info for their chips, anybody who had programmed natively for them would have their code break on the next CPU, rather than coming right along (which is the whole point of code morphing of course).
  • by Anonymous Coward
    It's VLIW, but with a ton of other features thrown in.

    My favorites:
    1) It allows predication of instructions. Suppose you have an "if...else..." branch. If it is sometimes true, sometimes false, then it will really slow down current processors. EPIC allows you to execute both sides (the "if" section and the "else" section) concurrently, and just throw away the results of the part which turns out to be unneeded. (There's a sketch of the idea after this list.) This is a hell of a lot better than conditional moves.

    2) Software pipelining. It will automatically unroll and pipeline "for" and "while" loops. This means that you will not get branch misprediction stalls on these loops, and you should get pretty close to the theoretical limit of performance for the chip.

    3) The ability to move loads and stores around (advancing loads, speculative memory accesses, etc.). This won't mean anything to most people here, but this is a big deal when you are trying to optimize code.

    None of this stuff is part of VLIW. VLIW just means that you issue groups of instructions at the same time to the decoder, to parallel execution units.
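
    Here's a rough C-level sketch (mine, purely illustrative) of what the predication in point 1 buys: compute both arms, keep one, and there's no branch to mispredict. On the real hardware a predicate register guards each instruction rather than this source-level trick, and a compiler may still emit a branch for the ?: below, so treat it as the shape of the idea, not gospel:

    /* predicate.c -- branch-free select, a C-level analogue of
     * predicated execution */
    int abs_branchy(int x)
    {
        if (x < 0)                  /* mispredicts when the sign alternates */
            return -x;
        else
            return x;
    }

    int abs_predicated(int x)
    {
        int neg = -x;               /* "if" arm, always computed   */
        int pos = x;                /* "else" arm, always computed */
        int take_neg = (x < 0);     /* the predicate               */

        return take_neg ? neg : pos;    /* select; discard the other */
    }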

    One other point that keeps getting screwed up on Slashdot: most of the newer compilers do profile-guided optimization. (I know that Intel's compiler and Digital's Fortran compiler both do it.) This is the technique where your executable is instrumented by the compiler, which then recompiles the code based on the run-time statistics.

    Transmeta announced it as though they were the only ones to have thought of something so simple...just like "code morphing."

  • Yeah, guys, I know, I know. But back to the somewhat similar part:

    If Transmeta opened that source, we'd have a look at how to convert our current x86 instructions into VLIW instructions. This would be interesting information for anyone trying to run x86 code on a VLIW chip decently, or anyone writing a C compiler for a VLIW chip looking for optimization tips.

    (if we didn't already have a compiler, one approach would be to take something like egcs which optimizes for x86 very well, and use Transmeta's code as a VLIW back-end, maybe have it do some "profiling" as well.)

    I also realize that all VLIW chips are not created equal; I know nothing about IA64 internals, and I know they're supposed to try emulating x86 stuff anyhow. But this sort of experience would be helpful.

    And I don't want Transmeta to have to give away their product. I'm just pointing out how useful their research would be in a similar endeavor. I'm sure their experience will come in handy for them soon, either on their platforms or someone else's.
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Ask Taco to write even a simple compiler. For that matter, ask him to use Perl to write a full parser. He'll probably run screaming from the room and send Jon Katz in to berate you...

    As time passes, /. operators seem to think that their website's popularity is somehow related to the size of their ... egos. Does this make any sense?
  • Any real leet hacker knows you don't use anything but machine code. Compilers are for wussie programmers!

    I use Brain Gain 4000 to keep ahead of those new gigahertz systems coming out any day now. You just watch it.

    Seriously, though. I know a guy (yes, I know him) who broke into _____'s mainframe and transferred 100,000 (US dollars) into his bank account (I don't know if it was Swiss or not), and got away with it.

    Wanna know how he did it? Machine code. He was trained to code like that. I think he did something at login that, when input and executed, exited the authorization section of the program and dumped him into something else. This was back in the day, like late '70s to mid '80s.

    Stupidly he tried it again and got caught, and then he went to jail.

  • >Any real leet hacker knows you don't use anything but machine code.

    yeah, and IA64 increases the fun factor a lot!

    >I know a guy (yes, I know him) who broke into _____'s mainframe and transfered 100,000 (US dollars) into his bank account

    sounds like a nice buffer overflow...
  • 1) is nice

    2) looks more like the usual compiler optimization (like optimization for the two parallel units in the P5)

    3) hmm... P3 ISSE and AMD 3dNow? :)

    in general, Intel seems to have fucked up their initial Merced design. Wasn't VLIW supposed to allow long pipelines and mucho MHz?
    And now the x86 Athlon is the fastest chip in the world (MHz-wise, at least)...

    Maybe it's just the chip, but it may also be the instruction set.

    profile-guided optimization: I'm waiting for gcc to support this...
